<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>rss.livelink.threads-in-node</title>
    <link>https://techcommunity.microsoft.com/t5/</link>
    <description>Microsoft Community Hub</description>
    <pubDate>Sun, 03 May 2026 13:33:19 GMT</pubDate>
    <dc:creator>Community</dc:creator>
    <dc:date>2026-05-03T13:33:19Z</dc:date>
    <item>
      <title>How MS Discovery Is Empowering Scientists to Do More</title>
      <link>https://techcommunity.microsoft.com/t5/azure-architecture-blog/how-ms-discovery-is-empowering-scientists-to-do-more/ba-p/4516670</link>
      <description>&lt;P data-line="2"&gt;Research and development has traditionally been a slow, sequential, and largely manual endeavour. Scientists formulate hypotheses, design experiments, run computations in constrained environments, and document results, each stage dependent on the last, each transition requiring human review and intervention. Knowledge is fragmented across systems, insights are bottlenecked by individual capacity, and the gap between hypothesis and actionable outcome can span weeks or months.&lt;/P&gt;
&lt;P data-line="4"&gt;For organisations tackling complex scientific and operational challenges, from drug discovery to industrial process optimisation, this pace of iteration is simply no longer acceptable.&lt;/P&gt;
&lt;P data-line="6"&gt;At Microsoft, we recently introduced&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/microsoft-discovery/" target="_blank"&gt;&lt;STRONG&gt;Microsoft Discovery&lt;/STRONG&gt;&lt;/A&gt;, a platform that I believe fundamentally changes this model. Much like Microsoft 365 transformed the way knowledge workers collaborate and create, Microsoft Discovery is designed to simplify and empower the way scientists and researchers work. It provides a unified, end-to-end platform that integrates advanced artificial intelligence, high-performance computing, and knowledge management to support the full scientific reasoning lifecycle: knowledge gathering, hypothesis generation, experiment design, simulation, results analysis, and documentation.&lt;/P&gt;
&lt;P data-line="8"&gt;In this article, I want to share how we used Microsoft Discovery to automate a real-world simulation workflow for a mining organisation and what that experience taught our team about the future of AI-augmented science.&lt;/P&gt;
&lt;P data-line="8"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 data-line="12"&gt;What Is Microsoft Discovery?&lt;/H2&gt;
&lt;P data-line="14"&gt;Microsoft Discovery is Microsoft's scientific AI platform, a solution designed to accelerate research and experimentation across the full innovation lifecycle. Rather than replacing scientific judgement, Discovery is designed to&amp;nbsp;&lt;STRONG&gt;amplify human expertise&lt;/STRONG&gt;, embedding AI assistance at each stage of the R&amp;amp;D process while maintaining governance, traceability, and scientific rigour.&lt;/P&gt;
&lt;H3 data-line="16"&gt;From Traditional R&amp;amp;D to AI-Augmented Science&lt;/H3&gt;
&lt;P data-line="18"&gt;To appreciate what Discovery enables, it is important to understand where it fits in.&lt;/P&gt;
&lt;P data-line="20"&gt;In the&amp;nbsp;&lt;STRONG&gt;traditional R&amp;amp;D model&lt;/STRONG&gt;, knowledge discovery centres on manual literature reviews and historical data analysis. Researchers individually search, read, and synthesise information, a time-intensive process in which discovery is limited by each person's capacity to locate and interpret relevant material. Hypothesis generation and experimental design are expert-led and largely manual. Computational experimentation, where it exists, runs in fixed or constrained environments with limited parallelism. Analysis and iteration follow the same sequential pattern: execute, review, document, repeat.&lt;/P&gt;
&lt;P data-line="22"&gt;&lt;STRONG&gt;Microsoft Discovery changes this fundamentally.&lt;/STRONG&gt;&amp;nbsp;In the AI-cloud-enabled model it provides:&lt;/P&gt;
&lt;UL data-line="24"&gt;
&lt;LI data-line="24"&gt;&lt;STRONG&gt;Knowledge synthesis at scale&lt;/STRONG&gt;&amp;nbsp;— Researchers can explore literature, historical experiments, and organisational knowledge through a single interface, with intelligent indexing surfacing insights faster than manual search could ever achieve.&lt;/LI&gt;
&lt;LI data-line="25"&gt;&lt;STRONG&gt;AI-assisted hypothesis generation&lt;/STRONG&gt;&amp;nbsp;— Collaborative human-and-AI workflows support hypothesis exploration and feasibility assessment, while final decisions remain with the scientist.&lt;/LI&gt;
&lt;LI data-line="26"&gt;&lt;STRONG&gt;Cloud-scale experimentation&lt;/STRONG&gt;&amp;nbsp;— Elastic compute and parallel processing allow simulations and experiments to run at scale, with integrated tracking and reproducibility built in.&lt;/LI&gt;
&lt;LI data-line="27"&gt;&lt;STRONG&gt;Continuous feedback and human-in-the-loop governance&lt;/STRONG&gt;&amp;nbsp;— Results are analysed and compared more rapidly, enabling faster iteration, with AI-generated insights reviewed and validated by researchers before action.&lt;/LI&gt;
&lt;LI data-line="28"&gt;&lt;STRONG&gt;Governed knowledge assets&lt;/STRONG&gt;&amp;nbsp;— Experiment lineage, outcomes, and best practices are captured as reusable, governed assets, supporting long-term organisational learning.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="30"&gt;The net effect is a transition from slow, manual, and fragmented research processes to an&amp;nbsp;&lt;STRONG&gt;agile, automated, and data-driven R&amp;amp;D model&lt;/STRONG&gt; — one that improves research efficiency, increases the return on innovation investment, and enables faster, higher-impact solutions to complex challenges. The following diagram shows, at a high level, the research and development loop we have discussed and how Microsoft Discovery enriches each stage.&lt;/P&gt;
&lt;img /&gt;
&lt;H2 data-line="36"&gt;The Real-World Problem: Screening Thousands of Molecules&lt;/H2&gt;
&lt;P data-line="38"&gt;To bring this to life, let me walk you through a real-world use case we worked on recently. A mining organisation needed to identify the best-performing oxidant compounds for a chemical reaction central to their operations. Here we focus on a single workflow that sits squarely in the&amp;nbsp;&lt;STRONG&gt;simulation&lt;/STRONG&gt;&amp;nbsp;phase of the scientific loop — and it is a perfect example of the kind of work that Microsoft Discovery can transform.&lt;/P&gt;
&lt;H3 data-line="40"&gt;How Scientists Did It Before&lt;/H3&gt;
&lt;P data-line="42"&gt;In the traditional process, scientists would begin by selecting candidate molecules from established molecular libraries based on characteristics identified through literature review. These libraries can contain thousands of molecules, each defined in standard molecular file formats (such as XYZ or CIF files) that describe their three-dimensional atomic structures.&lt;/P&gt;
&lt;P data-line="44"&gt;From there, a researcher would manually work through a multi-step pipeline:&lt;/P&gt;
&lt;OL data-line="46"&gt;
&lt;LI data-line="46"&gt;&lt;STRONG&gt;Pre-processing and preparation&lt;/STRONG&gt;: The selected molecular files are processed and prepared for quantum mechanical (QM) calculations. This involves filtering molecules based on properties like the types of metals present, electron count, and atomic weight — criteria that directly affect both the scientific relevance and the computational cost of the simulations. The output is a set of prepared input files (known as GJF files) ready for simulation.&lt;/LI&gt;
&lt;LI data-line="48"&gt;&lt;STRONG&gt;Running quantum mechanical simulations&lt;/STRONG&gt;: The prepared input files are submitted to a computational chemistry tool (Gaussian 16) to perform Density Functional Theory (DFT) calculations. These simulations compute the electronic structure and energy states of each molecule across different charge and multiplicity configurations. Crucially, each molecule requires multiple independent simulation runs, and the computational cost scales rapidly with molecular complexity. With thousands of candidate molecules, this step alone can involve thousands of individual simulation jobs.&lt;/LI&gt;
&lt;LI data-line="50"&gt;&lt;STRONG&gt;Collecting and post-processing results&lt;/STRONG&gt;: Once all simulations complete, the output log files are collected and processed. For each molecule, the lowest-energy charge and multiplicity combination is identified, and a set of quantum mechanical descriptors and classical molecular descriptors are extracted. These descriptors are then fed into a trained machine learning model to predict the&amp;nbsp;&lt;STRONG&gt;redox potential&lt;/STRONG&gt;&amp;nbsp;of each compound, a key metric that indicates how effectively a molecule can act as an oxidant in the target reaction.&lt;/LI&gt;
&lt;LI data-line="52"&gt;&lt;STRONG&gt;Summarisation and filtering&lt;/STRONG&gt;: Finally, the predicted redox potentials and other relevant characteristics are compiled into a summary, enabling researchers to identify the most promising candidates for further investigation and experimental validation.&lt;/LI&gt;
&lt;/OL&gt;
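&lt;P&gt;The selection at the heart of step 3, picking each molecule's lowest-energy charge and multiplicity configuration, can be sketched in a few lines of Python. This is an illustrative sketch only; the record shape is hypothetical, not Gaussian's actual log format.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from collections import defaultdict

# Hypothetical record shape: one dict per completed simulation run.
runs = [
    {"molecule": "mol-001", "charge": 0, "multiplicity": 1, "energy": -1520.31},
    {"molecule": "mol-001", "charge": 1, "multiplicity": 2, "energy": -1519.87},
    {"molecule": "mol-002", "charge": 0, "multiplicity": 1, "energy": -980.02},
]

def lowest_energy_states(runs):
    """Group runs by molecule, then keep the lowest-energy configuration of each."""
    by_molecule = defaultdict(list)
    for run in runs:
        by_molecule[run["molecule"]].append(run)
    return {mol: min(group, key=lambda r: r["energy"]) for mol, group in by_molecule.items()}

best = lowest_energy_states(runs)
# The descriptors of each winning configuration would then feed the trained ML model.&lt;/LI-CODE&gt;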
&lt;img /&gt;
&lt;P&gt;Every step in this pipeline required manual intervention: writing and adjusting scripts, verifying input and output files, monitoring job queues, handling failures, and stitching results together. A single researcher could easily spend days or weeks moving through this process — and any error at one stage meant going back and re-running subsequent steps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 data-line="60"&gt;How We Automated This with Microsoft Discovery Agents&lt;/H2&gt;
&lt;P data-line="62"&gt;When we looked at this workflow through the lens of Microsoft Discovery, the opportunity was clear. The scientific reasoning (selecting which molecules to test, interpreting redox potential results, deciding what to investigate next) should remain with the researcher. But the&amp;nbsp;&lt;STRONG&gt;operational overhead&lt;/STRONG&gt;&amp;nbsp;of preparing files, submitting simulations, monitoring jobs, collecting results, and assembling summaries? That could be orchestrated by a team of AI agents.&lt;/P&gt;
&lt;H3 data-line="64"&gt;A Team of Agents, Working Together&lt;/H3&gt;
&lt;P data-line="66"&gt;We designed a multi-agent architecture within Microsoft Discovery to automate this simulation workflow end to end. Here is how the team of agents operates:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-line="70"&gt;&lt;STRONG&gt;Router Agent:&lt;/STRONG&gt; The entry point. When a researcher submits a request, for example asking to run QM calculations on a set of candidate molecules, the Router Agent interprets the intent and orchestrates the downstream workflow.&lt;/P&gt;
&lt;P data-line="72"&gt;&lt;STRONG&gt;Planner Agent:&lt;/STRONG&gt;&amp;nbsp;Once the Router Agent identifies the task, the Planner Agent examines the input files provided by the researcher and formulates a step-by-step execution plan. It determines what needs to happen, in what order, and with what parameters, much like a project manager scoping out a piece of work.&lt;/P&gt;
&lt;P data-line="74"&gt;&lt;STRONG&gt;Gaussian Prep Agent:&lt;/STRONG&gt;&amp;nbsp;This agent handles the preparation step. It is intelligent enough to inspect the current molecular files, apply the necessary filtering criteria, and prepare them for simulation, generating the input files that the computational chemistry tool requires. What previously involved manual scripting and file-by-file verification is now handled autonomously. We used Microsoft Discovery tools to do the underlying execution with this agent.&lt;/P&gt;
&lt;P data-line="76"&gt;&lt;STRONG&gt;MPI Gaussian Agent:&lt;/STRONG&gt;&amp;nbsp;This is where the power of cloud-scale computing comes in. The Gaussian Agent submits the prepared simulation jobs and manages their execution using an MPI-based master-worker pattern. This approach enables &lt;STRONG&gt;massive parallel execution&lt;/STRONG&gt;, scaling out across the cloud to run thousands of simulations concurrently rather than sequentially. Given that the candidate molecule libraries can contain thousands of entries, and each molecule may require multiple simulation runs, this parallel execution capability is transformative. What might have taken days in a constrained local environment can now complete in a fraction of the time.&lt;/P&gt;
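&lt;P&gt;The master-worker fan-out described above can be sketched with Python's standard library. This is an illustrative sketch only: the real agent distributes Gaussian jobs over MPI across cloud nodes, and run_simulation below is a placeholder, not the actual worker.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from concurrent.futures import ThreadPoolExecutor

def run_simulation(job):
    # Placeholder for one DFT run; a real worker would invoke Gaussian 16 here.
    return {"molecule": job["molecule"], "charge": job["charge"], "status": "done"}

# Each molecule expands into several independent jobs (one per charge state here),
# so 100 molecules already yield 200 jobs.
jobs = [{"molecule": f"mol-{i:03d}", "charge": c} for i in range(100) for c in (0, 1)]

# The pool plays the master role: workers pull jobs and results come back in order.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_simulation, jobs))&lt;/LI-CODE&gt;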
&lt;P data-line="78"&gt;&lt;STRONG&gt;Redox Potential Agent: &lt;/STRONG&gt;Once the simulations are complete, this agent takes over. It processes the simulation outputs, identifies the optimal charge and multiplicity state for each molecule, extracts the relevant QM and classical descriptors, and runs them through the trained machine learning model to predict redox potentials.&lt;/P&gt;
&lt;P data-line="80"&gt;&lt;STRONG&gt;Summariser Agent:&lt;/STRONG&gt; The final agent in the chain. It maps the predicted redox potentials back to the original molecules, applies any additional filtering criteria, and produces a clean, structured summary: a JSON file that the researcher can immediately use to identify the most promising candidates and take them forward into the next phase of their work.&lt;/P&gt;
&lt;H3 data-line="82"&gt;What the Researcher Experiences&lt;/H3&gt;
&lt;P data-line="84"&gt;From the scientist's perspective, the transformation is striking. Instead of spending days writing scripts, babysitting job queues, and manually stitching results together, they provide their input files and describe what they need. The agents take it from there, planning, preparing, executing, processing, and summarising, and deliver a curated output ready for scientific interpretation.&lt;/P&gt;
&lt;P data-line="86"&gt;The researcher's time is freed to focus on what matters most:&amp;nbsp;&lt;STRONG&gt;thinking critically about the science&lt;/STRONG&gt;. Which molecules look most promising? What does the redox potential distribution tell us? Should we adjust the filtering criteria and run another round? These are the high-value questions that require human expertise, and now scientists can spend their time on exactly that, rather than on operational mechanics.&lt;/P&gt;
&lt;H2 data-line="90"&gt;The Bigger Picture: Accelerating the Entire Scientific Loop&lt;/H2&gt;
&lt;P data-line="92"&gt;It is important to note that this simulation workflow is just&amp;nbsp;&lt;STRONG&gt;one piece of the broader scientific loop&lt;/STRONG&gt;. The full cycle of scientific research, from initial knowledge gathering and literature review, through hypothesis generation, experimental design, simulation, results analysis, and documentation, involves many stages, each of which can benefit from the same kind of AI-augmented approach.&lt;/P&gt;
&lt;P data-line="94"&gt;Microsoft Discovery is designed to support this entire cycle. In our project, we did not stop at simulation. We also explored how agents can accelerate the&amp;nbsp;&lt;STRONG&gt;knowledge gathering&lt;/STRONG&gt;&amp;nbsp;phase, helping researchers navigate vast bodies of literature and surface relevant prior work more efficiently. We looked at how AI can assist with&amp;nbsp;&lt;STRONG&gt;hypothesis generation and evaluation&lt;/STRONG&gt;, helping scientists reason about which directions are most promising before committing to expensive computations. And we examined how agents can support the&amp;nbsp;&lt;STRONG&gt;analysis and reporting&lt;/STRONG&gt; phases: comparing results against hypotheses, generating visualisations, and even assisting with drafting research documents.&lt;/P&gt;
&lt;P data-line="96"&gt;What excites me most about Microsoft Discovery is not any single capability, but the&amp;nbsp;&lt;STRONG&gt;cumulative effect&lt;/STRONG&gt;&amp;nbsp;of embedding AI assistance across every stage of the research process. Each phase that gets faster and more efficient creates a multiplier effect on the phases that follow. When knowledge gathering takes hours instead of weeks, researchers generate better hypotheses sooner. When simulations run at cloud scale in parallel, results arrive faster. When analysis is augmented by AI, iteration cycles tighten. The entire loop accelerates.&lt;/P&gt;
&lt;H2 data-line="100"&gt;Conclusion&lt;/H2&gt;
&lt;P data-line="102"&gt;The way we approach scientific research is undergoing a fundamental shift. Large language models and the AI agents built from them are not replacing scientists; they are &lt;STRONG&gt;empowering them&lt;/STRONG&gt;&amp;nbsp;to work at a pace and scale that was previously unimaginable.&lt;/P&gt;
&lt;P data-line="104"&gt;Microsoft Discovery represents a new operating model for R&amp;amp;D. By combining advanced AI, high-performance cloud computing, and intelligent workflow orchestration, it enables researchers to offload the repetitive, time-consuming operational work to agents and invest their expertise where it has the greatest impact: in asking better questions, interpreting complex results, and pushing the boundaries of what we know.&lt;/P&gt;
&lt;P data-line="106"&gt;In the use case I have shared here, a team of six AI agents automated a simulation pipeline that would have taken a single researcher days of manual work. They prepared molecular input files, scaled out thousands of quantum mechanical simulations in parallel across the cloud, processed the results, predicted redox potentials using machine learning, and delivered a structured summary, all with minimal human intervention.&lt;/P&gt;
&lt;P data-line="108"&gt;This is just the beginning. As AI agents become more capable and the tools surrounding them more mature, the potential to accelerate discovery across every scientific domain is immense. Whether you are in materials science, pharmaceuticals, energy, agriculture, or any field where complex R&amp;amp;D is central to progress, Microsoft Discovery offers a platform to&amp;nbsp;&lt;STRONG&gt;do more, faster, and with greater confidence&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P data-line="110"&gt;The future of science is not about working harder. It is about working smarter with AI as your partner in discovery.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 03 May 2026 12:05:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-architecture-blog/how-ms-discovery-is-empowering-scientists-to-do-more/ba-p/4516670</guid>
      <dc:creator>sameeraman</dc:creator>
      <dc:date>2026-05-03T12:05:56Z</dc:date>
    </item>
    <item>
      <title>My Azure DevOps Git configuration failed to load in Azure Data Factory</title>
      <link>https://techcommunity.microsoft.com/t5/azure-data-factory/my-azure-devops-git-configuration-failed-to-load-in-azure-data/m-p/4516650#M963</link>
      <description>&lt;P&gt;I have an Azure Data Factory that have been running for years and have been hooked up with DevOps Git configuration with no real automation, its just a place to store changes. Im using it for a monthly data transformation, so im not into it on a daily basis.&lt;/P&gt;&lt;P&gt;Suddenly yesterday the Github repository failed to load and i cant figure out why.&lt;/P&gt;&lt;P&gt;I have been into the app registration to see if any secrets have expired, and i have one that expires in 25 days, but that shouldnt be a problem yet.&lt;/P&gt;&lt;P&gt;I can also open my Github project through devops and navigate to the service connections, so i still have access to the repository, i just cant get ADF to load it properly.&lt;/P&gt;&lt;P&gt;Any ideas about what to do from here?&lt;/P&gt;</description>
      <pubDate>Sun, 03 May 2026 04:25:52 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-data-factory/my-azure-devops-git-configuration-failed-to-load-in-azure-data/m-p/4516650#M963</guid>
      <dc:creator>Plaigie</dc:creator>
      <dc:date>2026-05-03T04:25:52Z</dc:date>
    </item>
    <item>
      <title>Windows 12 — Almost Ready for Its Big Reveal?</title>
      <link>https://techcommunity.microsoft.com/t5/windows-insider-program/windows-12-almost-ready-for-its-big-reveal/m-p/4516634#M43653</link>
      <description>&lt;img /&gt;&lt;P&gt;I’ve been testing the new build of the upcoming system for two days now, and it’s clear that Microsoft is gradually addressing several key areas:&lt;/P&gt;&lt;H3&gt;🎧 &lt;STRONG&gt;Audio pipeline&lt;/STRONG&gt;&lt;/H3&gt;&lt;UL&gt;&lt;LI&gt;It’s finally starting to behave much more consistently.&lt;/LI&gt;&lt;LI&gt;It no longer inflates as aggressively as in previous builds (where it often jumped to 8–13 GB).&lt;/LI&gt;&lt;LI&gt;Crackling and robotic artifacts appear less frequently and stabilize much faster.&lt;/LI&gt;&lt;LI&gt;Overall audio quality is cleaner and more natural — you no longer get the feeling that some EQ or background service is distorting the sound.&lt;/LI&gt;&lt;/UL&gt;&lt;H3&gt;⚙️ &lt;STRONG&gt;System optimization&lt;/STRONG&gt;&lt;/H3&gt;&lt;UL&gt;&lt;LI&gt;System responsiveness, transitions, memory management, and in‑game performance have all noticeably improved.&lt;/LI&gt;&lt;LI&gt;The system feels tighter, faster, and free of unnecessary stutters.&lt;/LI&gt;&lt;/UL&gt;&lt;H3&gt;🟢 &lt;STRONG&gt;Build stability&lt;/STRONG&gt;&lt;/H3&gt;&lt;UL&gt;&lt;LI&gt;The build is now in such good shape that it could realistically be released to the public.&lt;/LI&gt;&lt;LI&gt;Long‑session stability (2–3 hours and more) is excellent, with no memory leaks or CPU spikes.&lt;/LI&gt;&lt;/UL&gt;&lt;H3&gt;🎨 &lt;STRONG&gt;New UI&lt;/STRONG&gt;&lt;/H3&gt;&lt;UL&gt;&lt;LI&gt;If you’re wondering about the new UI — it’s still locked behind cloud flags.&lt;/LI&gt;&lt;LI&gt;It doesn’t activate in this build yet, even though the system already feels prepared for it.&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Sun, 03 May 2026 13:16:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/windows-insider-program/windows-12-almost-ready-for-its-big-reveal/m-p/4516634#M43653</guid>
      <dc:creator>kikero_exe</dc:creator>
      <dc:date>2026-05-03T13:16:59Z</dc:date>
    </item>
    <item>
      <title>Modernizing Terraform Pipelines on Azure: OIDC Federation for GitHub Actions and Azure DevOps</title>
      <link>https://techcommunity.microsoft.com/t5/azure-infrastructure-blog/modernizing-terraform-pipelines-on-azure-oidc-federation-for/ba-p/4516620</link>
      <description>&lt;H3&gt;The secret nobody wants to rotate&lt;/H3&gt;
&lt;P&gt;Most Terraform-on-Azure pipelines we see still authenticate the same way they did three years ago. A long-lived ARM_CLIENT_SECRET sitting in GitHub Actions or Azure DevOps, set once, copied around, and rotated only when something breaks.&lt;/P&gt;
&lt;P&gt;It's the most ignored credential in the cloud, and one of the most likely to leak. A developer screenshots a variable group. A pipeline log echoes a value. A fork inherits a secret. Or the secret simply expires on a Friday evening and takes production deployments with it.&lt;/P&gt;
&lt;P&gt;Workload Identity Federation (WIF) makes this whole class of problem go away. The pipeline mints a short-lived token at runtime, exchanges it for an Azure access token via Microsoft Entra, and never touches a secret. GitHub Actions has supported it since 2021. Azure DevOps service connections went GA with WIF in February 2024. The azurerm Terraform provider has supported it since v3.7.&lt;/P&gt;
&lt;P&gt;This post walks through the pattern end-to-end, for both GitHub Actions and Azure DevOps, the way I've rolled it out across multiple customer estates.&lt;/P&gt;
&lt;H3&gt;How the exchange actually works&lt;/H3&gt;
&lt;P&gt;Before any YAML, it helps to picture what's happening:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;The CI system (GitHub or ADO) signs a short-lived JWT describing&amp;nbsp;&lt;EM&gt;exactly&lt;/EM&gt; what's running: which repo, which branch, which environment, which service connection.&lt;/LI&gt;
&lt;LI&gt;The pipeline sends that JWT to Microsoft Entra ID.&lt;/LI&gt;
&lt;LI&gt;Entra checks it against a&amp;nbsp;&lt;STRONG&gt;federated identity credential&lt;/STRONG&gt; you've configured on a managed identity or app registration. The iss, sub, and aud claims must match case-sensitively.&lt;/LI&gt;
&lt;LI&gt;If it matches, Entra returns an Azure access token valid for the duration of the job.&lt;/LI&gt;
&lt;LI&gt;Terraform uses it. The job ends. The token expires. Nothing persists.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;The token is bound to a specific subject like repo:contoso/platform:environment:prod or sc://contoso/platform/azure-prod. It can't be reused from another repo, branch, or pipeline.&lt;/P&gt;
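&lt;P&gt;The matching rule in step 3 is strict, case-sensitive string equality on those claims. Here is a toy sketch of the check; the claim values are illustrative:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;def credential_matches(federated_credential, token_claims):
    # Entra-style check: issuer, subject, and audience must all match exactly.
    return all(federated_credential[k] == token_claims[k] for k in ("issuer", "subject", "audience"))

fc = {
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:contoso/platform:environment:prod",
    "audience": "api://AzureADTokenExchange",
}
assert credential_matches(fc, dict(fc))
# One capitalised character is enough to be rejected (AADSTS70021):
assert not credential_matches(fc, {**fc, "subject": "repo:Contoso/platform:environment:prod"})&lt;/LI-CODE&gt;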
&lt;img /&gt;
&lt;H3&gt;Recommended Architecture&lt;/H3&gt;
&lt;P&gt;A few choices that usually hold up in production:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Decision&lt;/th&gt;&lt;th&gt;Choice&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Identity type&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;User-assigned managed identity (UAMI)&lt;/STRONG&gt;, not app registration&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Identity granularity&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;One UAMI per environment&lt;/STRONG&gt;&amp;nbsp;(not per pipeline)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Trust scope&lt;/td&gt;&lt;td&gt;Pinned to the&amp;nbsp;&lt;STRONG&gt;environment&lt;/STRONG&gt;&amp;nbsp;claim, not the branch&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;RBAC scope&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Resource group&lt;/STRONG&gt;, not subscription&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Remote state&lt;/td&gt;&lt;td&gt;OIDC +&amp;nbsp;use_azuread_auth = true, shared key access disabled&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;Why UAMIs? They live in your subscription, don't need Application Administrator rights to manage, and follow the lifecycle of the resource group they belong to. Why one per environment? Pipeline-per-identity explodes into hundreds of identities. Environment-per-identity maps cleanly to deployment scopes.&lt;/P&gt;
&lt;H3&gt;Part 1 - GitHub Actions&lt;/H3&gt;
&lt;H4&gt;Step 1: Create the identity and federate it&lt;/H4&gt;
&lt;P&gt;Two commands &lt;STRONG&gt;per environment&lt;/STRONG&gt;. That's it.&lt;/P&gt;
&lt;LI-CODE lang="markdown"&gt;az identity create -g rg-platform-identity -n id-tf-prod -l eastus

az identity federated-credential create \
  --name github-prod \
  --identity-name id-tf-prod \
  --resource-group rg-platform-identity \
  --issuer https://token.actions.githubusercontent.com \
  --subject repo:contoso/platform:environment:prod \
  --audiences api://AzureADTokenExchange&lt;/LI-CODE&gt;
&lt;P&gt;Repeat for nonprod. No secret is created anywhere.&lt;/P&gt;
&lt;H4&gt;Step 2: Wire it up in GitHub&lt;/H4&gt;
&lt;P&gt;In repo&amp;nbsp;&lt;STRONG&gt;Settings → Environments&lt;/STRONG&gt;, create&amp;nbsp;nonprod&amp;nbsp;and&amp;nbsp;prod. On&amp;nbsp;prod, add required reviewers and a branch rule restricting deployments to&amp;nbsp;main. Then add three&amp;nbsp;&lt;STRONG&gt;environment variables&lt;/STRONG&gt; (not secrets - these aren't sensitive): AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_SUBSCRIPTION_ID.&lt;/P&gt;
&lt;P&gt;The workflow itself stays small:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;permissions:
  id-token: write
  contents: read

jobs:
  apply:
    runs-on: ubuntu-latest
    environment: prod
    env:
      ARM_USE_OIDC: "true"
      ARM_CLIENT_ID: ${{ vars.AZURE_CLIENT_ID }}
      ARM_TENANT_ID: ${{ vars.AZURE_TENANT_ID }}
      ARM_SUBSCRIPTION_ID: ${{ vars.AZURE_SUBSCRIPTION_ID }}
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init &amp;amp;&amp;amp; terraform apply -auto-approve&lt;/LI-CODE&gt;
&lt;P&gt;Three things make this secure:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;id-token: write&amp;nbsp;is the only elevated permission, and it doesn't grant write access to anything&amp;nbsp;&lt;EM&gt;in GitHub&lt;/EM&gt;; it just lets the runner mint a JWT.&lt;/LI&gt;
&lt;LI&gt;The&amp;nbsp;environment:&amp;nbsp;line picks the right&amp;nbsp;AZURE_CLIENT_ID&amp;nbsp;&lt;EM&gt;and&lt;/EM&gt;&amp;nbsp;drives the&amp;nbsp;sub&amp;nbsp;claim. The federation refuses anything else.&lt;/LI&gt;
&lt;LI&gt;No azure/login step is needed for Terraform. The azurerm provider reads GitHub's OIDC environment variables automatically.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Part 2 - Azure DevOps&lt;/H3&gt;
&lt;P&gt;The model is identical. The mechanics are different.&lt;/P&gt;
&lt;P&gt;ADO offers two creation paths for a WIF service connection:&amp;nbsp;&lt;STRONG&gt;automatic&lt;/STRONG&gt;&amp;nbsp;(it creates an app registration for you) and&amp;nbsp;&lt;STRONG&gt;manual&lt;/STRONG&gt; (you bring your own UAMI). For platform teams, manual + UAMI is almost always the better choice, ensuring the identity lives where governance lives.&lt;/P&gt;
&lt;P&gt;The flow is a small dance between the two portals:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;In Azure DevOps, create a new ARM service connection → choose&amp;nbsp;&lt;STRONG&gt;Workload Identity Federation (manual)&lt;/STRONG&gt;&amp;nbsp;→ fill in your UAMI's client ID, tenant ID, and subscription. Save&amp;nbsp;&lt;STRONG&gt;as draft&lt;/STRONG&gt;. ADO shows you an issuer URL and a subject identifier.&lt;/LI&gt;
&lt;LI&gt;In Azure, on the UAMI, add a federated credential using the values ADO showed you. The subject looks like&amp;nbsp;sc://contoso/platform/azure-prod.&lt;/LI&gt;
&lt;LI&gt;Back in ADO, click&amp;nbsp;&lt;STRONG&gt;Verify and save&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/OL&gt;
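&lt;P&gt;Step 2 mirrors the GitHub setup. A sketch with the Azure CLI, where the issuer and subject come from what ADO displayed in step 1 and the names are illustrative:&lt;/P&gt;
&lt;LI-CODE lang="markdown"&gt;az identity federated-credential create \
  --name ado-prod \
  --identity-name id-tf-prod \
  --resource-group rg-platform-identity \
  --issuer "$ISSUER_SHOWN_BY_ADO" \
  --subject "sc://contoso/platform/azure-prod" \
  --audiences api://AzureADTokenExchange&lt;/LI-CODE&gt;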
&lt;P&gt;In the pipeline, the service connection only "activates" if a task in the job loads it. The simplest way is the AzureCLI@2 task:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;- task: AzureCLI@2
  inputs:
    azureSubscription: azure-prod   # the WIF service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      terraform init &amp;amp;&amp;amp; terraform apply -auto-approve
  env:
    ARM_USE_OIDC: "true"
    ARM_CLIENT_ID: $(AZURE_CLIENT_ID)
    ARM_TENANT_ID: $(AZURE_TENANT_ID)
    ARM_SUBSCRIPTION_ID: $(AZURE_SUBSCRIPTION_ID)
    ARM_ADO_PIPELINE_SERVICE_CONNECTION_ID: $(SERVICE_CONNECTION_ID)
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
    SYSTEM_OIDCREQUESTURI: $(System.OidcRequestUri)&lt;/LI-CODE&gt;
&lt;P&gt;For teams converting dozens of legacy connections, the Azure DevOps team published a&amp;nbsp;&lt;A class="lia-external-url" href="https://devblogs.microsoft.com/devops/workload-identity-federation-for-azure-deployments-is-now-generally-available/" target="_blank" rel="noopener" data-href="https://devblogs.microsoft.com/devops/workload-identity-federation-for-azure-deployments-is-now-generally-available/"&gt;PowerShell helper&lt;/A&gt;&amp;nbsp;that walks every ARM service connection in a project and converts them in place. There's a 7-day rollback window on each connection, which makes the migration genuinely low-risk.&lt;/P&gt;
&lt;img /&gt;
&lt;H3&gt;Don't forget the state file&lt;/H3&gt;
&lt;P&gt;The Terraform state is your real blast radius. With OIDC, it's almost free to lock it down too. The same UAMI can read and write blob data without the storage account key:&lt;/P&gt;
&lt;LI-CODE lang="terraform"&gt;backend "azurerm" {
  resource_group_name  = "rg-tfstate"
  storage_account_name = "sttfstateprodeastus"
  container_name       = "platform-prod"
  key                  = "platform.tfstate"
  use_oidc             = true
  use_azuread_auth     = true
}&lt;/LI-CODE&gt;
&lt;P&gt;Grant the UAMI&amp;nbsp;Storage Blob Data Contributor&amp;nbsp;on the&amp;nbsp;&lt;STRONG&gt;container&lt;/STRONG&gt;&amp;nbsp;(not the account), disable shared key access on the storage account, and you've removed the last secret in the pipeline.&lt;/P&gt;
&lt;H3&gt;RBAC and break-glass&lt;/H3&gt;
&lt;P&gt;Federation removes a credential, not a privilege. A few habits worth keeping:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Scope role assignments to resource groups&lt;/STRONG&gt;, not subscriptions. The whole point of federation is that scoping is now trivially easy.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Use&amp;nbsp;Role Based Access Control Administrator&lt;/STRONG&gt;&amp;nbsp;instead of&amp;nbsp;User Access Administrator&amp;nbsp;if your Terraform creates role assignments. It's a more recent, narrower role.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Have a documented break-glass.&lt;/STRONG&gt; If GitHub or ADO has a token-service incident, you still need a path to ship a hotfix. A single hardware-key-protected emergency app registration in a separate identity boundary works well, audited monthly.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Monitor sign-ins.&lt;/STRONG&gt; Every federated exchange shows up in Entra sign-in logs as a service principal sign-in. Pipe these to Sentinel and alert on anomalies like sign-ins outside expected hours, or from IPs outside GitHub's published ranges.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;The errors you will hit (and what they really mean)&lt;/H3&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Symptom&lt;/th&gt;&lt;th&gt;What it actually is&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;AADSTS70021: No matching federated identity record found&lt;/td&gt;&lt;td&gt;Case-sensitive mismatch in&amp;nbsp;iss,&amp;nbsp;sub, or&amp;nbsp;aud. Almost always a trailing slash or a capitalised character&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;AADSTS700016: Application not found in directory&lt;/td&gt;&lt;td&gt;Wrong client ID or tenant. Not a federation problem&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;403 on a resource even though token exchange worked&lt;/td&gt;&lt;td&gt;Federation is fine. Your RBAC isn't. Check the exact scope&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Unable to determine OIDC token&amp;nbsp;(ADO)&lt;/td&gt;&lt;td&gt;No task in the job loaded the service connection. Add an&amp;nbsp;AzureCLI@2&amp;nbsp;step&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Works on&amp;nbsp;main, fails on tags&lt;/td&gt;&lt;td&gt;You pinned&amp;nbsp;sub&amp;nbsp;to a branch ref. Add a second federated credential for tags, or move to environment-based scoping&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H3&gt;Migrating without a maintenance window&lt;/H3&gt;
&lt;P&gt;You almost never get to do this on a greenfield repo. The order that has worked for me on legacy estates:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Create the new UAMI alongside the old service principal, with the same role assignments.&lt;/LI&gt;
&lt;LI&gt;Federate one canary pipeline. Verify it deploys equivalently.&lt;/LI&gt;
&lt;LI&gt;Cut over pipelines in waves, lowest-risk environment first.&lt;/LI&gt;
&lt;LI&gt;Once a full release cycle passes cleanly, disable the old SP's secret.&lt;/LI&gt;
&lt;LI&gt;Wait another cycle. Then delete the SP entirely.&lt;/LI&gt;
&lt;LI&gt;Add a CI gate that fails any new pipeline introducing ARM_CLIENT_SECRET.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;The old and new auth methods coexist on the same subscription throughout. There's no hard cutover and no maintenance window, just a steady drift toward zero secrets.&lt;/P&gt;
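&lt;P&gt;The CI gate in step 6 can be a few lines of script. A minimal sketch (plain Python; the &lt;EM&gt;*.yml&lt;/EM&gt; glob and the repository layout are assumptions, adapt to yours):&lt;/P&gt;

```python
# Minimal CI gate: flag any pipeline YAML that mentions ARM_CLIENT_SECRET.
# The *.yml glob and search root are assumptions about the repo layout.
import pathlib

def find_offenders(root: str) -> list:
    """Return files under root that mention ARM_CLIENT_SECRET, sorted."""
    offenders = []
    for path in pathlib.Path(root).rglob("*.yml"):
        if "ARM_CLIENT_SECRET" in path.read_text(errors="ignore"):
            offenders.append(str(path))
    return sorted(offenders)

def gate(root: str) -> int:
    """Print offenders and return a nonzero exit code if any exist."""
    hits = find_offenders(root)
    for hit in hits:
        print(f"secret-based auth found in: {hit}")
    return 1 if hits else 0
```

&lt;P&gt;Run it as the first step of every pipeline and fail the job on a nonzero return.&lt;/P&gt;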
&lt;H3&gt;Wrapping up&lt;/H3&gt;
&lt;P&gt;If you do nothing else after reading this, do one thing: search your CI variable groups for ARM_CLIENT_SECRET. Every result is an outage or a breach waiting to happen.&lt;/P&gt;
&lt;P&gt;Federation is one of those rare changes that's both more secure&amp;nbsp;&lt;EM&gt;and&lt;/EM&gt;&amp;nbsp;less work to operate. Once you've set it up, you stop thinking about credential rotation, secret expiry, and quarterly access reviews for service principals. The pipeline simply runs, and the audit trail is in Entra where it belongs.&lt;/P&gt;
&lt;P&gt;That's a good trade.&lt;/P&gt;</description>
      <pubDate>Sat, 02 May 2026 20:34:19 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-infrastructure-blog/modernizing-terraform-pipelines-on-azure-oidc-federation-for/ba-p/4516620</guid>
      <dc:creator>ssinghkalra</dc:creator>
      <dc:date>2026-05-02T20:34:19Z</dc:date>
    </item>
    <item>
      <title>Automating Azure Naming Standards using API and DevOps Pipelines</title>
      <link>https://techcommunity.microsoft.com/t5/azure-infrastructure-blog/automating-azure-naming-standards-using-api-and-devops-pipelines/ba-p/4516628</link>
      <description>&lt;H2&gt;Introduction&lt;/H2&gt;
&lt;P&gt;In large Azure environments, one of the most overlooked yet critical governance challenges is &lt;STRONG&gt;resource naming consistency&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;While organizations define naming standards, enforcing them at scale across multiple subscriptions, teams, and pipelines often becomes a manual and inconsistent process.&lt;/P&gt;
&lt;P&gt;In real-world projects, this leads to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Operational confusion&lt;/LI&gt;
&lt;LI&gt;Difficult resource identification&lt;/LI&gt;
&lt;LI&gt;Reduced traceability&lt;/LI&gt;
&lt;LI&gt;Governance gaps&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;To address this, we implemented an &lt;STRONG&gt;API-driven naming validation approach integrated with Azure DevOps pipelines&lt;/STRONG&gt;, ensuring every resource created follows organizational standards automatically.&lt;/P&gt;
&lt;H2&gt;The Problem: Inconsistent Naming Across Environments&lt;/H2&gt;
&lt;P&gt;In distributed teams and large-scale environments, naming issues commonly arise due to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Multiple developers creating resources independently&lt;/LI&gt;
&lt;LI&gt;Lack of centralized enforcement&lt;/LI&gt;
&lt;LI&gt;Manual validation during deployments&lt;/LI&gt;
&lt;LI&gt;No integration with CI/CD pipeline&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Example (Before Automation)&lt;/H3&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;th&gt;Resource Type&lt;/th&gt;&lt;th&gt;Example Name&lt;/th&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Resource Group&lt;/td&gt;&lt;td&gt;testRG1&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Storage Account&lt;/td&gt;&lt;td&gt;mystorage123&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;VM&lt;/td&gt;&lt;td&gt;vm-prod&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;Problems:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;No standard structure&lt;/LI&gt;
&lt;LI&gt;No environment or region context&lt;/LI&gt;
&lt;LI&gt;Hard to manage at scale&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Goal&lt;/H2&gt;
&lt;P&gt;To ensure:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;✅ Standardized naming across all resources&lt;/LI&gt;
&lt;LI&gt;✅ Automated validation during deployments&lt;/LI&gt;
&lt;LI&gt;✅ No manual intervention required&lt;/LI&gt;
&lt;LI&gt;✅ Seamless integration with DevOps workflows&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Solution Overview&lt;/H2&gt;
&lt;P&gt;We implemented a &lt;STRONG&gt;naming enforcement mechanism using:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure Naming Tool (or similar API-based naming service)&lt;/LI&gt;
&lt;LI&gt;Azure DevOps Pipelines&lt;/LI&gt;
&lt;LI&gt;Managed Identity for secure authentication&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Architecture Flow&lt;/H2&gt;
&lt;P&gt;&lt;EM&gt;Figure: Automated Naming Validation using Naming API, Managed Identity, and DevOps Pipeline&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;🔍 Solution Flow Explained&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Developer Commit&lt;/STRONG&gt;&lt;BR /&gt;The process begins when a developer commits code to the repository, triggering the Azure DevOps pipeline.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure DevOps Pipeline Execution&lt;/STRONG&gt;&lt;BR /&gt;The pipeline runs deployment scripts as part of the CI/CD process.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Managed Identity Authentication&lt;/STRONG&gt;&lt;BR /&gt;The pipeline uses Managed Identity to securely authenticate and obtain an access token—eliminating the need for storing credentials.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Naming API Invocation&lt;/STRONG&gt;&lt;BR /&gt;A request is sent to the Naming API with resource details such as:
&lt;UL&gt;
&lt;LI&gt;Resource type&lt;/LI&gt;
&lt;LI&gt;Environment&lt;/LI&gt;
&lt;LI&gt;Location&lt;/LI&gt;
&lt;LI&gt;Application name&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Validation &amp;amp; Name Generation&lt;/STRONG&gt;&lt;BR /&gt;The Naming API validates inputs and returns a compliant resource name based on predefined standards.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Deployment Decision&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;If validation succeeds → resources are deployed&lt;/LI&gt;
&lt;LI&gt;If validation fails → deployment is blocked&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Resource Deployment&lt;/STRONG&gt;&lt;BR /&gt;Only validated, compliant resources are provisioned in Azure.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;&lt;BR /&gt;The “Azure Naming API” referenced in this blog is an implementation pattern rather than a native Azure service. It can be implemented with solutions such as the Azure Naming Tool, a Resource Name Generator, or a custom-built API that exposes naming logic for DevOps pipelines to consume.&lt;/P&gt;
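&lt;P&gt;To make the pattern concrete, here is a sketch of the logic such an API encapsulates (plain Python; the abbreviation map and the type-app-env-region pattern are illustrative choices, not the Azure Naming Tool's actual configuration):&lt;/P&gt;

```python
# Sketch of the naming logic a naming API encapsulates. The abbreviation
# map and the type-app-env-region pattern are illustrative choices,
# not the Azure Naming Tool's actual configuration.
RESOURCE_ABBREVIATIONS = {
    "resourceGroup": "rg",
    "storageAccount": "st",
    "virtualMachine": "vm",
}

def generate_name(resource_type: str, application: str,
                  environment: str, location: str) -> str:
    """Build a compliant name; raises KeyError for unknown types."""
    prefix = RESOURCE_ABBREVIATIONS[resource_type]
    name = f"{prefix}-{application}-{environment}-{location}"
    # Storage accounts disallow hyphens and cap out at 24 characters.
    if resource_type == "storageAccount":
        name = name.replace("-", "")[:24]
    return name.lower()

print(generate_name("resourceGroup", "app01", "prod", "eastus"))
# rg-app01-prod-eastus
```

&lt;P&gt;Centralizing this logic behind an API, rather than re-implementing it in every pipeline, is what makes the enforcement consistent.&lt;/P&gt;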
&lt;H2&gt;Implementation Details&lt;/H2&gt;
&lt;H4&gt;Authentication using Managed Identity&lt;/H4&gt;
&lt;P&gt;To securely access the Naming API:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Managed Identity is used&lt;/LI&gt;
&lt;LI&gt;No secrets or credentials stored in pipeline&lt;/LI&gt;
&lt;LI&gt;Token retrieved dynamically&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;PowerShell Implementation&lt;/H3&gt;
&lt;P&gt;Below is a simplified version of what was used in the implementation:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;# Get access token using Managed Identity
$token = (Get-AzAccessToken -ResourceUrl "api://NamingTool").Token

# Build the request body as JSON
$body = @{
    resourceType = "resourceGroup"
    environment  = "prod"
    location     = "eastus"
    application  = "app01"
} | ConvertTo-Json

# Call naming API
$response = Invoke-RestMethod `
    -Uri "https://your-namingtool-api-endpoint/api/naming" `
    -Headers @{ Authorization = "Bearer $token" } `
    -Method Post `
    -ContentType "application/json" `
    -Body $body

# Extract generated resource name
$resourceName = $response.name

Write-Output "Generated Name: $resourceName"
&lt;/LI-CODE&gt;
&lt;H2&gt;🔄 Azure DevOps Pipeline Integration&lt;/H2&gt;
&lt;P&gt;Naming validation is integrated directly into the deployment pipeline. A sample pipeline snippet:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;- task: AzureCLI@2
  inputs:
    azureSubscription: 'ServiceConnection'
    scriptType: 'ps'
    scriptLocation: 'inlineScript'
    inlineScript: |
      Write-Output "Calling Naming API"
      .\scripts\Get-ResourceName.ps1&lt;/LI-CODE&gt;
&lt;H3&gt;Key Benefit:&lt;/H3&gt;
&lt;P&gt;👉 Resource names are validated &lt;STRONG&gt;before deployment&lt;/STRONG&gt;, preventing non-compliant resources from being created.&lt;/P&gt;
&lt;H2&gt;Security Considerations&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Use Managed Identity for API authentication&lt;/LI&gt;
&lt;LI&gt;Avoid storing secrets in pipelines&lt;/LI&gt;
&lt;LI&gt;Ensure API access is restricted and secured&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Extending This Solution&lt;/H2&gt;
&lt;P&gt;This approach can be extended to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Enforcing tagging standards&lt;/LI&gt;
&lt;LI&gt;Policy validation before deployment&lt;/LI&gt;
&lt;LI&gt;Subscription vending automation&lt;/LI&gt;
&lt;LI&gt;Cost governance controls&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;✨ Final Thoughts&lt;/H2&gt;
&lt;P&gt;Naming standards are often documented—but rarely enforced effectively.&lt;/P&gt;
&lt;P&gt;By integrating API-based naming validation into DevOps pipelines, organizations can move from:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Guidelines → ✅ Automated Enforcement&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This ensures governance is:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Scalable&lt;/LI&gt;
&lt;LI&gt;Consistent&lt;/LI&gt;
&lt;LI&gt;Developer-friendly&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Sun, 03 May 2026 05:53:03 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-infrastructure-blog/automating-azure-naming-standards-using-api-and-devops-pipelines/ba-p/4516628</guid>
      <dc:creator>sameenamohammed</dc:creator>
      <dc:date>2026-05-03T05:53:03Z</dc:date>
    </item>
    <item>
      <title>Reimagining Azure Governance with Automation &amp; EPAC</title>
      <link>https://techcommunity.microsoft.com/t5/azure-infrastructure-blog/reimagining-azure-governance-with-automation-epac/ba-p/4516626</link>
      <description>&lt;H3&gt;🧩 The Challenge: Governance at Scale&lt;/H3&gt;
&lt;P&gt;Managing Azure environments manually introduces:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;❌ Policy drift across subscriptions&lt;/LI&gt;
&lt;LI&gt;❌ Inconsistent naming conventions&lt;/LI&gt;
&lt;LI&gt;❌ Delays in compliance enforcement&lt;/LI&gt;
&lt;LI&gt;❌ Human errors in deployments&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;🔍 &lt;STRONG&gt;Insight:&lt;/STRONG&gt; Governance gaps are often not due to lack of policies—but lack of automation.&lt;/P&gt;
&lt;H3&gt;Solution Overview: EPAC&lt;/H3&gt;
&lt;P&gt;We adopted &lt;STRONG&gt;Enterprise Policy as Code (EPAC)&lt;/STRONG&gt; to bring governance into the DevOps workflow.&lt;/P&gt;
&lt;P&gt;EPAC helped us:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Treat policies like code&lt;/LI&gt;
&lt;LI&gt;Automate deployments&lt;/LI&gt;
&lt;LI&gt;Standardize governance&lt;/LI&gt;
&lt;LI&gt;Maintain audit history&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Architecture Overview&lt;/H3&gt;
&lt;P&gt;Our EPAC setup included:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Management Groups hierarchy&lt;/STRONG&gt; for governance scope&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Policy Definitions &amp;amp; Initiatives&lt;/STRONG&gt; (JSON/Bicep)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure DevOps Pipeline&lt;/STRONG&gt; for deployment&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Managed Identity&lt;/STRONG&gt; for secure execution&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;EM&gt;Figure: Sample implementation flow of EPAC in an enterprise&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;Flow Explanation&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure DevOps Pipeline&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Triggers CI/CD process&lt;/LI&gt;
&lt;LI&gt;Executes deployment scripts&lt;/LI&gt;
&lt;LI&gt;Authenticates securely using Managed Identity&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;EPAC Framework&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Stores policies as code&lt;/LI&gt;
&lt;LI&gt;Enables version control and validation&lt;/LI&gt;
&lt;LI&gt;Acts as the central governance engine&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Policy Engine&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Evaluates resources against defined policies&lt;/LI&gt;
&lt;LI&gt;Enforces compliance automatically&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Target Environments&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Policies applied across:
&lt;UL&gt;
&lt;LI&gt;Management Groups&lt;/LI&gt;
&lt;LI&gt;Subscriptions&lt;/LI&gt;
&lt;LI&gt;Resource Groups&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;H3&gt;Why Policy-as-Code Matters&lt;/H3&gt;
&lt;P&gt;✅ Standardized governance across environments&lt;BR /&gt;✅ Faster onboarding of subscriptions&lt;BR /&gt;✅ Improved audit readiness&lt;BR /&gt;✅ Repeatable and reliable policy deployment&lt;BR /&gt;✅ Seamless DevOps integration&lt;/P&gt;
&lt;H3&gt;Security &amp;amp; Compliance Benefits&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Enforces &lt;STRONG&gt;least privilege access&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Prevents &lt;STRONG&gt;misconfigured deployments&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Supports &lt;STRONG&gt;continuous compliance&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Aligns with standards like &lt;STRONG&gt;ISO 27001&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Implementation Walkthrough&lt;/H3&gt;
&lt;H4&gt;✅ Step 1: Structuring Policy Repository&lt;/H4&gt;
&lt;P&gt;This structure ensured:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Clear separation of concerns&lt;/LI&gt;
&lt;LI&gt;Reusability&lt;/LI&gt;
&lt;LI&gt;Easy onboarding for new contributors&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;✅ Step 2: Defining Policies&lt;/H4&gt;
&lt;P&gt;We created custom policies and reused built-in ones.&lt;/P&gt;
&lt;P&gt;Example: Restrict allowed regions&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
  "if": {
    "field": "location",
    "notIn": ["eastus", "westeurope"]
  },
  "then": {
    "effect": "deny"
  }
}&lt;/LI-CODE&gt;
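&lt;P&gt;Conceptually, the policy engine evaluates every request against rules like this one. A toy evaluator for just this rule (plain Python, illustrative only; the real engine supports many more operators and effects):&lt;/P&gt;

```python
# Toy evaluator for the region-restriction rule above. It implements
# only the two constructs the example needs ('field' and 'notIn');
# Azure Policy itself supports far more operators and effects.
REGION_RULE = {
    "if": {"field": "location", "notIn": ["eastus", "westeurope"]},
    "then": {"effect": "deny"},
}

def evaluate(rule: dict, resource: dict) -> str:
    """Return the rule's effect if the condition matches, else 'allow'."""
    cond = rule["if"]
    value = resource.get(cond["field"])
    matched = value not in cond["notIn"]  # the 'notIn' operator
    return rule["then"]["effect"] if matched else "allow"

print(evaluate(REGION_RULE, {"location": "eastus"}))        # allow
print(evaluate(REGION_RULE, {"location": "centralindia"}))  # deny
```

&lt;P&gt;The value of policy-as-code is that this evaluation happens on every deployment, in every subscription, with no human in the loop.&lt;/P&gt;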
&lt;H4&gt;✅ Step 3: Creating Initiatives&lt;/H4&gt;
&lt;P&gt;Instead of assigning individual policies, we grouped them into &lt;STRONG&gt;initiatives&lt;/STRONG&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Security baseline&lt;/LI&gt;
&lt;LI&gt;Tagging compliance&lt;/LI&gt;
&lt;LI&gt;Cost optimization&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;👉 This reduced duplication and simplified assignments.&lt;/P&gt;
&lt;H4&gt;✅ Step 4: Pipeline Automation&lt;/H4&gt;
&lt;P&gt;We built an Azure DevOps pipeline to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Validate policy templates&lt;/LI&gt;
&lt;LI&gt;Deploy definitions to management groups&lt;/LI&gt;
&lt;LI&gt;Assign initiatives automatically&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Example pipeline flow: an example pipeline structure for deploying EPAC policies targeted at a single environment.&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;parameters:
  - name: forceDeployment
    displayName: 'Force deployment (ignore change detection)'
    type: boolean
    default: false
  - name: clearAgentCache
    displayName: 'Clear agent container cache (recommended for troubleshooting)'
    type: boolean
    default: true

variables:
  # Pipeline for deploying policies to the Canary/Test environment
  PAC_OUTPUT_FOLDER: ./Output
  PAC_DEFINITIONS_FOLDER: ./Definitions

  # Service connection for the test/canary environment
  serviceConnection: "SC-EPAC-CONTRIBUTOR-TST-001"

  # Environment selector for canary
  pacEnvironmentSelector: canary

# Trigger: deploy only manually or from specific branches
trigger: none

# PR trigger for validation
pr:
  branches:
    include:
    - main
    - feature/*
  paths:
    include:
    - src/IaC/Infrastructure/epac/Definitions/*

pool:
  name: "TST-AgentPool-01"

stages:
  - stage: Plan
    displayName: "Plan Canary Environment"
    jobs:
      - job: Plan
        displayName: "Generate Deployment Plan"
        steps:
          - template: templates/plan.yml
            parameters:
              serviceConnection: $(serviceConnection)
              pacEnvironmentSelector: ${{ variables.pacEnvironmentSelector }}

  - stage: Deploy
    displayName: "Deploy to Canary (Audit Mode)"
    dependsOn: Plan
    condition: and(not(failed()), not(canceled()), or(eq('${{ parameters.forceDeployment }}', 'true'), and(eq('${{ parameters.forceDeployment }}', 'false'), or(eq(dependencies.Plan.outputs['Plan.Plan.deployPolicyChanges'], 'yes'), eq(dependencies.Plan.outputs['Plan.Plan.deployRoleChanges'], 'yes')))))
    variables:
      PAC_INPUT_FOLDER: "$(Pipeline.Workspace)/plans-${{ variables.pacEnvironmentSelector }}"
      localDeployPolicyChanges: $[stageDependencies.Plan.Plan.outputs['Plan.deployPolicyChanges']]
      localDeployRoleChanges: $[stageDependencies.Plan.Plan.outputs['Plan.deployRoleChanges']]
    jobs:
      - deployment: DeployPolicy
        displayName: "Deploy Policy Changes (Audit Mode)"
        environment: PAC-CANARY
        condition: and(not(failed()), not(canceled()), or(eq('${{ parameters.forceDeployment }}', 'true'), and(eq('${{ parameters.forceDeployment }}', 'false'), eq(variables.localDeployPolicyChanges, 'yes'))))
        strategy:
          runOnce:
            deploy:
              steps:
                - template: templates/deploy-policy.yml
                  parameters:
                    serviceConnection: $(serviceConnection)
                    pacEnvironmentSelector: ${{ variables.pacEnvironmentSelector }}
                    forceDeployment: ${{ parameters.forceDeployment }}
                
      - deployment: DeployRoles
        displayName: "Deploy Role Assignments"
        dependsOn: DeployPolicy
        environment: PAC-CANARY
        condition: and(not(failed()), not(canceled()), eq(variables.localDeployRoleChanges, 'yes'))  # Re-enabled for the AMBA managed identity
        strategy:
          runOnce:
            deploy:
              steps:
                - template: templates/deploy-roles.yml
                  parameters:
                    serviceConnection: $(serviceConnection)
                    pacEnvironmentSelector: ${{ variables.pacEnvironmentSelector }}

  # Optional stage for post-deployment validation
  - stage: Validate
    displayName: "Validate Canary Deployment"
    dependsOn: Deploy
    condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
    jobs:
      - job: ValidateCompliance
        displayName: "Validate Policy Compliance"
        steps:
          - task: PowerShell@2
            displayName: "Check Policy Compliance Status"
            inputs:
              targetType: 'inline'
              script: |
                Write-Host "##[section]Canary deployment completed successfully"
                Write-Host "##[warning]Remember: All policies are in AUDIT mode - monitor compliance dashboard"
                Write-Host "##vso[task.complete result=Succeeded;]Canary validation completed"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;✅ Step 5: Secure Deployment using Managed Identity&lt;/H4&gt;
&lt;P&gt;We used:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;System-assigned Managed Identity&lt;/LI&gt;
&lt;LI&gt;RBAC roles (Policy Contributor / Reader)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;✅ Benefit:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;No secrets in pipeline&lt;/LI&gt;
&lt;LI&gt;Improved security posture&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;✅ Step 6: Policy Assignment at Scale&lt;/H4&gt;
&lt;P&gt;Policies were assigned at:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Root Management Group&lt;/LI&gt;
&lt;LI&gt;Subscription level (when needed)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This ensured:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Consistent enforcement&lt;/LI&gt;
&lt;LI&gt;Centralized control&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Real Use Cases Implemented&lt;/H3&gt;
&lt;P&gt;Using EPAC, we solved real scenarios:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;🔹 Enforcing &lt;STRONG&gt;naming conventions&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;🔹 Ensuring mandatory &lt;STRONG&gt;resource tagging&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;🔹 Restricting deployment regions&lt;/LI&gt;
&lt;LI&gt;🔹 Enforcing &lt;STRONG&gt;backup policies on disks&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;🔹 Preventing creation of non-compliant resources&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Best Practices&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Start with baseline policies first&lt;/LI&gt;
&lt;LI&gt;Use initiatives instead of individual assignments&lt;/LI&gt;
&lt;LI&gt;Enable PR-based approvals&lt;/LI&gt;
&lt;LI&gt;Always test policies in lower environments&lt;/LI&gt;
&lt;LI&gt;Maintain clear documentation&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Conclusion&lt;/H3&gt;
&lt;P&gt;Implementing EPAC transformed our governance model from &lt;STRONG&gt;manual and reactive → automated and proactive&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;For teams managing complex Azure environments, EPAC provides:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Scalability&lt;/LI&gt;
&lt;LI&gt;Consistency&lt;/LI&gt;
&lt;LI&gt;Security&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If you are still managing policies manually, now is the right time to 👉 move to &lt;STRONG&gt;Policy as Code&lt;/STRONG&gt;.&lt;/P&gt;
      <pubDate>Sat, 02 May 2026 19:26:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-infrastructure-blog/reimagining-azure-governance-with-automation-epac/ba-p/4516626</guid>
      <dc:creator>sameenamohammed</dc:creator>
      <dc:date>2026-05-02T19:26:11Z</dc:date>
    </item>
    <item>
      <title>Mastering Task View in Windows 11: A Smarter Way to Work and Stay Organized</title>
      <link>https://techcommunity.microsoft.com/t5/windows-11/mastering-task-view-in-windows-11-a-smarter-way-to-work-and-stay/m-p/4516574#M39898</link>
      <description>&lt;P data-slot-rendered-content="true"&gt;If you’ve ever found yourself juggling multiple windows, tabs, and apps on your computer, you’re not alone. Modern workflows demand multitasking, but without the right tools, things can quickly become chaotic. That’s where&amp;nbsp;&lt;STRONG&gt;Task View in Windows 11&lt;/STRONG&gt; comes in a powerful yet often underused feature that can completely transform how you work.&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://dellenny.com/mastering-task-view-in-windows-11-a-smarter-way-to-work-and-stay-organized/" target="_blank"&gt;https://dellenny.com/mastering-task-view-in-windows-11-a-smarter-way-to-work-and-stay-organized/&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 02 May 2026 09:18:17 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/windows-11/mastering-task-view-in-windows-11-a-smarter-way-to-work-and-stay/m-p/4516574#M39898</guid>
      <dc:creator>JohnNaguib</dc:creator>
      <dc:date>2026-05-02T09:18:17Z</dc:date>
    </item>
    <item>
      <title>GitHub Copilot in the Classroom: Help or Hindrance?</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-365-copilot/github-copilot-in-the-classroom-help-or-hindrance/m-p/4516573#M6411</link>
      <description>&lt;P data-slot-rendered-content="true"&gt;The rise of AI in education has sparked a wave of excitement and concern. Among the most talked-about tools is GitHub Copilot, an AI-powered coding assistant that can generate code in real time. For students and educators alike, it raises an important question: is Copilot a helpful learning companion, or does it risk becoming a shortcut that undermines real understanding?&lt;/P&gt;
&lt;P data-slot-rendered-content="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://dellenny.com/github-copilot-in-the-classroom-help-or-hindrance/" target="_blank"&gt;https://dellenny.com/github-copilot-in-the-classroom-help-or-hindrance/&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 02 May 2026 09:13:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-365-copilot/github-copilot-in-the-classroom-help-or-hindrance/m-p/4516573#M6411</guid>
      <dc:creator>JohnNaguib</dc:creator>
      <dc:date>2026-05-02T09:13:37Z</dc:date>
    </item>
    <item>
      <title>TrustedMissingIdentityClaimSource - OIDC</title>
      <link>https://techcommunity.microsoft.com/t5/sharepoint/trustedmissingidentityclaimsource-oidc/m-p/4516560#M88845</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;Good day!&lt;BR /&gt;I'm setting up OIDC connection thru SharePoint subscription edition referring to the below link&lt;BR /&gt;https://learn.microsoft.com/en-us/SharePoint/security-for-sharepoint-server/set-up-oidc-auth-in-sharepoint-server-with-msaad&lt;/P&gt;&lt;P&gt;I was able to get in thru Entra (that means OIDC connection works) but sharepoint return me this exception when getting in to site collection.&lt;BR /&gt;&lt;BR /&gt;_layouts/15/_login/default.aspx?errorCode=TrustedMissingIdentityClaimSource=https%3A%2F%2FSPdemo.local%2F_layouts%2F15%2FAuthenticate.aspx%3FSource%3D%252F&lt;BR /&gt;&lt;BR /&gt;Workaround suggested done like reconfigured SPTrust, certificates and SharePoint Web application multiple times but yet still no avail. Unfortunately, this have been greyed out for me as i cannot find a concrete resolution.&lt;BR /&gt;&lt;BR /&gt;Is anyone have experience the same exception, or perhaps share thoughts what are missing here.&lt;BR /&gt;&lt;BR /&gt;Thank you!&lt;/P&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 02 May 2026 06:18:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/sharepoint/trustedmissingidentityclaimsource-oidc/m-p/4516560#M88845</guid>
      <dc:creator>hunk0227</dc:creator>
      <dc:date>2026-05-02T06:18:56Z</dc:date>
    </item>
    <item>
      <title>Talk to Your Data: Fabric Data Agents for D365, Power Platform &amp; Beyond</title>
      <link>https://techcommunity.microsoft.com/t5/events/talk-to-your-data-fabric-data-agents-for-d365-power-platform/ec-p/4516558#M48</link>
      <description>&lt;P&gt;📢 &lt;STRONG&gt;Upcoming Session at The Fabric Café&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;🗓 &lt;STRONG&gt;May 13th at 6:00 PM PT&lt;/STRONG&gt;&lt;BR /&gt;🎙 &lt;STRONG&gt;Speakers:&lt;/STRONG&gt; Gaston Cruz &amp;amp; Alex Rostan&lt;BR /&gt;🎤 &lt;STRONG&gt;Host:&lt;/STRONG&gt; Mehrdad Abdollahi&lt;BR /&gt;📌 &lt;STRONG&gt;Title:&lt;/STRONG&gt; &lt;EM&gt;Talk to Your Data: Fabric Data Agents for D365, Power Platform &amp;amp; Beyond&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;What if your business users could simply &lt;STRONG&gt;ask questions and get answers directly from enterprise data?&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;In this session, you’ll learn how to connect &lt;STRONG&gt;Dynamics 365 and Power Platform&lt;/STRONG&gt; data into &lt;STRONG&gt;Microsoft Fabric&lt;/STRONG&gt;, and build &lt;STRONG&gt;Fabric Data Agents&lt;/STRONG&gt; that enable natural language interaction with your data.&lt;/P&gt;&lt;P&gt;🚀 &lt;STRONG&gt;What you’ll learn:&lt;/STRONG&gt;&lt;BR /&gt;✔ Ingest and model D365 &amp;amp; Power Platform data into a Lakehouse &amp;amp; Semantic Model&lt;BR /&gt;✔ Build a Fabric Data Agent that understands business context&lt;BR /&gt;✔ Expose your agent via &lt;STRONG&gt;Copilot Studio&lt;/STRONG&gt; and embed it in &lt;STRONG&gt;Microsoft Teams&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;👥 &lt;STRONG&gt;Who should attend?&lt;/STRONG&gt;&lt;BR /&gt;Data architects, Power Platform makers, D365 owners, and business leaders driving AI-powered transformation.&lt;/P&gt;&lt;P&gt;💡 &lt;STRONG&gt;Tip:&lt;/STRONG&gt; Bring a key KPI or report your team uses — and learn how to turn it into a conversation-ready data experience!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;#TheFabricCafe #MicrosoftFabric #Microsoft #MicrosoftLearn #MicrosoftFabricCommunity #AI #PowerPlatform #Dynamics365 #DataEngineering #DataAnalytics #Copilot #DataCommunity&lt;/P&gt;</description>
      <pubDate>Sat, 02 May 2026 06:05:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/events/talk-to-your-data-fabric-data-agents-for-d365-power-platform/ec-p/4516558#M48</guid>
      <dc:creator>MehrdadAbdollahi</dc:creator>
      <dc:date>2026-05-02T06:05:36Z</dc:date>
    </item>
    <item>
      <title>AI Unleashed: The Foundry and studio shaping tomorrow's creativity</title>
      <link>https://techcommunity.microsoft.com/t5/events/ai-unleashed-the-foundry-and-studio-shaping-tomorrow-s/ec-p/4516548#M37</link>
      <description>&lt;P&gt;🚀 AI Unleashed: The Foundry &amp;amp; Studios Shaping Tomorrow’s Creativity&lt;BR /&gt;&lt;BR /&gt;What if creativity wasn’t limited by time, resources, or technical complexity?&lt;BR /&gt;&lt;BR /&gt;Join this upcoming session where we explore how AI Foundries and modern AI studios are transforming the way businesses innovate, build, and deliver exceptional customer experiences.&lt;BR /&gt;&lt;BR /&gt;From virtual try-on experiences in retail to intelligent chatbots that enhance customer support, AI-powered platforms are unlocking new possibilities:&lt;BR /&gt;✨ Accelerate innovation without heavy development overhead&lt;BR /&gt;✨ Create immersive, personalized customer journeys&lt;BR /&gt;✨ Automate processes while improving efficiency&lt;BR /&gt;✨ Turn ideas into scalable solutions—faster than ever&lt;BR /&gt;&lt;BR /&gt;Whether you're a developer, business leader or tech enthusiast, this session will give you practical insights into how AI platforms are reshaping creativity and digital experiences.&lt;BR /&gt;&lt;BR /&gt;🎯 Don’t miss this opportunity to see how AI is redefining what’s possible.&lt;BR /&gt;&lt;BR /&gt;📺 Register and join live → &lt;A class="lia-external-url" href="https://streamyard.com/watch/M8DnX9eRa8yU?wt.mc_id=MVP_455449" target="_blank" rel="noopener"&gt;StreamYard Session&lt;/A&gt;&lt;/P&gt;&lt;P&gt;🗓️ Date: 8 May 2026&lt;BR /&gt;⏰ Time: 18:00 (AEST) / 16:00 (SGT) / 10:00 (CEST)&lt;BR /&gt;🎙️ Speaker: &lt;A href="https://www.linkedin.com/in/altfo/" target="_blank" rel="noopener"&gt;Senthamil Selvan&lt;/A&gt;&lt;BR /&gt;📌 Topic: AI Unleashed: The Foundry and studio shaping tomorrow's creativity&lt;/P&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 02 May 2026 03:40:35 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/events/ai-unleashed-the-foundry-and-studio-shaping-tomorrow-s/ec-p/4516548#M37</guid>
      <dc:creator>Pouya</dc:creator>
      <dc:date>2026-05-02T03:40:35Z</dc:date>
    </item>
    <item>
      <title>Enabling AI‑Driven Workforce Transformation with People Skills and Workforce Insights Agent: April 2026 M365 Champions Community call</title>
      <link>https://techcommunity.microsoft.com/t5/driving-adoption-blog/enabling-ai-driven-workforce-transformation-with-people-skills/ba-p/4516533</link>
      <description>&lt;div data-video-id="https://www.youtube.com/watch?v=oPljN-0D7j8/1777673779639" data-video-remote-vid="https://www.youtube.com/watch?v=oPljN-0D7j8/1777673779639" class="lia-video-container lia-media-is-center lia-media-size-large"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FoPljN-0D7j8%3Ffeature%3Doembed&amp;amp;display_name=YouTube&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DoPljN-0D7j8&amp;amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FoPljN-0D7j8%2Fhqdefault.jpg&amp;amp;type=text%2Fhtml&amp;amp;schema=youtube" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;H3&gt;Hello Champions!&lt;/H3&gt;
&lt;P&gt;Here’s a recap and top Q+A from our April 2026 M365 Champions monthly call, featuring Enabling AI‑Driven Workforce Transformation with People Skills and Workforce Insights Agent with &lt;A class="lia-external-url" href="https://www.linkedin.com/in/anirudhbajaj/" target="_blank" rel="noopener"&gt;Anirudh Bajaj&lt;/A&gt;, Microsoft Senior Product Manager, and &lt;A class="lia-external-url" href="https://www.linkedin.com/in/huiyingjennyh/" target="_blank" rel="noopener"&gt;Jenny Huang&lt;/A&gt;, Microsoft Senior Customer Experience Program Manager.&lt;/P&gt;
&lt;P&gt;We kicked off the call with the announcement of the digital premiere of More Than Code, the SharePoint community film. More Than Code is a documentary-style short film featuring voices from MVPs, customers, and Microsoft leaders. Discover how connection, creativity, and collaboration helped turn a product into a global movement. Click &lt;A class="lia-external-url" href="https://aka.ms/SPat25/MoreThanCode" target="_blank" rel="noopener"&gt;here&lt;/A&gt; to watch the digital premiere today!&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Anirudh Bajaj and Jenny Huang led the main topic this month with a deep dive into People Skills and Workforce Insights Agent. First, Anirudh highlighted a shift in how organizations must approach AI—moving beyond simply using AI tools to actively redesigning workflows around them. He introduced &lt;A class="lia-external-url" href="https://adoption.microsoft.com/ai-agents/people-skills/" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;People Skills&lt;/STRONG&gt;&lt;/A&gt; as a foundational capability to enable this transformation by providing visibility into workforce skills through AI-powered inferencing. By analyzing Microsoft 365 activity and combining it with a large pre-built skills taxonomy, the solution generates dynamic skill profiles for employees that can be validated or customized by users. These skills power scenarios such as finding experts, improving collaboration, and enabling personalized learning.&amp;nbsp; Anirudh emphasized privacy and user control, noting that employees can manage their profiles, opt out of inferencing, and control sharing, while admins can configure governance policies.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Next, Anirudh introduced his colleague Jenny Huang who introduced us to the &lt;STRONG&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/microsoft-365/copilot/workforce-insights-agent" target="_blank" rel="noopener"&gt;Workforce Insights Agent&lt;/A&gt;.&lt;/STRONG&gt; The Workforce Insights Agent is designed to help leaders translate skills and organizational data into actionable decisions. She framed the agent within the growing need for strategic workforce planning in an AI-driven environment, where leaders must quickly understand team structure, skills distribution, and emerging gaps. The agent integrates data from Microsoft 365 profiles and external HR systems, enabling leaders to query organizational data conversationally and receive real-time insights, visualizations, and recommendations. Jenny outlined key use cases such as analyzing team composition, identifying hiring or skill gaps, and guiding talent movement and upskilling strategies. Currently available in the Microsoft &lt;A class="lia-external-url" href="https://learn.microsoft.com/microsoft-365/admin/manage/get-started-frontier?view=o365-worldwide" target="_blank" rel="noopener"&gt;Frontier program,&lt;/A&gt; the agent emphasizes governed data access and encourages customer feedback to shape its evolution toward general availability, positioning it as a critical decision-support tool for AI-driven organizational transformation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Q+A from this month's session:&lt;/H4&gt;
&lt;P&gt;&lt;EM&gt;1. When will the Workforce Insights Agent be generally available (GA)?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: There is no confirmed GA timeline yet. The team is actively relying on customer feedback from the Frontier program to determine readiness and scope before moving to GA.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;2. Can we add a two layer skill approval? so we are getting confirmation person is actually skilled in "skill"? and then add levels (1 - beginner, 5 - expert (teaching level professional)?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: We have not added expertise levels to People Skills because "expert" is subjective and defined differently across organizations, which also use different level schemes (1-to-5 vs. 1-to-10 vs. categories such as expert or beginner), so it is challenging to standardize. We are exploring offering skill depth/context in certain scenarios, e.g. while matching a person to an open role, to understand their skill depth. More to come on this.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;3. People Skill AI - What is the distinction between this and Workforce Insights?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: People Skills AI inferencing uses intelligence to make skill recommendations (AI-inferred skills). This is a data layer used by Copilot and the Workforce Insights Agent to help you with decision making.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;4. Can you help me understand which agents, exactly, we are talking about? There used to be a 'Skills' Agent, which I can no longer see in my environment (Frontier). I also can't see a 'People' Agent. I do however see a 'Workforce Insights' agent.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: The Skills agent and People agent were both retired in March, and their capabilities are now folded into M365 Copilot chat and the Workforce Insights agent.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;5. Can these People skills be fed into a program like Workday, so they can be added to the Skills Cloud and be used for promotions, reviews, etc.?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: Confirmed skills in People Skills can be exported for use in other platforms/systems. For privacy reasons, inferred skills cannot be exported.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;6. Is Viva Engage included in E5 licensing?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: Yes! The core/standard experience is included in the E5 license, which does include communities and integration in Teams.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;7. Are these calls recorded and/or archived so we can review later?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: Yes, the recording will be published on the &lt;A href="https://aka.ms/drivingadoption" target="_blank" rel="noopener"&gt;Driving Adoption community&lt;/A&gt; and our &lt;A href="https://aka.ms/Community/Learning" target="_blank" rel="noopener"&gt;YouTube Community Learning&lt;/A&gt; channel. However, only Champion program members have access to the presentation (see the link in your welcome email or the latest newsletter). If you're not a member yet, join here: &lt;A href="https://aka.ms/BecomeAChampion" target="_blank" rel="noopener"&gt;https://aka.ms/BecomeAChampion&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;8. Is frontier a paid programme?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: There is no additional cost beyond your M365 Copilot license, but you do need to opt into the program. More info here:&amp;nbsp;&lt;A href="https://www.microsoft.com/en-us/microsoft-365-copilot/frontier-program" target="_blank" rel="noopener"&gt;Explore AI Early Access in Microsoft 365 | Microsoft Frontier&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;9. Is the People Skills Agent and Learning Agent available to Copilot Chat Basic users or only the paid version of Copilot?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: The People Skills agent was deprecated in March, and its capabilities are now available through M365 Copilot chat and the Workforce Insights agent for M365 Copilot licensed users.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;10. Can you see aggregated information from People Skills? Like see what level people are at with AI across the enterprise?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: There are two aspects here: a skills aggregation report and skills level/proficiency. Aggregation can be done via the skills landscape report (in Viva Insights, shown as a Power BI report) or through the Workforce Insights agent, which was covered later in the session. Skills level/proficiency is not available today in People Skills.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;11. Can you also add linkedin profile to the sources for People Skills inference?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: Not the profile directly, but if you have an inventory of skills attached/assigned to a user, you can import that list into People Skills as third-party skills.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;12. On an enterprise level can an organisation navigate a recommended career path for an employee based on their people skills? How does this data show up?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: Today Copilot can make general career and learning recommendations for users based on their skills. However, People Skills does not yet support career path suggestions based on job architecture/open job roles.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;13. Are these features only managed by the Admins? We have a single tenant so our Agency doesn't have access to the Admin console. Many of these Admin features are not usable for us without a more granular way to manage them.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: The admin controls can be set up by someone with the right permissions; today these roles can do it:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;AI administrator&lt;/LI&gt;
&lt;LI&gt;Knowledge administrator&lt;/LI&gt;
&lt;LI&gt;Global admin&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;EM&gt;14. We are a multi-cloud environment (commercial and GCCH). Will these agents eventually be available on GCCH?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: People Skills is currently only available in the worldwide (WW) environment; we don't have an estimate on when it will come to GCC.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;15. Am I understanding correctly that the PAX API allows access to our tenant's data by a third party? What data controls guide that interaction?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: You can bring in 3P data via connectors or push APIs into MODIS and apply access control there.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;16. Who can access the Workforce Insights Agent?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: Any user with a &lt;STRONG&gt;Microsoft 365 Copilot license&lt;/STRONG&gt; can access the agent. Currently, it is available only through the &lt;A href="https://learn.microsoft.com/microsoft-365/admin/manage/get-started-frontier?view=o365-worldwide" target="_blank" rel="noopener"&gt;Frontier program&lt;/A&gt;.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The tenant must also meet requirements (e.g., sufficient Copilot licenses in the environment).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;EM&gt;17. Where does Workforce Insights Agent get its data from?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: Default data comes from Microsoft 365 profiles (Entra). Additional data can be brought in via:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;HR systems (Workday, SuccessFactors)&lt;/LI&gt;
&lt;LI&gt;APIs or CSV uploads&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Please note: Sensitive data is governed using&amp;nbsp;strict data policies and access controls.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;18. Do users have control over their skills and data?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: Users are able to:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Confirm, edit, or remove inferred skills&lt;/LI&gt;
&lt;LI&gt;Add their own skills manually&lt;/LI&gt;
&lt;LI&gt;Opt out of AI inferencing entirely&lt;/LI&gt;
&lt;LI&gt;Control whether their skills are shared across the organization&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;EM&gt;19. Can admins control how People Skills is deployed?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: Admins can:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Enable/disable the feature by user, group, or tenant&lt;/LI&gt;
&lt;LI&gt;Control&amp;nbsp;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;inferencing and sharing settings&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Apply&amp;nbsp;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;privacy and compliance policies&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Pilot the feature in a limited scope before broad rollout&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;EM&gt;20. What are the main use cases of Workforce Insights Agent?&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Answer: The agent helps with:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Understanding &lt;STRONG&gt;team structure and composition&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Identifying &lt;STRONG&gt;skill gaps and distribution&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Supporting &lt;STRONG&gt;workforce planning decisions&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Driving &lt;STRONG&gt;talent movement and upskilling strategies&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Join us on May 26th for our next community call, featuring Copilot Hub &amp;amp; Agents 365.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Haven't joined the program yet? It's free! Join here:&amp;nbsp;&lt;A href="https://aka.ms/M365Champion" target="_blank" rel="noopener"&gt;https://aka.ms/M365Champion&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Bring your questions to our discussion forum:&amp;nbsp;&lt;A href="https://aka.ms/DriveAdoption" target="_blank" rel="noopener"&gt;https://aka.ms/DriveAdoption&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Fri, 01 May 2026 22:16:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/driving-adoption-blog/enabling-ai-driven-workforce-transformation-with-people-skills/ba-p/4516533</guid>
      <dc:creator>TiffanyLee</dc:creator>
      <dc:date>2026-05-01T22:16:33Z</dc:date>
    </item>
    <item>
      <title>Update to disabling Teams meeting recording expiration notification emails</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-teams/update-to-disabling-teams-meeting-recording-expiration/m-p/4516523#M144887</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you to everyone who shared their thoughts and feedback regarding the planned &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-forum" href="https://techcommunity.microsoft.com/discussions/microsoftteams/upcoming-change-disabling-teams-meeting-recording-expiration-notification-emails/4501038" data-lia-auto-title="Upcoming change: disabling Teams meeting recording expiration notification emails" data-lia-auto-title-active="0" target="_blank"&gt;Upcoming change: disabling Teams meeting recording expiration notification emails&lt;/A&gt;.&amp;nbsp; After carefully reviewing the feedback from this discussion, survey responses, and support channels, we have decided to pause the rollout of this change. The updates originally planned for June 1st will not take effect on that date.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What this means for you:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;- Email notifications for expired Teams meeting recordings will continue as they do today.&lt;/P&gt;
&lt;P&gt;- No action is required on your part.&lt;/P&gt;
&lt;P&gt;- Recording expiration and deletion policies remain unchanged.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Your input, along with ongoing internal engineering discussions, helped shape this decision. We want to make sure that any changes we make to the notification experience truly work for your organizations, and the feedback we received made it clear that we need more time to get this right.&lt;/P&gt;
&lt;P&gt;We're still committed to improving the notification experience for Teams meeting recordings and will provide updates here and through the Message Center when we have more to share. In the meantime, please continue to share your thoughts in this discussion.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for your patience and for being part of our community.&lt;/P&gt;</description>
      <pubDate>Fri, 01 May 2026 21:12:58 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-teams/update-to-disabling-teams-meeting-recording-expiration/m-p/4516523#M144887</guid>
      <dc:creator>Eddie_Harmon</dc:creator>
      <dc:date>2026-05-01T21:12:58Z</dc:date>
    </item>
    <item>
      <title>Lock down AI, web, and private apps: what’s new in Internet Access and Private Access</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-entra-blog/lock-down-ai-web-and-private-apps-what-s-new-in-internet-access/ba-p/3847825</link>
      <description>&lt;P&gt;&lt;SPAN data-teams="true"&gt;One theme is crystal clear across the security industry&lt;/SPAN&gt;: AI is transforming security, and security must transform with it. Organizations everywhere are embracing generative AI to boost productivity and accelerate innovation. But with this rapid adoption comes new challenges that security teams can’t ignore:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Which AI tools are your employees using?&lt;/LI&gt;
&lt;LI&gt;Is sensitive data being uploaded to unsanctioned services?&lt;/LI&gt;
&lt;LI&gt;How do you prevent AI-specific attacks like prompt injection?&lt;/LI&gt;
&lt;LI&gt;How do you secure private apps without slowing down users?&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These aren’t hypothetical questions. They’re the reality for every organization today. And the answer starts with &lt;STRONG&gt;identity&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;Identity: The foundation for AI and app security&lt;/H2&gt;
&lt;P&gt;Traditional network security was built for a time when users, devices, and applications were mostly on-premises and predictable. Today, employees work from anywhere on any device, and generative AI and SaaS apps often sit outside the corporate perimeter. Static controls struggle to keep pace, creating gaps that increase risk.&lt;/P&gt;
&lt;P&gt;That’s why we built &lt;A href="https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-internet-access" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Microsoft Entra Internet Access&lt;/STRONG&gt;&lt;/A&gt; and &lt;A href="https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-private-access" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Microsoft Entra Private Access&lt;/STRONG&gt;&lt;/A&gt; within the Global Secure Access platform. These solutions extend &lt;STRONG&gt;Zero Trust protections&lt;/STRONG&gt; to web, SaaS, AI, and private-app traffic. They provide the visibility and control organizations need to embrace AI and hybrid work with confidence—without slowing innovation.&lt;/P&gt;
&lt;P&gt;A key capability of Microsoft Entra Internet Access is the &lt;STRONG&gt;Secure Web and AI Gateway&lt;/STRONG&gt;, which applies identity-centric network controls to web and AI traffic. Identity-based network security ties access decisions to the user’s sign-in risk, device posture, and data sensitivity—not just an IP address or network location. This approach delivers consistent protection everywhere users work, reduces risk, and helps organizations scale AI adoption securely across the enterprise.&lt;/P&gt;
&lt;P&gt;Late last year, we introduced most of the capabilities in &lt;A href="https://techcommunity.microsoft.com/blog/microsoft-entra-blog/securing-the-ai-era-starts-with-identity/4478952" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;public preview&lt;/STRONG&gt;&lt;/A&gt; at Microsoft Ignite. Today, we’re excited to share the latest features now generally available in Microsoft Entra Internet Access and Private Access and to announce brand-new capabilities in public preview.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 1: Microsoft’s identity-centric Secure Access Service Edge (SASE) solution.&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;Public preview: More flexibility for diverse environments&lt;/H3&gt;
&lt;P&gt;&lt;EM&gt;Microsoft Entra Internet Access &amp;amp; Microsoft Entra Private Access&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;We’re introducing new capabilities in public preview, giving you more options to secure every scenario:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/concept-bring-your-own-device" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;BYOD with client&lt;/STRONG&gt;&lt;/A&gt; in Microsoft Entra Private Access lets you enforce Zero Trust for unmanaged devices, so employees and contractors can securely access private apps without compromising security or user experience.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/entra/global-secure-access/concept-explicit-forward-proxy" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Explicit Forward Proxy for Microsoft Entra Internet Access&lt;/STRONG&gt;&lt;/A&gt;&amp;nbsp;&amp;nbsp; extends secure web access to agentless and legacy devices using PAC file-based proxy configuration.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/entra/global-secure-access/how-to-configure-explicit-forward-proxy-intune-policy" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Secure Browser Integra&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;tion&lt;/STRONG&gt;&lt;STRONG&gt; &lt;/STRONG&gt;enables Intune-managed Microsoft Edge to route internet traffic through Microsoft Entra Internet Access using Explicit Forward Proxy with TLS termination and inspection, delivering deep visibility and policy enforcement for secure browsing.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/how-to-view-model-context-protocol-logging" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Shadow MCP visibility&lt;/STRONG&gt;&lt;/A&gt; identifies unauthorized or high‑risk MCP servers on the network traffic and surfaces MCP data paths, logs, and observability to help monitor and manage AI‑related risk.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These new features help you reduce risk across every device type, simplify deployment, and deliver consistent protection everywhere.&lt;/P&gt;
&lt;H3&gt;Now generally available: New AI security capabilities&lt;/H3&gt;
&lt;P&gt;&lt;EM&gt;Microsoft Entra Internet Access&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;AI adoption is accelerating, but so are the risks. Employees often experiment with AI tools without IT approval, creating compliance and data security gaps. With Microsoft Entra Internet Access, you can see what is happening, protect what matters, and simplify how you manage it all.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/overview-application-usage-analytics" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Shadow AI discovery&lt;/STRONG&gt;&lt;/A&gt; gives you visibility into unsanctioned AI tools and SaaS apps so you can uncover unknown risks and make informed decisions before enforcing policy.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/how-to-ai-prompt-injection-protection" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Prompt Injection Protection&lt;/STRONG&gt;&lt;/A&gt; helps block malicious prompts that could trick AI models into exposing sensitive data, reducing AI-specific attack risk without slowing innovation.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/how-to-network-content-filtering" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Network content filtering&lt;/STRONG&gt;&lt;/A&gt; prevents sensitive files from being uploaded to unsanctioned AI services, reducing compliance risk and data loss.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/how-to-configure-web-content-filtering" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;URL filtering&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt; &lt;/STRONG&gt;and &lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/how-to-configure-threat-intelligence" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;threat intelligence&lt;/STRONG&gt;&lt;/A&gt; block access to risky or malicious sites, enforce acceptable use policies, and reduce data leakage.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/how-to-configure-cloud-firewall" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Cloud firewall&lt;/STRONG&gt; &lt;STRONG&gt;for remote networks&lt;/STRONG&gt;&lt;/A&gt; provides advanced network-layer protection for traffic from remote sites, enabling granular policy enforcement and reducing exposure to threats.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/how-to-install-ios-client" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;iOS support&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt; &lt;/STRONG&gt;and&lt;STRONG&gt; &lt;/STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/how-to-create-remote-networks?tabs=microsoft-entra-admin-center" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;remote network connectivity&lt;/STRONG&gt;&lt;/A&gt; extend protection everywhere your users work.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The result is simple. Your teams can use AI tools to work smarter while you maintain control and reduce risk without introducing friction.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure2: Demo of prompt injection protection.&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;Now generally available: New capabilities for modernizing app connectivity&lt;/H3&gt;
&lt;P&gt;&lt;EM&gt;Microsoft Entra Private Access&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;While Internet Access secures your web and AI traffic, &lt;STRONG&gt;Microsoft Entra Private Access&lt;/STRONG&gt; helps you replace legacy VPNs with Zero Trust Network Access for private apps:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/concept-external-user-access" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;External User Access &lt;/STRONG&gt;&lt;/A&gt;enforces Zero Trust for partners and contractors, simplifying onboarding while maintaining strong security.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/global-secure-access/enable-intelligent-local-access" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Intelligent Local Access&lt;/STRONG&gt;&lt;/A&gt; improves user experience by routing traffic efficiently, reducing latency, and delivering consistent security without unnecessary backhauling.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The result is a better experience for users and simpler operations for your IT teams.&lt;/P&gt;
&lt;H2&gt;Ready to secure AI and modernize identity?&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="http://aka.ms/EntraWebinarSeries" target="_blank" rel="noopener"&gt;Watch the Microsoft Entra Showcase webinar&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Watch the&lt;STRONG&gt; &lt;/STRONG&gt;&lt;A href="https://www.youtube.com/watch?v=LaDSrwAOszQ" target="_blank" rel="noopener"&gt;Microsoft Entra Mechanics video&lt;/A&gt; for a deep dive into AI-aware protections&lt;/LI&gt;
&lt;LI&gt;Start your journey today: &lt;A href="https://aka.ms/EntraSuiteTrial" target="_blank" rel="noopener"&gt;Entra Suite Trial&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;-Sinead O’Donovan | VP of Product Management, Identity and Network Access&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.linkedin.com/in/sineadco/" target="_blank" rel="noopener"&gt;Sinead O'Donovan | LinkedIn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Additional resources&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/microsoft-entra-blog/securing-the-ai-era-starts-with-identity/4478952" target="_blank" rel="noopener"&gt;Securing the AI era starts with identity&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-internet-access" target="_blank" rel="noopener"&gt;Microsoft Entra Internet Access | Microsoft Security&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-private-access" target="_blank" rel="noopener"&gt;https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-private-access&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Learn more about Microsoft Entra &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://www.microsoft.com/en-us/security/blog/products/microsoft-entra/" target="_blank" rel="noopener"&gt;Microsoft Entra News and Insights | Microsoft Security Blog&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/microsoft-entra-blog/bg-p/Identity" target="_blank" rel="noopener"&gt;Microsoft Entra blog | Tech Community&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/" target="_blank" rel="noopener"&gt;Microsoft Entra documentation | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://techcommunity.microsoft.com/t5/microsoft-entra/bd-p/Azure-Active-Directory" target="_blank" rel="noopener"&gt;Microsoft Entra discussions | Microsoft Community&amp;nbsp;&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Fri, 01 May 2026 20:56:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-entra-blog/lock-down-ai-web-and-private-apps-what-s-new-in-internet-access/ba-p/3847825</guid>
      <dc:creator>Sinead_ODonovan</dc:creator>
      <dc:date>2026-05-01T20:56:02Z</dc:date>
    </item>
    <item>
      <title>High-Fidelity Network Observability at Scale— ACNS Metrics Filtering and Log Aggregation Now GA</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/high-fidelity-network-observability-at-scale-acns-metrics/ba-p/4516508</link>
      <description>&lt;P&gt;We are thrilled to announce that Advanced Container Networking Services (ACNS) for Azure Kubernetes Service (AKS) now delivers two powerful observability features in General Availability: container network metrics filtering and container network log filtering and aggregation. Together, these capabilities set a new standard for Kubernetes network observability, giving you high-fidelity visibility at dramatically lower cost and noise. These capabilities fundamentally redefine how network observability works at scale while delivering up to 97% cost reduction.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why is this a milestone?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Most Kubernetes observability solutions face a fundamental tension: collect everything and drown in noise and cost, or sample and miss the signals that matter. ACNS breaks that tradeoff.&lt;/P&gt;
&lt;P&gt;With this release, Azure becomes the first cloud provider to deliver on-node metrics filtering and flow log aggregation for Kubernetes networking, capabilities now also contributed to the upstream Hubble project, making them available to the broader open-source community.&lt;/P&gt;
&lt;P&gt;For AKS customers running Cilium-based clusters, this means:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Every flow you care about is captured. Everything else is dropped at the source.&lt;/LI&gt;
&lt;LI&gt;Log volume is compressed by up to 45% through aggregation, without losing security verdicts or error context.&lt;/LI&gt;
&lt;LI&gt;Costs scale with what you monitor, not with cluster size.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;What’s been improved in ACNS observability?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This release introduces two capabilities that work together: container network metrics filtering and container network log filtering and aggregation. Both are available on AKS clusters with the Cilium data plane and give you precise controls to keep observability costs predictable while maintaining the visibility you need.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Container Network Metrics Filtering&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Container network metrics are generated for all pods by default whenever ACNS is enabled. With metrics filtering, you now control what gets collected at the point of ingestion, on the node, before anything is scraped or transmitted.&lt;/P&gt;
&lt;P&gt;A single ContainerNetworkMetric CRD per cluster defines which metric types (dns, flow, tcp, drop), namespaces, pod labels, and protocols to ingest. It supports both include and exclude filters, so you can maintain broad collection while carving out specific workloads or namespaces. Anything that doesn't match is dropped on the node. Changes reconcile in a few seconds, with no Cilium agent or Prometheus restarts required.&lt;/P&gt;
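&lt;P&gt;The CRD described above can be sketched as follows. This is an illustrative example only: the apiVersion and field names are assumptions, so consult the AKS metrics-filtering documentation for the supported schema before applying anything like it.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Illustrative sketch only -- the apiVersion and field names below are
# assumptions; check the AKS metrics-filtering docs for the real schema.
kubectl apply -f - &lt;&lt;'EOF'
apiVersion: acn.azure.com/v1alpha1
kind: ContainerNetworkMetric
metadata:
  name: metrics-filter                 # a single instance per cluster
spec:
  filters:
    include:
      - metricNames: ["dns", "drop"]   # keep DNS and drop metrics...
        namespaces: ["prod"]           # ...for business-critical workloads
    exclude:
      - namespaces: ["kube-system"]    # drop noisy system metrics on the node
EOF&lt;/LI-CODE&gt;
&lt;P&gt;Because reconciliation happens on the node in a few seconds, a filter like this can be tightened or relaxed iteratively without restarting the Cilium agent or Prometheus.&lt;/P&gt;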
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Container Network Log Filtering and Aggregation&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Unlike metrics, container network logs are not generated automatically. You start capturing network flows only after applying a ContainerNetworkLog CRD that defines exactly which traffic to capture: by namespace, pod, service, protocol, or verdict. Only matching flows are logged, giving you a precise, targeted view rather than a fire hose.&lt;/P&gt;
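&lt;P&gt;A minimal ContainerNetworkLog sketch, assuming a schema similar to the metrics CRD (the apiVersion and field names here are illustrative assumptions, not the documented schema), might capture only dropped TCP flows in a production namespace:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Illustrative sketch only -- apiVersion and field names are assumptions;
# see the AKS container network logs documentation for the supported schema.
kubectl apply -f - &lt;&lt;'EOF'
apiVersion: acn.azure.com/v1alpha1
kind: ContainerNetworkLog
metadata:
  name: dropped-prod-flows
spec:
  filters:
    - namespaces: ["prod"]        # scope capture to one namespace
      protocols: ["TCP"]
      verdicts: ["DROPPED"]       # log only denied flows, not a fire hose
EOF&lt;/LI-CODE&gt;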
&lt;P&gt;This is where Azure's first-to-market innovation comes in. Flow log aggregation, now built into ACNS and contributed upstream to Hubble for the open-source community, groups similar flows into summarized records every 30 seconds. The result is dramatically reduced data volume while preserving security verdicts, service identity, and error context. What previously required custom post-processing pipelines is now built directly into the platform before storage costs are incurred.&lt;/P&gt;
&lt;P&gt;Every matched flow log captures: source and destination pods, namespaces, ports, protocols, traffic direction, and policy verdicts.&lt;/P&gt;
&lt;P&gt;Logs are stored in a Log Analytics workspace (ContainerNetworkLogs table) with a choice of using the Analytics or Basic tier. Built-in Azure portal dashboards are available for both tiers. Logs can also be exported to external log collectors such as Splunk or Datadog.&lt;/P&gt;
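&lt;P&gt;Once flows land in the workspace, the ContainerNetworkLogs table can be queried like any other Log Analytics table. A sketch using the Azure CLI, where the workspace GUID is a placeholder and the Verdict column name is an assumption to verify against the table schema in your own workspace:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Summarize captured flows by policy verdict over the last hour.
# $WORKSPACE_GUID is a placeholder; the Verdict column name is an assumption.
az monitor log-analytics query \
  --workspace "$WORKSPACE_GUID" \
  --analytics-query 'ContainerNetworkLogs
    | where TimeGenerated &amp;gt; ago(1h)
    | summarize Flows = count() by Verdict' \
  --output table&lt;/LI-CODE&gt;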
&lt;P&gt;&lt;STRONG&gt;First to Market:&amp;nbsp;&lt;/STRONG&gt;&lt;STRONG&gt;Azure and the upstream Hubble Contribution&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;ACNS's filtering and aggregation capabilities were engineered from the ground up to solve real production observability challenges at scale. Rather than keeping this innovation proprietary, Azure contributed the log aggregation and filtering capabilities to the upstream Hubble project, the observability layer of the Cilium ecosystem.&lt;/P&gt;
&lt;P&gt;This means:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;AKS customers get a fully managed, Azure-native experience with portal dashboards, Log Analytics integration, and Grafana visualization, out of the box.&lt;/LI&gt;
&lt;LI&gt;The broader open-source community gains access to the same filtering and aggregation primitives through upstream Hubble.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Azure is the first to ship this capability in a managed Kubernetes service, and the first to give it back to the community.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key Benefits&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;💰 Lower observability cost. Metrics filtering drops unwanted data on the node before Prometheus ever scrapes it. Flow log aggregation compresses log data by up to 97% in lab testing. Your cost scales with what you choose to monitor, not with cluster size.&lt;/P&gt;
&lt;P&gt;📉 Less noise, more signal. Metrics filtering carves out the namespaces and workloads that matter, so dashboards show only relevant signals. Log filters scope collection to specific pods and verdicts. Engineers start every investigation with data that's already relevant.&lt;/P&gt;
&lt;P&gt;⚡ Faster root-cause isolation. Every metric carries source and destination pod context. Targeted flow logs add the forensic detail, which policy, destination, or port is involved. Together, they cut mean time to resolution from hours of guesswork to minutes of structured investigation.&lt;/P&gt;
&lt;P&gt;🔒 Full signal, zero gaps. ACNS doesn't sample. Within the scope you define, every flow is captured and every pattern is preserved. Aggregation compresses volume without losing security verdicts or error context.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Who Benefits&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Platform engineers managing multi-tenant clusters can scope data collection per namespace, so each team gets visibility into their own traffic without contributing to a shared cost pool.&lt;/P&gt;
&lt;P&gt;SREs can isolate packet drops, TCP resets, or DNS failures to a specific workload in minutes, starting with data that's already scoped to what matters.&lt;/P&gt;
&lt;P&gt;Decision-makers evaluating observability spend get predictable, controllable ingestion costs that scale with intent, not infrastructure size.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How to optimize ACNS metrics and logs with filtering&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Enable ACNS on your AKS cluster with the Cilium data plane:&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;EM&gt;az aks create --resource-group $RESOURCE_GROUP --name $CLUSTER --network-plugin azure --network-dataplane cilium --enable-acns&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Or on an existing cluster:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;az aks update --resource-group $RESOURCE_GROUP --name $CLUSTER --enable-acns&lt;/EM&gt;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Apply a ContainerNetworkMetric CRD to filter which metrics are collected on each node. Start by excluding noisy system namespaces, then scope to business-critical workloads.&lt;/LI&gt;
&lt;LI&gt;Apply a ContainerNetworkLog CRD to define which flows to capture.&lt;/LI&gt;
&lt;LI&gt;Enable Azure Monitor integration with --enable-container-network-logs to send logs to a Log Analytics workspace, or export logs from the node to an external logging system such as Splunk or Datadog.&lt;/LI&gt;
&lt;LI&gt;Check your dashboards. Open your cluster in the Azure portal and go to Monitor &amp;gt; Insights &amp;gt; Networking for bytes, drops, DNS errors, and flows. For flow logs, use the built-in Azure portal dashboards available for both Basic and Analytics tiers.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;div data-video-id="https://www.youtube.com/watch?v=3QmJ4HwNK54/1777662026234" data-video-remote-vid="https://www.youtube.com/watch?v=3QmJ4HwNK54/1777662026234" class="lia-video-container lia-media-is-center lia-media-size-large"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F3QmJ4HwNK54%3Ffeature%3Doembed&amp;amp;display_name=YouTube&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D3QmJ4HwNK54&amp;amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F3QmJ4HwNK54%2Fhqdefault.jpg&amp;amp;type=text%2Fhtml&amp;amp;schema=youtube" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;P&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Kubernetes network observability has long meant choosing between visibility and cost. With container network metrics filtering and log filtering and aggregation now GA in ACNS and contributed to upstream Hubble for the open-source community, that tradeoff is gone.&lt;/P&gt;
&lt;P&gt;Azure is first to market with this capability. AKS customers get it fully managed, out of the box, with built-in dashboards with Log Analytics integration. And the broader Cilium ecosystem gets it through upstream Hubble.&lt;/P&gt;
&lt;P&gt;High-fidelity visibility. Lower cost. No compromise.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Learn more:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Container network metrics overview:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/container-network-observability-metrics?tabs=Cilium" target="_blank" rel="noopener"&gt;Container network metrics overview - Azure Kubernetes Service | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Container network logs overview:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/container-network-observability-logs" target="_blank" rel="noopener"&gt;Container Network Logs Overview - Azure Kubernetes Service | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Configure container network metrics filtering: &lt;A href="https://learn.microsoft.com/en-us/azure/aks/how-to-configure-container-network-metrics-filtering" target="_blank" rel="noopener"&gt;Configure Container network metrics filtering for Azure Kubernetes Service (AKS) - Azure Kubernetes Service | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Set up container network logs: &lt;A href="https://learn.microsoft.com/en-us/azure/aks/how-to-configure-container-network-logs?tabs=cli%2Ccilium" target="_blank" rel="noopener"&gt;Set up container network logs - Azure Kubernetes Service | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Fri, 01 May 2026 20:29:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/high-fidelity-network-observability-at-scale-acns-metrics/ba-p/4516508</guid>
      <dc:creator>chandanAggarwal</dc:creator>
      <dc:date>2026-05-01T20:29:20Z</dc:date>
    </item>
    <item>
      <title>Join us May 6: Learn how Microsoft Teams is helping SMBs thrive</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-teams-for-small-and/join-us-may-6-learn-how-microsoft-teams-is-helping-smbs-thrive/ba-p/4516515</link>
      <description>&lt;P&gt;Running a small or medium-sized business means wearing every hat at once — sales lead, customer support, IT admin, and marketing team, often before lunch. The tools you rely on need to keep up without getting in the way. That's exactly the conversation we're hosting next week.&lt;/P&gt;
&lt;P&gt;On&amp;nbsp;&lt;SPAN data-streamdown="strong"&gt;Wednesday, May 6&lt;/SPAN&gt;, the Microsoft Teams SMB product team is going live for a focused 40-minute session designed specifically for small and medium businesses.&lt;/P&gt;
&lt;H2 data-streamdown="heading-2"&gt;What you'll learn&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-streamdown="strong"&gt;Angela Chin, Principal PM Manager&lt;/SPAN&gt;&amp;nbsp;on the Teams SMB team, will walk through the latest updates in Microsoft Teams built with smaller businesses in mind:&lt;/P&gt;
&lt;UL data-streamdown="unordered-list"&gt;
&lt;LI data-streamdown="list-item"&gt;&lt;SPAN data-streamdown="strong"&gt;Get going faster&lt;/SPAN&gt;&amp;nbsp;— onboarding improvements that take SMBs from sign-up to collaboration in minutes.&lt;/LI&gt;
&lt;LI data-streamdown="list-item"&gt;&lt;SPAN data-streamdown="strong"&gt;Deeper customer connections&lt;/SPAN&gt;&amp;nbsp;— features that turn Teams into a front door for your customers.&lt;/LI&gt;
&lt;LI data-streamdown="list-item"&gt;&lt;SPAN data-streamdown="strong"&gt;Effortless cross-business collaboration&lt;/SPAN&gt;&amp;nbsp;— work with vendors, contractors, and partner businesses as naturally as your own team.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 data-streamdown="heading-2"&gt;Bring your questions&lt;/H2&gt;
&lt;P&gt;Connect directly with the Teams SMB product team, ask questions, and share feedback that shapes the roadmap.&lt;/P&gt;
&lt;H2 data-streamdown="heading-2"&gt;Who should attend&lt;/H2&gt;
&lt;P&gt;SMB owners and operators, IT pros supporting smaller businesses, partners and consultants, and anyone evaluating Teams.&lt;/P&gt;
&lt;H2 data-streamdown="heading-2"&gt;Save your seat&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-streamdown="strong"&gt;Wednesday, May 6, 2026, 8:05–8:45 AM PT&lt;/SPAN&gt;. Free to attend.&lt;/P&gt;
&lt;P&gt;👉&amp;nbsp;&lt;A class="lia-external-url" href="https://df.events.teams.microsoft.com/event/df.b39db975-142a-43be-ba7f-9736c429f8b3@72f988bf-86f1-41af-91ab-2d7cd011db47" target="_blank" rel="noopener"&gt;&lt;SPAN data-streamdown="strong"&gt;Register here&lt;/SPAN&gt;&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 01 May 2026 20:15:09 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-teams-for-small-and/join-us-may-6-learn-how-microsoft-teams-is-helping-smbs-thrive/ba-p/4516515</guid>
      <dc:creator>MiikkaOksanen</dc:creator>
      <dc:date>2026-05-01T20:15:09Z</dc:date>
    </item>
    <item>
      <title>Microsoft ODBC Driver 17.11.1 for SQL Server Released</title>
      <link>https://techcommunity.microsoft.com/t5/sql-server-blog/microsoft-odbc-driver-17-11-1-for-sql-server-released/ba-p/4516510</link>
      <description>&lt;P&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;We are pleased to announce the general availability of&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Microsoft ODBC Driver 17.11.1 for SQL Server&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;, released on April 30, 2025. This servicing update delivers important bug fixes and expands Linux platform support.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-line="4"&gt;Key Highlights&lt;/H2&gt;
&lt;UL data-line="6"&gt;
&lt;LI data-line="6"&gt;Stability and correctness fixes for parameter array processing, including accurate updates to SQL_ATTR_PARAMS_PROCESSED_PTR and improved row counting when SQL_PARAM_IGNORE is used in parameter arrays.&lt;/LI&gt;
&lt;LI data-line="7"&gt;Fixed a connection error that could occur when processing Data Classification metadata in ODBC asynchronous mode.&lt;/LI&gt;
&lt;LI data-line="8"&gt;Updated RPM packaging rules to allow installation of multiple driver versions side by side.&lt;/LI&gt;
&lt;LI data-line="9"&gt;Corrected XA recovery to ensure proper computation of transaction IDs and recovery of missing transactions.&lt;/LI&gt;
&lt;LI data-line="10"&gt;Debian package installation now honors license acceptance for successful completion.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 data-line="12"&gt;New Platform Support&lt;/H2&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Platform&lt;/th&gt;&lt;th&gt;Versions&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;macOS&lt;/td&gt;&lt;td&gt;14, 15, 26&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Debian&lt;/td&gt;&lt;td&gt;13&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Red Hat Enterprise Linux&lt;/td&gt;&lt;td&gt;10&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Oracle Linux&lt;/td&gt;&lt;td&gt;9, 10&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;SUSE Linux Enterprise Server&lt;/td&gt;&lt;td&gt;16&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Ubuntu&lt;/td&gt;&lt;td&gt;24.04, 25.10&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Alpine Linux&lt;/td&gt;&lt;td&gt;3.21, 3.22, 3.23&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2 data-line="4"&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;Download&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-line="29"&gt;The driver is available for download from the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server" data-href="https://learn.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server" target="_blank"&gt;Microsoft ODBC Driver for SQL Server documentation page&lt;/A&gt;.&lt;/P&gt;
&lt;H3 data-line="31"&gt;Linux Installation&lt;/H3&gt;
&lt;P data-line="33"&gt;Install or update using your distribution's package manager:&lt;/P&gt;
&lt;P data-line="35"&gt;&lt;STRONG&gt;Debian/Ubuntu:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;sudo apt-get update sudo apt-get install msodbcsql17&lt;/LI-CODE&gt;
&lt;P data-line="41"&gt;&lt;STRONG&gt;Red Hat/Oracle Linux:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;sudo yum install msodbcsql17&lt;/LI-CODE&gt;
&lt;P data-line="46"&gt;&lt;STRONG&gt;SUSE:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;sudo zypper install msodbcsql17&lt;/LI-CODE&gt;
&lt;P data-line="51"&gt;&lt;STRONG&gt;Alpine:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;sudo apk add msodbcsql17&lt;/LI-CODE&gt;
&lt;H2 data-line="56"&gt;Feedback&lt;/H2&gt;
&lt;P data-line="58"&gt;We welcome your feedback. Please report issues on the&amp;nbsp;&lt;A href="https://aka.ms/sqlfeedback" data-href="https://aka.ms/sqlfeedback" target="_blank"&gt;SQL Server feedback site&lt;/A&gt;&amp;nbsp;or open an issue on the&amp;nbsp;&lt;A href="https://github.com/microsoft/ODBC-Driver-for-SQL-Server/issues" data-href="https://github.com/microsoft/ODBC-Driver-for-SQL-Server/issues" target="_blank"&gt;ODBC Driver GitHub repository&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 01 May 2026 20:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/sql-server-blog/microsoft-odbc-driver-17-11-1-for-sql-server-released/ba-p/4516510</guid>
      <dc:creator>DavidLevy</dc:creator>
      <dc:date>2026-05-01T20:00:00Z</dc:date>
    </item>
    <item>
      <title>Migrating frontline mobile devices: Aligning stakeholders before real-world testing</title>
      <link>https://techcommunity.microsoft.com/t5/intune-customer-success/migrating-frontline-mobile-devices-aligning-stakeholders-before/ba-p/4516511</link>
      <description>&lt;P&gt;&lt;STRONG&gt;By: Carol Burns - Principal Product Manager | Microsoft Intune and Sucheta Gawade, Microsoft MVP (Azure &amp;amp; Security / Intune)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Practitioner perspective from Sucheta Gawade, Microsoft MVP (Azure &amp;amp; Security / Intune), with deep experience in secure frontline mobility, including regulated healthcare environments.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the previous article, we focused on &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/intunecustomersuccess/migrating-frontline-mobile-devices-understanding-the-reality-of-your-estate/4511683" target="_blank" rel="noopener" data-lia-auto-title="understanding the reality of your frontline device estate " data-lia-auto-title-active="0"&gt;understanding the reality of your frontline device estate&lt;/A&gt;: what devices you have, how they’re used, and which tasks they must support. Now that discovery is complete, the next step is to assess what you’ve found and align your people and processes before beginning real‑world testing with Microsoft Intune and representative users and devices. This is where you turn discovery into an actionable plan your team can execute in real operational conditions.&lt;/P&gt;
&lt;P&gt;Many organizations refer to this stage as a Proof of Concept (POC) or pilot. In this article, we use these terms to describe limited real‑world validation of frontline workflows with representative users and devices, rather than internal IT feasibility testing. Use the pilot to confirm that users can reliably complete critical tasks in live operational environments before wider rollout.&lt;/P&gt;
&lt;H2&gt;Translate discovery into decisions&lt;/H2&gt;
&lt;P&gt;Discovery produces facts, but readiness requires decisions. Before beginning real‑world testing with representative users and devices, your team should be able to answer questions like:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Are we migrating “as‑is,” or do we plan on correcting identity and usage anti‑patterns such as shared credentials or personal use on corporate devices?&lt;/LI&gt;
&lt;LI&gt;Which workflows are non‑negotiable and must work on day one, and which can be improved later?&lt;/LI&gt;
&lt;LI&gt;Do we need to refresh hardware now, or can we migrate current devices and plan standardization at refresh time?&lt;/LI&gt;
&lt;LI&gt;What are our top constraints (OS support, connectivity, etc.)?&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;A useful way to structure discovery output is to categorize findings and determine whether they support limited real‑world testing or require further alignment before proceeding.&lt;/P&gt;
&lt;P&gt;Pre‑requisites for real‑world testing typically include clear ownership of devices and apps, supported OS versions, and a manageable device or OEM mix.&lt;/P&gt;
&lt;P&gt;Items that often require alignment before real‑world testing include shared devices without a defined shared‑device model, shared credentials or unclear authentication approaches, personal use on corporate devices (which affects wipe/re‑enroll decisions), certified app or peripheral constraints, and network or certificate dependencies that could impact enrollment and compliance.&lt;/P&gt;
&lt;H2&gt;Identify the stakeholders you must align (and why)&lt;/H2&gt;
&lt;P&gt;Real‑world testing of frontline workflows depends on more than technical readiness. A clear stakeholder map helps surface operational dependencies early and ensures that limited validation activities can be conducted safely without disrupting day‑to‑day work.&lt;/P&gt;
&lt;P&gt;Not every environment requires all of the roles listed below at this stage, but these are the most common stakeholders needed to support limited real‑world testing of frontline workflows.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Operational stakeholders&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-style-solid" border="1" style="width: 100%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Stakeholder&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Why they matter&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;What to align&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Operations / business leadership&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Define frontline outcomes and approve change windows.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Critical workflows, downtime tolerance, shift patterns, pilot locations, operational sign‑off criteria.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Funding owners / procurement&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Discovery often uncovers refresh or licensing gaps.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Device and accessory funding, carrier plans, spares, and standardization strategy.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Change management&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Testing may introduce new sign‑in flows or device behaviors.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Communications plan, support readiness, rollback and escalation processes, exception management.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;BR /&gt;In addition to operational alignment, technical readiness across supporting IT teams is required to ensure testing reflects production-like conditions.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;Technical and support stakeholders&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-style-solid" border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Stakeholder&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Why they matter&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;What to align&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Endpoint or Microsoft Intune owners&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Build policy, enrollment, apps, and compliance.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Device categories, management models, policy approach, rollout waves.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Architecture team&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Ensure alignment with enterprise standards.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Reference architecture, lifecycle approach, dependency mapping.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Microsoft Identity / Microsoft Entra team&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Underpins Conditional Access and shared‑device patterns.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Authentication model, shared device sign‑in patterns, break‑glass scenarios.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Network team&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Enrollment depends on connectivity and certificate flows.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Wi‑Fi (EAP‑TLS), proxies, segmentation, roaming, known dead zones.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Security / risk / compliance&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Define guardrails and exceptions.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Wipe policies, logging, least privilege, auditability.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;App owners / vendors&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Critical frontline workflows depend on app behavior.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Compatibility, offline behavior, deployment approach.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Support / service desk&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Manage user impact during testing.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Runbooks, escalation paths, enrollment troubleshooting, shift‑based support.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Project management (large environments)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Coordinate testing across teams.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Timeline, risk tracking, cross‑team communications.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2&gt;Readiness checklist before real‑world testing&lt;/H2&gt;
&lt;P&gt;Real‑world testing often produces limited value when it focuses primarily on Microsoft Intune enrollment rather than operational use. Enrollment is a starting point, but the goal of this stage is to confirm that critical frontline workflows function reliably end‑to‑end in production‑like conditions.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-style-solid" border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Readiness area&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Questions to consider before real‑world testing&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Licensing&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Do you have the correct Intune licenses for the devices or users in scope?&lt;BR /&gt;Are any add-ons needed?&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Identity&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Is Microsoft Entra configured for your enrollment approach?&lt;BR /&gt;Are Conditional Access policies ready for real‑world testing?&lt;/P&gt;
&lt;P&gt;For shared devices, what sign‑in model will you use?&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Stakeholder alignment&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Who owns the success criteria? &lt;BR /&gt;Who approves the testing scope and change window? &lt;BR /&gt;Who funds required accessories or device refresh?&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Operational readiness&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Who provides day‑to‑day support for test devices? &lt;BR /&gt;What is the escalation path for a broken critical workflow? &lt;BR /&gt;What is the rollback or recovery plan?&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Device lifecycle decisions&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Will you test by migrating existing devices as‑is, replacing end‑of‑life devices first, or using testing to define the future standard?&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;OEM and ecosystem readiness&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Are the devices still supported by the OEM? &lt;BR /&gt;Are required peripherals supported?&lt;BR /&gt;Do rugged or certified requirements limit device options?&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;Lack of clear ownership for testing success criteria is a common cause of inconclusive pilots, particularly where operational workflows span multiple teams.&lt;/P&gt;
&lt;H2&gt;Decide what your real‑world testing must validate&lt;/H2&gt;
&lt;P&gt;Real‑world testing should validate high‑value operational outcomes, ensuring:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Critical workflows function end‑to‑end&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Scanning, inventory, delivery confirmation, POS, etc.&lt;/LI&gt;
&lt;LI&gt;Session transitions match shift patterns&lt;/LI&gt;
&lt;LI&gt;Offline or degraded‑mode behavior works as expected (where relevant)&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Security works without disrupting operations&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Compliance and Conditional Access do not block legitimate frontline activity&lt;/LI&gt;
&lt;LI&gt;Wipe and recovery processes are realistic for shared devices&lt;/LI&gt;
&lt;LI&gt;App protection controls align with user experience&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Supportability is operationally viable&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Device reset and re‑enroll processes are documented&lt;/LI&gt;
&lt;LI&gt;Troubleshooting steps are known and repeatable&lt;/LI&gt;
&lt;LI&gt;Escalation paths exist for frontline‑impacting incidents&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Representative device scenarios are included&lt;/STRONG&gt;&lt;BR /&gt;Include the different frontline scenarios identified during discovery, such as:
&lt;UL&gt;
&lt;LI&gt;Shared vs assigned devices&lt;/LI&gt;
&lt;LI&gt;Different OEM models or OS versions&lt;/LI&gt;
&lt;LI&gt;Sites with known connectivity constraints&lt;/LI&gt;
&lt;LI&gt;Common peripherals that may introduce migration risk (for example, scanners or printers)&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Plan for future standardization (without delaying testing)&lt;/H2&gt;
&lt;P&gt;You may need to begin real‑world testing using the environment you have today. However, this stage can also be used to identify patterns that may shape future procurement and standardization decisions without delaying validation activities.&lt;/P&gt;
&lt;P&gt;Practical prompts to add to your planning:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;If you could reset procurement going forward, would you reduce OEM or device model sprawl?&lt;/LI&gt;
&lt;LI&gt;What might your target “approved device set” look like for the next refresh cycle?&lt;/LI&gt;
&lt;LI&gt;Which procurement models could support consistent enrollment, warranty coverage, and access to spares across shifts?&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Standardization doesn’t need to be a prerequisite for real‑world testing, but it can become a valuable outcome of the migration effort over time.&lt;/P&gt;
&lt;H2&gt;Moving from assessment to real‑world testing&lt;/H2&gt;
&lt;P&gt;After you’ve aligned stakeholders, clarified dependencies, and defined what your real‑world testing must validate, you’re ready to move from assessment to limited operational testing with representative users and devices.&lt;/P&gt;
&lt;P&gt;The key takeaway is this:&amp;nbsp;&lt;STRONG&gt;discovery tells you what’s real, but readiness determines whether you can safely test it in live operational conditions.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;As always, we welcome your feedback and experience. If you’ve already tested frontline workflows in operational conditions, what advice would you give organizations preparing for this stage? Share your thoughts in the comments below or reach out to us on X&amp;nbsp;&lt;A class="lia-external-url" href="https://aka.ms/IntuneSuppTeam" target="_blank" rel="noopener"&gt;@IntuneSuppTeam&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Explore the&amp;nbsp;&lt;A class="lia-external-url" href="https://aka.ms/Intune/FLW-home" target="_blank" rel="noopener"&gt;From the frontlines: Frontline worker management with Microsoft Intune&lt;/A&gt; series for additional guidance on managing frontline workers and devices.&lt;/P&gt;</description>
      <pubDate>Fri, 01 May 2026 19:37:13 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/intune-customer-success/migrating-frontline-mobile-devices-aligning-stakeholders-before/ba-p/4516511</guid>
      <dc:creator>Intune_Support_Team</dc:creator>
      <dc:date>2026-05-01T19:37:13Z</dc:date>
    </item>
    <item>
      <title>Title Plan Update - May 1, 2026</title>
      <link>https://techcommunity.microsoft.com/t5/ilt-communications-blog/title-plan-update-may-1-2026/ba-p/4516500</link>
      <description>&lt;img /&gt;
&lt;H4&gt;📁&lt;STRONG&gt;May 1, 2026 - Title Plan Now Available&lt;/STRONG&gt;&lt;/H4&gt;
&lt;H4&gt;Access the latest Instructor-Led Training (ILT) updates anytime at&amp;nbsp;&lt;A href="http://aka.ms/Courseware_Title_Plan" target="_blank" rel="noopener"&gt;http://aka.ms/Courseware_Title_Plan&lt;/A&gt;&amp;nbsp;to ensure you're always working from the most current version.&lt;/H4&gt;
&lt;P&gt;📌&amp;nbsp;&lt;EM&gt;Reminder: To help you stay informed more quickly and consistently, we’ve moved to a weekly publishing cadence for the title plan. This means each update may include fewer changes but ensures you’re always up to date.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 01 May 2026 18:27:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/ilt-communications-blog/title-plan-update-may-1-2026/ba-p/4516500</guid>
      <dc:creator>anbordianu</dc:creator>
      <dc:date>2026-05-01T18:27:44Z</dc:date>
    </item>
    <item>
      <title>Databricks Lakebase: The operational database for AI agents and apps</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-mission-critical-blog/databricks-lakebase-the-operational-database-for-ai-agents-and/ba-p/4516497</link>
      <description>&lt;H2&gt;Understanding the Evolution: From Lakehouse to Lakebase&lt;/H2&gt;
&lt;P&gt;The modern data landscape has long been characterized by a fundamental schism: Online Transaction Processing (OLTP) systems, designed for high-frequency, low-latency transactions in applications, and Online Analytical Processing (OLAP) systems, optimized for complex queries, reporting, and machine learning on vast datasets. This division historically necessitated intricate and often fragile Extract, Transform, Load (ETL) processes to move and synchronize data between these disparate environments, leading to increased complexity, data duplication, and governance challenges.&lt;/P&gt;
&lt;P&gt;Databricks Lakehouse architecture emerged to unify data warehousing and data lake functionalities for analytical workloads, offering the flexibility of data lakes with the performance and governance of data warehouses. However, a critical piece remained: native, high-performance OLTP capabilities directly within this unified environment. This is where Databricks Lakebase enters the picture, representing a significant evolution by bringing fully managed PostgreSQL OLTP capabilities directly into the Databricks Data Intelligence Platform.&lt;/P&gt;
&lt;P&gt;Lakebase addresses the need for a single, governed platform that can seamlessly handle both transactional and analytical workloads, thereby simplifying data architectures, reducing operational overhead, and accelerating the development of real-time applications and AI agents. By integrating OLTP at the core of the lakehouse, Databricks aims to create a truly unified data and AI platform.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Figure 1: Visualizing the architectural shift: Lakebase integrates seamlessly within the Databricks Lakehouse ecosystem.&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;The Architectural Innovation: Separation of Compute and Storage&lt;/H3&gt;
&lt;P&gt;At the heart of Databricks Lakebase's efficiency and scalability lies its innovative architecture, which fundamentally separates compute from storage. Unlike traditional monolithic databases where these components are tightly coupled, Lakebase decouples them, offering distinct advantages:&lt;/P&gt;
&lt;H4&gt;Elastic Scaling and Cost Efficiency&lt;/H4&gt;
&lt;P&gt;The transactional compute layer in Lakebase is serverless and ephemeral, meaning it can scale up or down dynamically based on demand. This includes the ability to scale to zero during periods of inactivity, significantly optimizing cost by ensuring you only pay for the compute resources actively used. Data, on the other hand, is persisted directly into low-cost, durable cloud object storage (e.g., Azure Blob Storage) using open formats like Delta Lake. This design not only reduces storage costs but also prevents vendor lock-in and allows other engines within the Databricks platform to access the data directly.&lt;/P&gt;
&lt;H4&gt;Open Data Formats and Interoperability&lt;/H4&gt;
&lt;P&gt;By storing data in open formats, Lakebase ensures high interoperability within the Databricks ecosystem and beyond. This approach eliminates the need for complex and time-consuming ETL processes to move transactional data to the analytical layer, as the data is inherently accessible to both. This foundational integration streamlines data pipelines and provides a unified view of data across all workloads.&lt;/P&gt;
&lt;H3&gt;Key Technical Capabilities and Features&lt;/H3&gt;
&lt;P&gt;Databricks Lakebase offers a rich set of features that make it a compelling solution for modern data architectures:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;PostgreSQL Compatibility:&lt;/STRONG&gt;&amp;nbsp;Lakebase provides full PostgreSQL semantics, including ACID transactions, indexing capabilities, and support for standard JDBC/psql clients. This familiarity allows developers to leverage existing skills and tools, minimizing the learning curve.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Fully Managed Service:&lt;/STRONG&gt;&amp;nbsp;Databricks handles the complexities of provisioning, scaling, patching, backups, and ensuring high availability, freeing up development teams to focus on application logic rather than database administration.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Managed Change Data Capture (CDC):&lt;/STRONG&gt;&amp;nbsp;A crucial feature, managed CDC ensures that operational data in Lakebase remains synchronized with Delta Lake tables for analytical consumption. This continuous synchronization is vital for keeping BI models and AI applications updated with the freshest transactional data.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Autoscaling (Lakebase Autoscaling):&lt;/STRONG&gt;&amp;nbsp;The latest iteration of Lakebase features intelligent autoscaling of compute resources. It dynamically adjusts Compute Units (CU) based on various metrics like CPU load, memory usage, and working set size, preventing performance bottlenecks and out-of-memory (OOM) issues. It also supports branching and instant restore, enhancing developer agility and operational resilience.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Databricks Apps Synergy:&lt;/STRONG&gt;&amp;nbsp;Lakebase is designed to serve as the transactional backend for Databricks Apps, enabling the creation and deployment of interactive applications directly on the platform, leveraging governed data and powerful analytics.&lt;/LI&gt;
&lt;/UL&gt;
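To make the autoscaling behavior concrete, the sizing decision it describes can be sketched as a simple heuristic. This is an illustrative model only, not Databricks' actual algorithm; the 80% utilization target, the 16 GB-per-CU figure, and the function shape are all assumptions:

```python
import math

def recommend_cu(current_cu, cpu_load, memory_frac, working_set_gb,
                 mem_per_cu_gb=16, target_util=0.8, min_cu=1, max_cu=8):
    """Toy sketch of a CU-sizing heuristic (NOT the real Lakebase
    algorithm): keep the working set resident in memory to avoid OOM,
    and keep the busiest resource under a target utilization."""
    # CUs needed so the working set fits in memory (hypothetical ratio).
    cu_for_memory = math.ceil(working_set_gb / mem_per_cu_gb)
    # CUs needed to bring CPU or memory pressure back under target_util.
    pressure = max(cpu_load, memory_frac)
    cu_for_load = math.ceil(current_cu * pressure / target_util)
    # Respect the configured floor and ceiling.
    return min(max(cu_for_memory, cu_for_load, min_cu), max_cu)
```

For example, an instance at 2 CU with 95% CPU load would be scaled up, while a mostly idle instance holding a 100 GB working set would be sized by memory rather than load.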
&lt;H2&gt;Governance, Security, and Cost Efficiency with Lakebase&lt;/H2&gt;
&lt;P&gt;Adopting Databricks Lakebase brings significant benefits in terms of data governance, security, and overall cost management, aligning with the principles of a modern data intelligence platform.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Figure 2: Reverse ETL with Lakebase simplifies data activation for operational analytics.&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;Unified Governance through Unity Catalog&lt;/H3&gt;
&lt;P&gt;One of Lakebase's most powerful integrations is with Unity Catalog, Databricks' unified governance solution. This integration provides a single pane of glass for managing data assets across the entire Databricks Data Intelligence Platform. Lakebase databases can be registered as catalogs within Unity Catalog, extending its robust governance framework to operational data. This means:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Consistent Access Control:&lt;/STRONG&gt;&amp;nbsp;Policies defined for your lakehouse data automatically apply to Lakebase, ensuring uniform security and access management across both operational and analytical workloads.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Centralized Auditing and Lineage:&lt;/STRONG&gt;&amp;nbsp;Unity Catalog provides comprehensive auditing capabilities and data lineage tracking for Lakebase assets, simplifying compliance and offering transparent insights into data flows.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Simplified Security Management:&lt;/STRONG&gt;&amp;nbsp;By unifying governance, organizations can reduce the complexity of managing security policies across disparate systems, enhancing overall data security posture.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Robust Security and Data Protection&lt;/H3&gt;
&lt;P&gt;Lakebase is designed with enterprise-grade security in mind, leveraging existing cloud infrastructure and Databricks' security features:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Network Integration:&lt;/STRONG&gt;&amp;nbsp;It integrates seamlessly with cloud networking services (e.g., Azure Private Link) for secure, private connectivity.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Identity Management:&lt;/STRONG&gt;&amp;nbsp;Integration with enterprise identity providers (e.g., Microsoft Entra ID) ensures secure authentication and authorization.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Data Encryption:&lt;/STRONG&gt;&amp;nbsp;Data is encrypted at rest and in transit, protecting sensitive information throughout its lifecycle.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;High Availability and Disaster Recovery:&lt;/STRONG&gt;&amp;nbsp;As a fully managed service, Lakebase inherently provides features for high availability and point-in-time recovery, ensuring operational resilience.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Optimized Cost Efficiency&lt;/H3&gt;
&lt;P&gt;The architectural separation of compute and storage, coupled with advanced autoscaling capabilities, contributes to significant cost savings compared to traditional database architectures:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Pay-as-you-go Compute:&lt;/STRONG&gt;&amp;nbsp;With serverless and autoscaling compute, you only pay for the resources consumed during active processing, with the ability to scale down to zero when idle.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Low-Cost Storage:&lt;/STRONG&gt;&amp;nbsp;Leveraging economical cloud object storage for data persistence drastically reduces storage costs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Reduced ETL Overhead:&lt;/STRONG&gt;&amp;nbsp;By eliminating the need for complex ETL pipelines between OLTP and OLAP, organizations save on infrastructure, development, and maintenance costs associated with data movement and transformation. This can lead to reported savings of 40-50% in many environments.&lt;/LI&gt;
&lt;/UL&gt;
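The scale-to-zero arithmetic behind the pay-as-you-go point can be illustrated with a toy calculation. The per-CU-hour rate and the instance size below are hypothetical placeholders, not Databricks pricing:

```python
def monthly_compute_cost(rate_per_cu_hour, cu, active_hours,
                         scale_to_zero=True):
    """Toy cost model (hypothetical rates, not actual pricing):
    with scale-to-zero you pay only for active hours; an always-on
    provisioned database bills for every hour in the month."""
    hours_in_month = 730  # average hours per month
    billable = active_hours if scale_to_zero else hours_in_month
    return rate_per_cu_hour * cu * billable

# A dev instance active 8 hours per weekday (~176 h/month) at a
# hypothetical $0.50 per CU-hour:
serverless = monthly_compute_cost(0.50, 2, 176)                        # 176.0
provisioned = monthly_compute_cost(0.50, 2, 176, scale_to_zero=False)  # 730.0
```

Under these assumptions the idle hours dominate the always-on bill, which is where the headline savings for bursty or development workloads come from.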
&lt;H2&gt;Lakebase in Action: Powering Real-Time Applications and AI Agents&lt;/H2&gt;
&lt;P&gt;Databricks Lakebase opens up new possibilities for building intelligent, data-driven applications that require both transactional capabilities and deep analytical insights. Its unified approach simplifies development and accelerates time-to-market for innovative solutions.&lt;/P&gt;
&lt;H3&gt;Real-World Use Cases&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Personalized Recommendations:&lt;/STRONG&gt;&amp;nbsp;Build real-time recommendation engines that leverage fresh transactional data from Lakebase to provide immediate and highly relevant suggestions to users.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Customer Segmentation and Real-Time Updates:&lt;/STRONG&gt;&amp;nbsp;Maintain and update customer profiles and segments in real-time, enabling personalized experiences and targeted marketing campaigns.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Feature Stores for Machine Learning:&lt;/STRONG&gt;&amp;nbsp;Utilize Lakebase as a feature store to serve low-latency features to AI models, ensuring that predictions and decisions are based on the most current data.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Stateful AI Agents:&lt;/STRONG&gt;&amp;nbsp;Develop AI agents that can maintain conversational state and interact dynamically with users, using Lakebase as a reliable backend for transactional data.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Order Processing Systems:&lt;/STRONG&gt;&amp;nbsp;Implement operational applications that require high-frequency reads, writes, and updates, such as order management or inventory systems, directly on the Databricks platform.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Interactive Workflow Tools:&lt;/STRONG&gt;&amp;nbsp;Create interactive data applications and dashboards that allow users to both view analytical insights and perform transactional updates within the same environment.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;A Practical Code Snippet&lt;/H3&gt;
&lt;P&gt;Developing with Lakebase feels familiar due to its PostgreSQL compatibility. Here’s a simple example demonstrating basic CRUD (Create, Read, Update, Delete) operations within a Lakebase table:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Create a schema for your application

CREATE SCHEMA app AUTHORIZATION CURRENT_USER;

-- Create a table to store session data for an AI agent

CREATE TABLE app.sessions (

  session_id UUID PRIMARY KEY,

  user_id TEXT NOT NULL,

  state JSONB NOT NULL,

  created_at TIMESTAMPTZ DEFAULT now(),

  updated_at TIMESTAMPTZ

);


-- Create an index to optimize queries on agent status

CREATE INDEX ON app.sessions ((state-&amp;gt;&amp;gt;'agentStatus'));



-- Insert a new session record

INSERT INTO app.sessions(session_id, user_id, state)

VALUES (gen_random_uuid(), 'u-123', '{"agentStatus":"active","score":0.82}');


-- Update an existing session's state

UPDATE app.sessions SET state = jsonb_set(state, '{score}', '0.91'::jsonb), updated_at = now()

WHERE user_id='u-123';


-- Query active sessions

SELECT user_id, state-&amp;gt;&amp;gt;'score' as current_score FROM app.sessions WHERE (state-&amp;gt;&amp;gt;'agentStatus') = 'active';&lt;/LI-CODE&gt;
&lt;P&gt;This SQL snippet showcases how developers can interact with Lakebase using standard PostgreSQL syntax, enabling rapid application development within the Databricks environment.&lt;/P&gt;
&lt;H2&gt;The Lakebase Advantage: Performance and Reliability&lt;/H2&gt;
&lt;P&gt;Beyond its unified architecture, Lakebase is engineered for predictable performance and robust reliability, essential for mission-critical operational applications.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Radar chart: Lakebase compared with traditional OLTP systems across key attributes.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The radar chart above provides an opinionated comparison of Databricks Lakebase against traditional OLTP systems across several key attributes. Lakebase demonstrates superior performance predictability, dynamic scalability, cost efficiency, and ease of management, coupled with strong data governance due to its integration with Unity Catalog. Traditional OLTP systems, while effective for their specific purposes, often score lower in these cloud-native, unified data platform metrics.&lt;/P&gt;
&lt;H3&gt;Reliability Features for Business Continuity&lt;/H3&gt;
&lt;P&gt;Lakebase integrates several critical reliability features that ensure business continuity and data integrity:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Branching:&lt;/STRONG&gt;&amp;nbsp;This feature allows developers to create isolated, production-like environments for testing changes without affecting the main operational database. It promotes safer development practices and faster iteration cycles.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Instant Restore and Point-in-Time Recovery (PITR):&lt;/STRONG&gt;&amp;nbsp;In the event of data corruption or accidental deletion, Lakebase enables quick restoration to a previous state, minimizing downtime and ensuring data resilience.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;High Availability:&lt;/STRONG&gt;&amp;nbsp;As a managed service, Lakebase is designed for high availability, with automated failover mechanisms and robust infrastructure ensuring continuous operation.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Validation and Troubleshooting: Ensuring a Smooth Lakebase Experience&lt;/H2&gt;
&lt;P&gt;Successful implementation and ongoing operation of Databricks Lakebase rely on proper validation and an understanding of common troubleshooting steps. This section provides a framework for ensuring your Lakebase deployment meets performance and reliability expectations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;div data-video-id="https://youtu.be/UQynsu6qklw?si=v-k54HKAoLxaBiLN/1777658794119" data-video-remote-vid="https://youtu.be/UQynsu6qklw?si=v-k54HKAoLxaBiLN/1777658794119" class="lia-video-container lia-media-is-center lia-media-size-large"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FUQynsu6qklw%3Ffeature%3Doembed&amp;amp;display_name=YouTube&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DUQynsu6qklw&amp;amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FUQynsu6qklw%2Fhqdefault.jpg&amp;amp;type=text%2Fhtml&amp;amp;schema=youtube" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;P&gt;&lt;EM&gt;An introductory video to Lakebase, explaining its core functionality and benefits for data apps and AI agents.&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;Key Validation Steps&lt;/H3&gt;
&lt;P&gt;After provisioning and configuring your Lakebase instance, it's crucial to perform a series of validation tests:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Connectivity Verification:&lt;/STRONG&gt;&amp;nbsp;Confirm successful connections from your applications or development tools (e.g., psql, JDBC clients) to the Lakebase instance. Ensure that Unity Catalog registration is visible and properly configured for governance.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Performance Baseline:&lt;/STRONG&gt;&amp;nbsp;Conduct baseline QPS (Queries Per Second) tests and monitor latency under expected load conditions. Validate that autoscaling events occur as anticipated and that performance targets are met.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Data Synchronization (CDC):&lt;/STRONG&gt;&amp;nbsp;Test the end-to-end data flow by inserting/updating records in Lakebase and verifying their timely appearance in Delta Lake tables via managed CDC. If reverse synchronization (Delta to Lakebase) is configured, validate that as well.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Governance and Security Checks:&lt;/STRONG&gt;&amp;nbsp;Confirm that Unity Catalog permissions are correctly enforced for Lakebase assets and that audit logs accurately reflect data access and modification events. Verify network security configurations (e.g., Private Link) are functioning as intended.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Common Troubleshooting Scenarios&lt;/H3&gt;
&lt;P&gt;While Lakebase is designed for stability, understanding potential issues and their resolutions is key to efficient operation:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table style="width: 1078px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Problem Area&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Symptom&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Potential Cause(s)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Troubleshooting Step(s)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Performance&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;High latency, slow queries, autoscaling not triggering as expected.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Inefficient queries, missing indexes, insufficient compute resources, working set exceeding memory.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Inspect query plans, add appropriate indexes, monitor CU utilization, review autoscaling logs, consider increasing initial compute capacity if persistently underperforming.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Data Sync (CDC)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Stale data in Delta Lake, sync job failures, data inconsistencies.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Incorrect Unity Catalog permissions, CDC configuration errors, network issues, regional feature limitations.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Verify Unity Catalog access for CDC process, check CDC job logs for errors, confirm network connectivity between Lakebase and Delta Lake, consult Databricks documentation for regional CDC availability.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Connectivity&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Unable to connect from application, authentication failures.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Incorrect connection strings, firewall rules blocking access, misconfigured private endpoints, invalid credentials/tokens.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Double-check connection parameters, review network security group (NSG) and firewall rules, validate Private Link configuration, ensure correct user/service principal credentials.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Governance&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Unauthorized access, unexpected data visibility, audit log discrepancies.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Incorrect Unity Catalog access policies, schema mismatches, misconfigured external locations.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Review and refine Unity Catalog grants on Lakebase catalogs and schemas, verify external location configurations, ensure consistent data object naming conventions.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Feature Limitations&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Specific PostgreSQL features or extensions not working.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Managed environment restrictions, unsupported extensions.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Consult Databricks documentation for supported PostgreSQL versions and extensions in Lakebase. Adapt application logic to use supported alternatives if necessary.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;By proactively monitoring and understanding these aspects, Cloud Solution Architects can ensure robust and efficient operation of Lakebase within their Databricks ecosystem.&lt;/P&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Databricks Lakebase represents a pivotal advancement in data architecture, fundamentally reshaping how organizations approach operational and analytical workloads. By seamlessly integrating a fully managed PostgreSQL OLTP engine directly into the Databricks Data Intelligence Platform, Lakebase addresses the long-standing challenge of data fragmentation. This unification not only simplifies complex ETL processes and reduces operational overhead but also extends robust governance and security through Unity Catalog across the entire data estate. The innovative separation of compute and storage, coupled with intelligent autoscaling, delivers unparalleled cost efficiency and dynamic performance. For Cloud Solution Architects, Lakebase offers a compelling path to building scalable, real-time applications and sophisticated AI agents, leveraging fresh transactional data alongside comprehensive analytical insights—all within a single, consistent, and highly performant environment. This strategic evolution of the lakehouse architecture empowers enterprises to unlock new levels of agility, innovation, and data-driven decision-making.&lt;/P&gt;</description>
      <pubDate>Fri, 01 May 2026 18:23:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-mission-critical-blog/databricks-lakebase-the-operational-database-for-ai-agents-and/ba-p/4516497</guid>
      <dc:creator>anishekkamal</dc:creator>
      <dc:date>2026-05-01T18:23:22Z</dc:date>
    </item>
  </channel>
</rss>

