The Agent Era Has Already Arrived in Healthcare. Are You Ready to Govern It?
Start here. Answer honestly. Right now, how many AI agents are running inside your organization? Who built them? Which patient data, claims information, or proprietary research are they configured to access? If your CISO walked into your office tomorrow and asked for a complete inventory of every agent in your enterprise, including each one's owner, the systems it is permitted to access, and the policies that govern how it operates, could you produce that inventory before lunch?

When the analyst who built that clinical summarization agent moves to a new role next quarter, what happens to the agent? Does its access continue? Does anyone notice? If a regulator opened an audit tomorrow, could you prove that every AI agent operating in your environment is subject to the same lifecycle controls, identity standards, and data protection policies you apply to your human workforce? Could you disable a compromised agent enterprise-wide with a single click, the same way you would revoke a lost access credential?

If those questions made you hesitate, you are not alone. Almost no healthcare or life sciences organization can answer them confidently today. And that gap is exactly where the next decade of risk, and the next decade of competitive advantage, will be decided.

The quiet crisis nobody talks about yet

Healthcare and life sciences leaders are caught in a paradox. You need AI to survive the operational pressures squeezing your organization from every direction. Physician burnout is at crisis levels, with 45.2% of US physicians reporting symptoms in recent Mayo Clinic research. Revenue cycle complexity continues to climb, and McKinsey now estimates that the cost to collect consumes 30 to 60 percent of net patient revenue at many provider organizations. Prior authorization backlogs delay care. Clinical trial timelines stretch into years. Documentation burden eats hours that belong to patients.

So you started piloting Microsoft 365 Copilot.
You experimented with agents in Copilot Studio. Maybe a clinical team built an agent to draft discharge summaries. A revenue cycle group spun up an agent to triage denials. A medical affairs team built one to comb through literature. Each one delivered value. Each one was approved on its own merits.

And then a quiet thing happened. You lost track of how many agents you have.

According to KPMG's AI Quarterly Pulse Survey, 88 percent of organizations are now exploring or piloting AI agents. IDC projects that 1.3 billion agents will be in operation by 2028. Inside your own walls, the number is climbing fast. Each new agent is a digital identity that authenticates into your environment, accesses your data, and executes work on behalf of your business. Most have no formal owner. Most have no documented access scope. Most have no decommissioning plan. Most have never been reviewed by Compliance.

Microsoft's 2024 Data Security Index found that 84 percent of organizations lack confidence in their AI data security posture, and 40 percent have already experienced an AI-related data security incident. That is not a future problem. That is a now problem. If shadow IT was the defining governance challenge of the last decade, agent sprawl is the defining challenge of this one. And in healthcare and life sciences, where ePHI, member PII, and proprietary clinical trial data are at stake, the consequences are not theoretical. They are existential.

The reframe that changes everything

Here is the counterintuitive truth that separates HLS organizations that scale AI from those stuck in pilot purgatory. Governance is not the brake on AI adoption. Governance is the accelerator. When security, identity, and agent oversight are engineered in from day one, your teams stop tiptoeing. They build with confidence because the guardrails are real. They expand into clinical use cases because Compliance trusts the foundation.
They scale wall-to-wall because IT can prove every agent is accounted for. The organizations that lead with trust end up moving faster in the long run, not slower. This is the bet behind Microsoft Agent 365 and Microsoft 365 E7.

What Agent 365 and Microsoft 365 E7 actually are

Microsoft 365 E7, announced March 6, 2026 and now generally available, is the Frontier Suite. It is Microsoft's answer to a single question that every healthcare CIO, CISO, and COO is wrestling with: how do you run AI safely, at scale, across an entire organization? E7 is not another SKU on top of your existing stack. It is one cohesive platform that brings together four essential capabilities:

Microsoft 365 E5 for your enterprise productivity, collaboration, and security foundation, including Microsoft Defender, Microsoft Purview, and Microsoft Intune.

Microsoft 365 Copilot for AI grounded in your organizational data through Work IQ, embedded in the flow of work for clinicians, researchers, operations teams, and administrators.

Microsoft Entra Suite for identity governance, Conditional Access, and Zero Trust network access, extended consistently across users, applications, and AI agents.

Microsoft Agent 365 as the centralized control plane to observe, govern, and secure every AI agent, whether built by Microsoft, your internal teams, or external partners.

Agent 365 is also available as a standalone capability. But the magic happens when it works alongside the rest of E7, because that is where AI, identity, security, and governance stop being separate disciplines and become one operating system for the agentic era.

The mental model that unlocks everything: agents are first-class digital identities

Here is the simplest way to understand what Agent 365 does. Microsoft 365 governs your enterprise identities. Agent 365 governs your agent identities. The same control plane disciplines apply to both.
Think about the rigor you apply to any privileged identity in your environment, whether a service account, an API integration, or a third-party application connector. You issue it a unique identity in Microsoft Entra. You assign a human owner who is accountable. You scope its access to least privilege. You apply DLP, sensitivity labels, and Conditional Access. You monitor for anomalous behavior. You have a documented decommissioning path. Identities that no one watches over become identities that get exploited. Now ask yourself how the last AI agent in your environment was created. The honest answer at most organizations: someone opened Copilot Studio, pointed it at a SharePoint library of clinical protocols, gave it a name, and moved on. No documented owner. No access review. No retirement plan. Compliance was never consulted. You would never stand up a privileged service account that way. Yet that is exactly how most organizations are standing up the fastest-growing class of digital identities in their environment. Agent 365 closes that gap by extending the identity, security, and lifecycle controls you already trust for users and applications so they apply with the same rigor to AI agents. Every agent receives a unique Entra Agent ID, a first-class identity in Azure AD with the same governance primitives as any other privileged identity. Every agent has a designated human owner who is accountable for its scope and behavior. Access is granted explicitly through Conditional Access and policy templates, so each agent operates only against the resources its purpose requires. Microsoft Purview DLP and sensitivity labels govern which data the agent is permitted to read, generate, or share. Microsoft Defender monitors agent activity for anomalies and surfaces alerts the same way it does for any other identity-driven risk. 
Lifecycle rules flag or auto-retire agents that are dormant, orphaned, or risky, eliminating the unowned automations that quietly accumulate in every enterprise. This is not metaphor. It is the actual architecture. The fastest path to governing agents is to extend the identity infrastructure you already trust.

The three pillars of Agent 365: Observe, Govern, Secure

Pillar 1: Observe. Know what is actually happening.

You cannot govern what you cannot see. The first job of Agent 365 is to give you complete, continuous visibility into every AI agent operating in your environment. The Agent Registry is the single authoritative inventory of every agent, whether built by Microsoft, custom developed by your team, deployed by a partner, or discovered as a shadow agent operating without oversight. Each entry shows the owner, purpose, capabilities, lifecycle status, and business context. Agent Analytics tracks adoption, quality, performance, and business impact. Agent Map visualizes how agents connect with other agents, people, tools, and data sources, surfacing dependencies and risk concentrations you would never spot in a spreadsheet. Real-time monitoring flows directly into Microsoft Defender, so unusual agent behavior generates alerts the same way unusual user behavior does today.

For a health system CISO, that means finally being able to answer the question: which agents are touching ePHI, and is every one of them authorized? For a life sciences compliance officer, it means audit-ready visibility into every AI system operating across R&D, regulatory affairs, and commercial. For a payer operations leader, it means knowing which claims processing agents are actually delivering accuracy and throughput, and which are quietly underperforming.

Pillar 2: Govern. Set the rules. Control the lifecycle.

Visibility is the start. Control is what turns visibility into outcomes.
Agent 365 ensures that every agent is approved, compliant, and accountable from creation through retirement. IT-led onboarding workflows make sure each agent launches with the right identity, access, and ownership before it ever touches data. Policy templates enforce data handling, permission, and usage rules consistently from day one through Defender, Entra, and Purview. Rules-based agent management gives admins an automated If This Then That interface. If an agent is unused for 90 days, auto-retire it. If an agent is flagged as risky, block it and alert the security operations team. No human in the loop required for the routine cases, full alerting and override for the exceptions. Ownership enforcement requires every agent to have a designated human owner. When that owner leaves the organization, the platform flags the orphaned agent for bulk reassignment, so nothing operates without clear accountability. The Tools Gateway brokers and audits tool access for agents, enabling least privilege at the action level, not just the identity level.

For HLS specifically, that translates to outcomes you can take to your board. A hospital CIO can ensure any agent touching Epic or Cerner goes through standardized approval. A pharma IT director can enforce that clinical trial matching agents only touch de-identified data unless elevated permissions are explicitly granted and documented. A payer compliance team can automatically retire agents tied to a completed open enrollment campaign instead of letting them silently expand the attack surface.

Pillar 3: Secure. Protect agents and data with the stack you already trust.

The final pillar is what makes Agent 365 production grade for healthcare and life sciences. Security and compliance are not bolted on. They are the same proven Microsoft security stack you already run for your users, extended natively to agents.
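Before getting into the specifics of that stack, the If This Then That rules described under the Govern pillar reduce to a small policy loop. Here is a minimal, hypothetical sketch of that logic in plain Python; the field names and status values are illustrative, not the actual Agent 365 schema or API:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class AgentRecord:
    """Illustrative stand-in for an entry in an agent registry."""
    name: str
    owner: Optional[str]   # designated human owner, or None if orphaned
    last_used: date
    risk_flagged: bool
    status: str = "active"

def apply_lifecycle_rules(agent: AgentRecord, today: date) -> AgentRecord:
    # If an agent is flagged as risky, block it (and, in practice, alert the SOC).
    if agent.risk_flagged:
        agent.status = "blocked"
    # If an agent is unused for 90 days, auto-retire it.
    elif today - agent.last_used > timedelta(days=90):
        agent.status = "retired"
    # If an agent has no accountable owner, flag it for bulk reassignment.
    elif agent.owner is None:
        agent.status = "needs_reassignment"
    return agent
```

In the real platform these rules are admin-configured policies with alerting and override paths; the point of the sketch is only that the routine cases need no human in the loop, while exceptions surface for review.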
Microsoft Purview, your data security and compliance backbone: Data Security Posture Management for AI gives visibility into how agents interact with sensitive data and detects risky usage patterns. Data Loss Prevention stops agents from accessing or processing files labeled Highly Confidential, even when a user prompts them to. Sensitivity labels are inherited automatically by agent outputs, governing how data is viewed, extracted, or shared downstream. Insider Risk Management detects risky behavior by users interacting with agents, such as unusual prompt patterns or excessive access to sensitive data. Communication Compliance monitors AI-driven interactions for regulatory or ethical violations and unauthorized disclosures. eDiscovery and Audit logs every agent interaction, giving legal, compliance, and IT teams the transparency required for HIPAA, GDPR, and FDA 21 CFR Part 11. Oversharing Assessments run weekly checks for sensitive data exposure across SharePoint sites and agent access patterns.

Microsoft Entra, your identity control plane: Entra Agent ID gives every agent a unique identity in Azure AD, so Conditional Access, role-based access, and risk-based policies apply individually. Conditional Access for agents enforces policies like only allow this prior authorization agent to access claims data from approved devices and locations during business hours. Identity Governance provides access packages for agents with reduced-scope permissions and least-privilege defaults. Block at Scale lets you instantly disable all high-risk agents from Entra in a single action.

Microsoft Defender, your threat protection layer: Security Posture Management identifies and remediates agent misconfigurations, such as agents running with no authentication. Threat Detection and Blocking monitors suspicious agent activity, generates alerts, and blocks unauthorized tool invocations.
Threat Investigation and Hunting collects unified agent observability logs so SOC teams can forensically trace every action an agent took. One-Click Kill Switch instantly disables any agent and surfaces the complete audit trail of every action it took before being stopped.

For a hospital security operations team, that means the same DLP policies protecting patient records in email and Teams now protect agents that summarize clinical notes. For a life sciences data protection officer, it means agents accessing proprietary compound data respect the same sensitivity labels as human researchers. For a payer CISO, it means an anomalous claims agent can be killed in seconds, with a complete forensic record of every member record it touched.

Why this only works as an integrated platform

Individual capabilities are useful. Integration is what makes them transformative. Here is the contrast HLS leaders feel today versus what changes the moment E7 lights up.

Without an integrated platform, you operate with: Fragmented tools for identity, security, compliance, and AI, each with its own console and its own gaps. No centralized agent inventory, forcing your IT and security teams to track bots and automations in spreadsheets. Inconsistent policy enforcement across agents, creating compliance gaps every audit team will eventually find. Blind spots where agents access data, invoke tools, or interact with other agents without any oversight. Manual triage when an incident hits, because nothing connects user identity, agent identity, and data classification in one view.

With Microsoft 365 E7, you gain: A Unified Agent Registry providing a single source of truth for every agent, whether Microsoft built, custom developed, partner deployed, or shadow discovered. Entra Agent ID giving each agent a unique identity, so Conditional Access, role-based access, and risk-based policies apply at the individual agent level.
Full lifecycle governance with standardized onboarding, periodic review, ownership transfers, auto-retirement of dormant agents, and structured offboarding. Policy by design, where Purview DLP, sensitivity labels, and compliance rules extend to all agent interactions through pre-built templates applied consistently from day one. One-click disable to instantly freeze any agent, with Defender threat detection extended to agents and full audit trails for forensic investigation. Expanded threat coverage that addresses agent sprawl, overprivileged access, tool misuse, misconfiguration, and inter-agent risk patterns no legacy tool was designed to see. Shared registry and controls that let IT, Security, and Compliance reference the same authoritative inventory across Defender, Entra, and Purview, eliminating the silos that slow incident response.

This is the reason E7 exists as a platform, not a bundle. AI, identity, security, and governance stop being separate disciplines and start operating as one system.

What this is actually worth: the Forrester numbers

Microsoft commissioned Forrester to conduct a Total Economic Impact study of Microsoft 365 Copilot, published in March 2025. The composite organization in that study, modeled on real customer interviews, achieved: 132 percent three-year ROI with payback in under one year. 9 hours saved per Copilot user per month through automation of routine work like drafting, summarizing, and analysis. Up to 2.6 percent top-line revenue lift through better qualified opportunities, improved win rates, and stronger retention in customer-facing teams. 25 percent acceleration in new employee onboarding as new hires ramp faster on summarized institutional knowledge.

Those are the verified numbers. The bigger story for HLS is what they look like when applied to clinical, claims, and research workflows where every reclaimed hour is an hour that goes back to patients, members, or science.
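To see how the 9-hours figure compounds at enterprise scale, here is a quick back-of-envelope calculation. The 10,000-user workforce and the 2,080-hour work year are assumptions chosen for illustration, not figures from the Forrester study:

```python
users = 10_000                  # assumed workforce size for a mid-size health system
hours_saved_per_user_month = 9  # Forrester TEI figure for Microsoft 365 Copilot

# Total hours reclaimed across the workforce in a year
annual_hours = users * hours_saved_per_user_month * 12
print(f"Reclaimed hours per year: {annual_hours:,}")        # 1,080,000

# Rough full-time-equivalent conversion at ~2,080 working hours per year
fte_equivalent = annual_hours / 2_080
print(f"Approximate FTE equivalent: {fte_equivalent:.0f}")  # ~519
```

That is the arithmetic behind the "more than one million reclaimed hours" framing applied to providers later in this post.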
AI is already defending AI

The same agentic capabilities transforming clinical and operational workflows are now embedded in your security stack. Microsoft Security Copilot agents work alongside human analysts inside Defender, Entra, Purview, and Intune, accelerating threat response and absorbing the manual load that today drowns most security operations teams. Independent benchmarks back the impact. In a 162-admin randomized study published in 2025, the Conditional Access Optimization Agent in Microsoft Entra completed configuration tasks 43 percent faster and produced 48 percent more accurate Conditional Access policies than admins working without it. Security triage, alert investigation, and identity hygiene are following the same trajectory. For HLS security teams already stretched thin, that is hours reclaimed every week to focus on the threats that actually matter, with the same Agent 365 governance applying to the security agents themselves. The defenders are governed by the same rules as the workforce they defend.

How HLS organizations are putting Agent 365 to work

Here is how the value shows up across the three biggest HLS segments.

For providers: reclaiming time for care

The challenge: clinicians spend more time on documentation than on patients. Care coordination is fragmented. Burnout is gutting retention.

The strategy: deploy agents that absorb administrative load while Agent 365 ensures every one of them respects ePHI boundaries. Clinical documentation agents integrated with Microsoft Dragon Copilot structure dictation against EHR requirements, apply billing codes, and flag missing elements before submission. Care coordination agents generate care plans, allocate tasks, and surface relevant patient context during multidisciplinary rounds, optimized for HL7 FHIR interoperability. Patient intake and scheduling agents built in Copilot Studio handle appointment booking, reminders, eligibility verification, and referral management.
Handoff and shift summary agents pull from multiple systems to generate complete handoff summaries for nurses and physicians transitioning between shifts, reducing communication gaps that drive adverse events.

The aha moment: applied across a 10,000-employee health system, nine hours per user per month is more than one million reclaimed hours a year. That is the equivalent of hundreds of full-time clinicians, returned to direct patient care, with every agent governed under the same Conditional Access and DLP policies your IT team already manages today.

For payers: transforming revenue cycle and member experience

The challenge: prior auth backlogs delay care. Denial rates climb. Member services teams drown in volume.

The strategy: agentic AI rewires the most expensive, most manual workflows in your operation while Agent 365 keeps every agent inside the lines on member PII. Prior authorization agents autonomously gather clinical documentation, cross-reference medical policy, determine approval criteria, and route decisions, accelerating turnaround from days to hours. Claims processing agents automate billing and denial management. With cost to collect running 30 to 60 percent of net patient revenue at many organizations, even modest automation produces material margin recovery. Denial resolution and appeals agents analyze denial patterns, surface root causes, generate appeal documentation, and track success rates over time, turning a cost center into a continuous improvement engine. Member services agents integrated with Microsoft 365 Copilot Chat handle benefits inquiries, claims status, and self-service triage, deflecting call volume and improving first contact resolution. Fraud detection and risk adjustment agents scan claims data for anomalies and optimize coding accuracy for Medicare Advantage and ACA populations.
The aha moment: a payer CISO can disable an anomalous prior auth agent in one click and produce a complete forensic record of every member record it accessed, while Compliance simultaneously confirms the agent never violated DLP. That is regulatory readiness that legacy automation cannot deliver.

For life sciences and pharma: accelerating discovery and commercialization

The challenge: clinical trials take years. Regulatory submissions consume teams. Medical affairs cannot keep up with literature volume.

The strategy: orchestrate agents across R&D, regulatory, medical, and commercial, with Agent 365 enforcing the data classification rules that proprietary IP and clinical data demand. Clinical trial matching agents scan patient profiles and eligibility criteria to surface trial opportunities, accelerating recruitment. Regulatory document preparation agents assemble submissions, cross-reference data across modules, and ensure consistency in FDA, EMA, and global filings. Medical research and literature review agents powered by Microsoft GraphRAG retrieve research-backed insights with verified source references, giving medical science liaisons trustworthy synthesis on demand. Pharmacovigilance agents monitor safety databases, flag potential adverse events, and generate timely case reports. Commercial insights and launch planning agents synthesize market data, payer policy, and HCP sentiment for sharper launch and field strategy.

The aha moment: cutting even three months off a regulatory cycle on a single high-revenue product can mean tens of millions in additional sales, while Purview sensitivity labels guarantee every agent accessing proprietary compound data respects the same data classification as your senior researchers.

A phased path that actually works in regulated industries

In regulated industries, a big-bang AI rollout is a recipe for incidents.
The HLS organizations getting this right are following a five-phase pattern that builds expertise and validates governance before scale.

Establish. Form a cross-functional champion team across IT, Compliance, Clinical Operations, and Research. Define what risks you are mitigating and what outcomes you are unlocking. Inventory the agents already in flight.

Configure. Stand up identity, DLP, and policy templates in Microsoft 365 Admin Center, Power Platform Admin Center, and Microsoft Purview. Enforce that any agent handling PHI runs in a secure environment with audit logging on by default.

Pilot. Choose a small group of makers in a controlled environment. Start with non-critical workflows like internal reporting or scheduling before moving to clinical or member-facing use cases. Run weekly reviews with Compliance and Security.

Empower. Launch role-specific training for clinicians, researchers, makers, and IT. Stand up a Center of Excellence to provide templates, best practices, and reusable patterns. Promote success stories internally to build momentum.

Scale. Expand agent development across departments with governance as a guardrail, not a gate. Use pay-as-you-go metering to track usage and optimize licensing. Refine policies continuously based on Purview signals and audit results.

The strategic insight: organizations that lead with governance reach scale faster than those that lead with experimentation. Trust is the unlock, not the obstacle.

Governance is a team sport

Here is the pattern we see again and again. The HLS organizations that succeed with AI at scale are not the ones with the smartest IT shop or the boldest Compliance officer. They are the ones whose IT, Security, Compliance, Clinical, Research, and Operations leaders sit at the same table on agent strategy from week one. Agent 365 was designed for that table. The Agent Registry is the shared truth. Purview policies satisfy your Compliance officer. Entra controls reassure your CISO.
The lifecycle workflows give your CIO confidence. The clinical and research outcomes give your COO and Chief Medical Officer the business case. Everyone gets the view they need from the same single source.

Stand up an agent governance council. Meet every two weeks. Use the Agent Registry as your standing agenda. Make decisions in plain sight. The organizations that do this consistently outperform on both speed and safety. The ones that try to keep AI inside a single function fall behind on both.

Who contributes what

Think back to the mental model. You would never let a single function authorize, configure, and oversee a new privileged system on its own, not when it touches ePHI, claims, or proprietary research. Security, IT, Compliance, Clinical, and the relevant business owner all weigh in because the stakes are too high for any one seat to carry alone. Agent governance demands the same multidisciplinary scrutiny, and the council is where that happens. Each seat brings something the others cannot.

CIO. Owns the agent strategy and the platform investment. Translates board-level AI ambition into an operating model the rest of the organization can execute against.

CISO and Security Operations. Define agent identity standards, Conditional Access policies, and incident response playbooks. Without this seat, an anomalous agent touching ePHI becomes a breach instead of a contained event.

Chief Compliance Officer and Privacy. Translate HIPAA, GDPR, FDA 21 CFR Part 11, and state regulations into Purview policies and audit requirements. This is the seat that keeps you out of an OCR investigation or a 483 letter.

Chief Medical Officer and Clinical Operations. Validate that clinical agents are safe, accurate, and aligned with care standards. Own the clinical risk review for any agent that touches patient care, the same way you would for a new clinical protocol.

Chief Research Officer or Head of R&D.
Govern how agents interact with proprietary trial data, compound libraries, and scientific IP. The seat that protects the next decade of pipeline value.

COO and Revenue Cycle Leadership. Prioritize the operational workflows where agents will move the needle on cost to collect, denial rates, and throughput, and own the business outcomes that justify the investment.

Center of Excellence Lead. Maintains templates, reusable patterns, and maker enablement. Turns every council decision into a guardrail builders can actually use the next morning.

Frontline champions. Clinicians, claims specialists, and researchers who pilot, give feedback, and carry credibility back to their peers. The seat that decides whether agents get adopted or quietly ignored.

When every one of these voices is in the room, your governance council operates like a tumor board for AI. Different lenses, one shared decision, full accountability. That is how regulated industries make complex calls safely, and it is exactly the muscle Agent 365 was built to support.

Seven questions to bring to your next leadership meeting

If you want to know whether your organization is ready, run through these together. The places you hesitate are exactly where Agent 365 and E7 deliver the most value.

Visibility. Do you know which AI agents, bots, and automations are running in your environment today, who built them, what they have access to, and whether they are still needed?

Control. If someone on your team builds a new AI agent tomorrow, what is the actual process to make sure it is approved and secured? Or could they deploy it with wide-open access?

Security. What prevents an AI agent from reading or transmitting patient data it should not? Do you have a way to detect and stop a rogue or compromised agent?

Accountability. Who owns the outputs of an AI agent's actions? What is the offboarding process when the agent or its creator leaves?

Scale.
Six months from now, you may have a hundred agents deployed across departments. Are your oversight and compliance structures ready for that volume?

Cross-functional alignment. How are your IT, Security, and Compliance teams partnering on AI today? Governance is a team sport.

Data readiness. How confident are you that your data estate is clean, labeled, and governed well enough for AI to surface accurate answers and not outdated or conflicting information?

If you hesitated on even one of those, you have just identified where Agent 365 and Microsoft 365 E7 will pay for themselves the fastest.

The path forward

Here is the honest truth. The healthcare and life sciences organizations that lead in the next decade will not be the ones that adopted AI first. They will be the ones that adopted AI safely, compliantly, and at scale, with intelligence and trust woven into every layer. Microsoft Agent 365 and Microsoft 365 E7 give you the only integrated platform that brings AI, identity, security, and governance into one cohesive system, running in the flow of work you already use. This is not about adding another tool to your stack. It is about extending the investments you have already made in Microsoft 365, Entra, Defender, and Purview to cover the fastest-growing class of digital identities in your environment.

The agent era has already arrived. The question is whether you will govern it with confidence or chase it with anxiety. We would love to help you lead.

Take the next step

Explore Microsoft Agent 365: The Control Plane for Agents
Microsoft Entra Agent ID: aka.ms/EntraAgentID
Learn more about Microsoft 365 E7, the Frontier Suite: Introducing Microsoft 365 E7
See Microsoft 365 Copilot in action: Microsoft 365 Copilot
Read the Forrester TEI study: The Total Economic Impact of Microsoft 365 Copilot

Azure DevOps - Leveraging Pipeline Decorators for Custom Process Automation
Introduction

Background

In the recent pandemic, health institutions all across the world were pushed to their limits on nearly every facet of their operations. Through this, many such institutions have begun to reprioritize their modernization efforts around cloud infrastructure to support increasing demand and hedge against uncertainty. As institutions migrate their existing workloads into the cloud, a common challenge they face is that many of their on-prem security processes and standards do not map one-to-one with the services they are migrating to. Given the sensitive nature of the healthcare industry, it is especially important to find feasible ways to ensure security and validation are in place end-to-end. In this blog post, we will look at how Azure DevOps Pipeline Decorators can be leveraged to bridge the gap between a cloud environment and the customer's existing security processes on their on-premises IIS server.

What are Pipeline Decorators?

If you have ever run across jobs executing on your Azure Pipelines that you have not previously defined, there is a good chance you may have already run into decorators before! Pipeline decorators allow you to program jobs to execute before or after any pipeline run across your entire Azure DevOps organization. For scenarios such as running a virus scan before every pipeline job, or any sort of automated steps to assist with governance of your CI/CD processes, pipeline decorators grant you the ability to impose your will at any scale within Azure DevOps.

Read further on decorators on Microsoft Learn: Pipeline decorators - Azure DevOps | Microsoft Learn

In this blog post, I will be walking through a sample process based on the customer scenario's requirements, and how pipeline decorators can fit in to assist with their governance objectives.
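As a rough sketch of what a decorator looks like under the hood: a decorator ships inside an Azure DevOps extension whose manifest registers a YAML template against an injection target such as `ms.azure-pipelines-agent-job.pre-job-tasks`. The file name and echoed messages below are illustrative placeholders, not the customer's actual implementation:

```yaml
# my-decorator.yml - YAML template contributed by the decorator extension.
# Once the extension is installed for the organization, these steps are
# injected before every job in every pipeline (illustrative sketch).
steps:
  - task: CmdLine@2
    displayName: 'Injected pre-job governance check'
    inputs:
      script: |
        echo "Decorator running before the job's own steps..."
        echo "Validate deployment variables here before allowing the job to continue."
```

Because the template is injected organization-wide, individual pipeline authors cannot opt out of it, which is exactly the property that makes decorators useful for governance scenarios like the one below.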
Scenario

The customer's Azure DevOps organization has grown to a considerable size, composed of numerous projects and applications with no clearly defined process or standards to adhere to. All of these applications are hosted on an on-premises IIS server, where the application teams are trusted to provide manual inputs for deployment variables. Because Azure DevOps has no out-of-the-box controls for validating IIS file path permissions against Azure Active Directory (AAD) identities, this was an area of concern for the customer: the deployed production applications effectively had no preventative measures against malicious actors or human error overwriting existing applications.

Looking at the deployment tasks that target IIS servers from Azure DevOps, the two primary variables the customer wanted to control were:

- virtualAppName - Name of an already existing virtual application on the target machines
- websiteName - Name of an existing website on the machine group

Considering the RBAC strategy the customer has in mind with AAD, a third variable represents ownership of the application via an AAD group:

- groupId - AAD ID of the application owner's group

In the next section, I will outline a high-level process proposal, based on these three variables, for onboarding applications.

Solutioning

High-Level Process Proposal for Onboarding New Applications

For this demo's purposes, we will make the following assumptions to build out a process that illustrates how application teams can onboard successfully and help the operations team manage the application environment within their on-premises IIS server.
Assumptions

- The ops team only requires the following three parameters to govern application deployments: virtualAppName, groupId, websiteName.
- The application teams only need flexibility while building applications within the CI/CD pipelines, and currently do not have much concern about, or the expertise to manage, deployments.
- The ops team wishes to build security around these parameters such that only authorized actors are able to modify their values.

Onboarding New Applications

1. The ops team provides a template (such as a GitHub issues template) for new application requests, capturing the following IIS deployment-specific information: virtualAppName, groupId, websiteName. For this demo, I created a simple GitHub issues YAML form that the operations team can use to capture basic information from the application teams, and that can also be tied to automation to further reduce operational overhead.
2. The ops team is notified of the request and, upon successful validation, provisions an application environment with the captured information. An application environment in this context involves the following components:
   - A Key Vault (per application)
   - A service connection to the application Key Vault with read permissions over secrets
   - The application-team-provided, ops-team-validated virtualAppName, groupId, and websiteName values stored as secrets
   - The service connection details placed in the project variable group to allow the decorator to dynamically retrieve secrets for each project
   - The application registered on the IIS server in a way that adheres to existing IIS server file management strategies
3. Once the environment is ready for use, the ops team notifies the application teams by updating the issue, and the application teams only need to focus on building and publishing their artifact within their CI/CD pipelines.

Updating Existing Applications

1. The ops team provides a template for change requests to the application teams, and captures the following information:
   - virtualAppName
   - groupId
   - websiteName
   - Change justification/description
2. The core ops team reviews and approves the change request.
3. The ops team updates the application environment accordingly.
4. The ops team notifies the application team.

With the high-level process defined, we can now look at how to bring the relevant parameters into the decorators to impose validation logic.

Building the Demo

Setting up our Demo Application Environment

In this example, I created a key vault named kv-demolocaldev and placed the virtualAppName, groupId, and websiteName values in it as secrets so we can retrieve them later.

Next, we create the project and a service connection to the key vault scoped to that project. To do this, I created an Azure Resource Manager service connection using my demo identity, scoped to the resource group containing the key vault. Once the service connection has finished provisioning, selecting the Manage Service Principal link takes us to the AAD object, where we can find the application ID to add to our key vault access policy. The service connection only needs GET secret permissions on its access policy.

Afterwards, we capture the information about the service connection and key vault by creating a variable group on the application's Azure DevOps project named demo-connection-details.

Additional steps would be needed to provision the IIS server with these parameters as well, but for this demo's purposes we will assume the provisioning has already been taken care of. With this in place, we can move on to building out our decorators.
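As an aside, the GitHub issues YAML form used for new application requests (described in the onboarding process above) could look something like the following sketch. The file path, field ids, and labels are hypothetical; only the three captured parameters come from the scenario.

```yaml
# .github/ISSUE_TEMPLATE/new-application-request.yml (hypothetical path and ids)
name: New Application Request
description: Request onboarding of a new application to the on-prem IIS environment
title: "[Onboarding]: "
labels: ["onboarding"]
body:
  - type: input
    id: virtual-app-name
    attributes:
      label: virtualAppName
      description: Name of the virtual application on the target machines
    validations:
      required: true
  - type: input
    id: group-id
    attributes:
      label: groupId
      description: AAD ID of the application owner's group
    validations:
      required: true
  - type: input
    id: website-name
    attributes:
      label: websiteName
      description: Name of the website on the machine group
    validations:
      required: true
```

Because issue forms produce structured output, the ops team can parse the submitted values with automation (for example, a workflow that provisions the key vault secrets) rather than copying them by hand.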
Building the Decorators

On the pipeline side, the customer wants to control both the pre-build phase, by validating the input variables, and the post-build phase, by placing guardrails around deployment configurations with the validated parameters. Both the pre and post decorators leverage the same key vault secrets, so we will start by integrating the key vault secrets into the YAML definition.

Pipeline decorators use the same YAML schema as the YAML build pipelines in Azure DevOps. This means we can take advantage of conditional logic on repo branches, dynamic variables, and key vault secrets pulled in through service connections. The high-level logic we want to demonstrate for the pre and post decorators is the following:

Pre:
1. Check for variables/conditions to bypass the decorator
2. Using pre-established variables, connect to the application's Azure key vault and retrieve the secret values
3. For each of the deployment variables, run custom validation logic

Post:
1. Deploy the application/artifact to the IIS server

You can find the demo files in the following repo: https://github.com/JLee794-Sandbox/ADO-Decorators-PoC

Pre-build decorator

To ensure users can opt out of the process during development, we can use the same YAML schema as build pipelines to construct our conditionals.

Check for variables/conditions to bypass the decorator

The pre-build decorator YAML definition (located in Build/Pre/input-parameter-decorator.yml) only executes for pipeline builds that run off the main branch and that also set a simple variable flag named testDecorator to true:

```yaml
steps:
  - ${{ if and(eq(variables['Build.SourceBranchName'], 'main'), contains(variables['testDecorator'], 'true')) }}:
```

Right after, I retrieve websiteName, groupId, and virtualAppName using the connection details we placed in demo-connection-details, which will be passed in by the build pipeline.
```yaml
  - task: AzureKeyVault@2
    displayName: '[PRE BUILD DECORATOR] Accessing Decorator Params from the key vault - $(decorator_keyvault_name), using $(decorator_keyvault_connection_name) connection.'
    inputs:
      azureSubscription: $(decorator_keyvault_connection_name) # Service connection name (scoped to RG)
      KeyVaultName: $(decorator_keyvault_name)                 # Key vault name
      SecretsFilter: 'websiteName,groupId,virtualAppName'      # Secret names to retrieve from Key Vault
      RunAsPreJob: true
```

Now that the secrets have been pulled in, we can run our custom validation logic for each. For the purpose of this demo, we will just check that each variable exists and throw an error through a simple PowerShell script:

```yaml
  - task: PowerShell@2
    name: ValidateDeploymentVariables
    displayName: '[PRE BUILD DECORATOR] Validate Deployment Variables (Injected via Decorator)'
    inputs:
      targetType: 'inline'
      script: |
        $errorArr = @()
        try {
          Write-Host "VirtualAppName: $(virtualAppName)"
          # your input test cases go here
          # e.g. querying the remote machine to match the virtualAppName
        } catch {
          $errorArr += 'virtualAppName'
          Write-Host "##vso[task.logissue type=error]Input parameter 'virtualAppName' failed validation tests."
        }
        try {
          Write-Host "GroupID: $(groupId)"
          # your input test cases go here
          # e.g. querying the remote machine to match the groupId against the local file permissions
        } catch {
          Write-Host "##vso[task.logissue type=error]Input parameter 'groupId' failed validation tests."
          $errorArr += 'groupId'
        }
        try {
          Write-Host "WebSiteName: $(webSiteName)"
          # your input test cases go here
          # e.g. querying the website URL to see if the site already exists
        } catch {
          Write-Host "##vso[task.logissue type=error]Input parameter 'webSiteName' failed validation tests."
          $errorArr += 'webSiteName'
        }
        if ($errorArr.Count -gt 0) {
          # Link to your team's documentation for further explanation
          Write-Warning -Message "Please provide valid parameters for the following variables: $($errorArr -join ', ')"
          Write-Warning -Message "See https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch for additional details"
          throw "Please provide valid values for $($errorArr -join ', ')."
        }
```

And we are done with the pre-build decorator! Of course, while developing it is important to iteratively test your code. If you would like to publish your code now, skip ahead to the Publishing the Extension section below.

Post-build decorator

For the post-build decorator, all we want to do is determine when the decorator should run, and then invoke a deployment task such as IISWebAppDeploymentOnMachineGroup. There are many more validation steps and tools you could place here to further control your deployment process, but for the sake of this demo we will just output some placeholder messages:

```yaml
steps:
  - task: PowerShell@2
    name: DeployToIIS
    displayName: Deploy to IIS (Injected via Decorator)
    condition: |
      and(
        eq(variables['Build.SourceBranch'], 'refs/heads/main'),
        eq(variables.testDecorator, 'true')
      )
    inputs:
      targetType: 'inline'
      script: |
        # Validation steps to check the IIS target
        # > execute deployment accordingly
        Write-Host @"
        Your IIS Web Deploy task can look like this:
          - task: IISWebAppDeploymentOnMachineGroup@
            inputs:
              webSiteName: $(webSiteName)
              virtualApplication: $(virtualAppName)
              package: '$(System.DefaultWorkingDirectory)\**\*.zip' # Optionally, you can parameterize this as well.
              setParametersFile: # Optional
              removeAdditionalFilesFlag: false # Optional
              excludeFilesFromAppDataFlag: false # Optional
              takeAppOfflineFlag: false # Optional
              additionalArguments: # Optional
              xmlTransformation: # Optional
              xmlVariableSubstitution: # Optional
              jSONFiles: # Optional
        "@
```

Publishing the Extension to Share with our ADO Organization

First, we need to construct a manifest for the pipeline decorators in order to publish them to the private Visual Studio Marketplace, so that we can start using and testing the code. In the demo directory, under Build, we have both Pre and Post directories, each containing a file named vss-extension.json. We won't go into too much detail about the manifest file here, but it allows us to configure how the pipeline decorator executes and what it targets. Read more on manifest files: Pipeline decorators - Azure DevOps | Microsoft Learn

With the manifest file configured, we can now publish to the marketplace and share the extension with our ADO organization:

1. Create a publisher on the Marketplace management portal
2. Install the tfx command line tool: npm install -g tfx-cli
3. Navigate to the directory containing the vss-extension.json
4. Generate the .vsix file through tfx extension create:

```
> tfx extension create --rev-version
TFS Cross Platform Command Line Interface v0.11.0
Copyright Microsoft Corporation
=== Completed operation: create extension ===
 - VSIX: /mnt/c/Users/jinle/Documents/Tools/ADO-Decorator-Demo/Build/Pre/Jinle-SandboxExtensions.jinlesampledecoratorspre-1.0.0.vsix
 - Extension ID: jinlesampledecoratorspre
 - Extension Version: 1.0.0
 - Publisher: Jinle-SandboxExtensions
```

5. Upload the extension via the Marketplace management portal or through tfx extension publish
6. Share the extension with your ADO organization on the management portal
7. Install the extension on your ADO organization: Organization Settings > Manage Extensions > Shared > Install

Testing the Decorator

Now that your pipeline decorators are installed in your
organization, any time you push an update to the Visual Studio Marketplace, your organization will automatically get the latest changes. To test your decorators, you can use the built-in Azure DevOps GUI to validate your YAML syntax, and then execute any build pipeline that meets the trigger conditions we configured previously.

In our demo application environment, I updated the out-of-the-box starter pipeline to include our connection variable group, and to set the testDecorator flag to true:

```yaml
variables:
  - name: testDecorator
    value: true
  - group: demo-connection-details
```

Running the pipeline, I can now see the tasks I defined execute as expected. Once we verify that the pre and post tasks have run as expected, with the conditional controls evaluating as intended, the demo is complete.

Conclusion

With the decorator scaffolding in place, the customer can continue to take advantage of the flexibility of the Azure DevOps pipeline YAML schema to implement their existing security policies at the organization level. I hope this post helped bring understanding to how pipeline decorators can be leveraged to automate custom processes and bring governance layers into your ADO environment. If you have any questions or concerns about this demo, or would like to continue the conversation around potential customer scenarios, please feel free to reach out any time.

MWT Webcast - Microsoft Teams Governance and Adoption in Healthcare & Life Sciences
On January 7th at 12:00 noon Eastern, please join us for the interactive webcast "Microsoft Teams Governance and Adoption in Healthcare & Life Sciences." We are happy to host Microsoft Ignite speaker Karuana Gatimu (Principal Program Manager, Microsoft Teams), who will be presenting this session live online and answering your questions.

Extending Office 365 Operation Management and Governance in HLS with AvePoint – Michael on the Go
In this Michael on the Go video, AvePoint's Nick Carr (Vice President, Americas Partner Program) discusses how AvePoint can help Healthcare and Life Sciences customers enhance and extend their management and governance of Office 365 (including Microsoft Teams!).