Get Started with Git and GitHub: Why Every DevOps Professional Needs These Skills Now

Why Git and GitHub Are Non-Negotiable in DevOps

In today’s fast-paced DevOps environments, mastering the basics of Git and GitHub isn’t just a nice-to-have. It’s essential. Whether you’re a developer, operator, architect, or any other role in the software delivery pipeline, your ability to collaborate, track changes, and automate workflows depends on these tools. If you’re not using Git and GitHub, you’re falling behind.

All code—including application logic, Infrastructure as Code (IaC), CI/CD pipelines, and automation scripts—should be stored in a Git repository. Version control is the backbone of modern software delivery: it enables teams to collaborate efficiently, experiment with new ideas safely, and recover quickly from mistakes. Without version control, collaboration breaks down and innovation stalls.

The Basics: What Is Git?

Git is a distributed version control system. It lets you track changes to files, collaborate with others, and maintain a history of your project. Here are the core concepts:

Repository (repo): A collection of files and their history. You can create a repo locally or host it on a platform like GitHub.
Branching: Create a separate line of development to work on features or fixes without affecting the main codebase.
Merging: Combine changes from different branches. This is how teams bring their work together.
Push and Pull: Push sends your changes to a remote repository (like GitHub). Pull fetches changes from the remote repo to your local machine.

Why All Code Belongs in Git

Traceability: Every change is tracked. You know who changed what and when.
Collaboration: Multiple people can work on the same project without overwriting each other’s work.
Automation: Tools like GitHub Actions automate testing, deployment, and more.
Recovery: Mistakes happen. With Git, you can roll back to a previous state.
Experimentation: It’s trivial to create a branch to experiment with.

Git and GitHub: The Power Duo

While Git manages your code locally, GitHub provides a cloud-based platform for hosting, sharing, and collaborating on Git repositories. GitHub adds features like pull requests, issues, CI/CD, and integrated advanced security. Learn more about GitHub: Microsoft Learn: Introduction to Git | GitHub Advanced Security

Git Workflow Components

Working Directory
Definition: The working directory is the local folder on your machine where you actively edit and modify files.
Purpose: It reflects the current state of your project, including untracked, modified, or deleted files.
Commands used: git add moves changes from the working directory to the staging area; git merge or git checkout applies changes from the local repository to the working directory.

Staging Area (Index)
Definition: A preparatory space where changes are listed before committing them to the repository.
Purpose: Allows you to selectively group changes into coherent commits.
Command used: git commit records the staged changes into the local repository.
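To make the working-directory-to-staging-area flow concrete, here is a minimal sketch of selective staging (the file names are hypothetical):

# See which files are untracked, modified, or staged
git status

# Stage one file in full, and only chosen hunks of another
git add app.py
git add -p config.yaml   # interactively pick which changes to stage

# Commit only what was staged; unstaged edits stay in the working directory
git commit -m "Tighten config validation"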
Local Repository
Definition: A hidden “.git” directory on your machine that stores the complete history of commits and branches.
Purpose: Acts as your personal version control database, independent of any remote server.
Commands used: git push sends committed changes to the remote repository; git fetch or git pull retrieves updates from the remote repository; git merge or git checkout integrates or switches to changes from the local repository.

Remote Repository
Definition: A version of your repository hosted on a server (e.g., GitHub, GitLab, Bitbucket) for collaboration and backup.
Purpose: Enables distributed development by allowing multiple users to share and synchronize code.
Commands used: git fetch downloads changes from the remote repository without merging; git pull combines git fetch and git merge to update your local repository; git push uploads your local commits to the remote repository.

Summary of Git Commands in the Diagram

| Command | Direction of Flow | Function |
| --- | --- | --- |
| git add | Working Directory → Staging Area | Prepares changes for commit |
| git commit | Staging Area → Local Repository | Saves changes to local history |
| git push | Local Repository → Remote Repository | Shares changes with others |
| git fetch / git pull | Remote Repository → Local Repository | Retrieves updates from others |
| git merge / git checkout | Local Repository → Working Directory | Applies or switches to committed changes locally |

Git Cheat Sheet: Your Quick Start Guide

| Task | Command | Example/Notes |
| --- | --- | --- |
| Set up your identity | git config --global user.name "Your Name"; git config --global user.email "you@example.com" | Sets your global name and email for commits. |
| Create a new repository | git init | Initializes a new local repository. |
| Clone an existing repository | git clone [URL] | Copies a remote repository to your machine. Replace [URL] with the repository address. |
| Check status of your files | git status | Shows changed, staged, and untracked files. |
| Add files to staging | git add <filename> | Stages a file for commit. Replace <filename> with your file name. |
| Commit your changes | git commit -m "Describe your change" | Saves your staged changes with a message. |
| Create a new branch | git branch <branch-name>; git checkout <branch-name> | Creates and switches to a new branch. Replace <branch-name> with your branch name. |
| Merge a branch into main | git checkout main; git merge <branch-name> | Switches to the main branch and merges another branch into it. |
| Push changes to remote | git push origin <branch-name> | Uploads your branch changes to the remote repository. |
| Pull changes from remote | git pull origin main | Downloads and integrates changes from the remote main branch. |

Sample command-line usage:

# Set up your identity
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Create a new repository
git init

# Clone an existing repository
git clone https://github.com/owner/repo.git

# Check status of your files
git status

# Add file(s) to staging
git add <filename>

# Commit your changes
git commit -m "Describe your change"

# Create a new branch
git branch <branch-name>
git checkout <branch-name>

# Merge a branch into main
git checkout main
git merge <branch-name>

# Push changes to remote
git push origin <branch-name>

# Pull changes from remote
git pull origin main

For a more detailed cheat sheet, check out GitHub’s official guide.
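One thing the cheat sheet assumes is that a remote named origin already exists (as it does after git clone). If you started from git init, here is a minimal sketch of wiring a local repository to a remote, and of the fetch/merge vs. pull distinction described above (the URL is a placeholder):

# Connect the local repository to a remote and name it "origin"
git remote add origin https://github.com/owner/repo.git

# First push: -u links the local branch to its remote counterpart
git push -u origin main

# Later, these are two equivalent ways to take in remote changes:
git fetch origin        # download new commits without touching your files
git merge origin/main   # then merge them explicitly
git pull origin main    # or do both steps with one command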
Call to Action: Level Up Your DevOps Game

Don’t wait—start learning Git and GitHub today. These skills will make you a better collaborator, a more effective problem solver, and a key contributor to any DevOps team.

Explore more with these resources: Microsoft Learn: Introduction to Git | GitHub Docs | GitHub Learning Lab

Act now! Your next project, your team, and your career will thank you.

Now it’s your turn

I would love to hear about your experience with Git in the comments. What have you found useful for your projects? What challenges have you faced?

About the author

Jean-François Bilodeau, or J-F, brings over 30 years of experience in the tech industry, starting as a developer for anti-virus companies before moving into game development and technical training. An award-winning game developer and trainer, he is now a Microsoft employee passionate about empowering every person and organization on the planet to achieve more. Fluent in French and English, J-F combines technical expertise in development, classroom training, and AI with a mission to share his passion for learning.
- Mastering the AZ-104 Exam: Golu’s Guide to Passing & Levelling Up Your Cloud Career!

The AZ-104 exam is designed to test your knowledge and skills in administering Azure services. Passing this exam validates your expertise in Azure administration and enhances your career opportunities in cloud computing. As the Global Courseware Lead for the AZ-104 course and a Microsoft Technical Trainer at Microsoft, I have worked with many different versions of this course. Drawing on my background in Azure administration and in architecting solutions for various customers over my 10+ years in IT, I want to share my strategy for preparing for this exam; most of these best practices hold true for most Microsoft certifications. Today you are in the shoes of Golu and will go through the process of planning and preparing for the exam. To ace the exam, Golu needs to follow a seven-step approach. (Image Source: “Microsoft Copilot”)

Step 1: Understand the Exam Objectives

Before you start preparing for the AZ-104 exam, it’s essential to understand the exam objectives. The exam tests your knowledge in areas such as identity and governance, compute, storage, networking, and monitoring. Microsoft provides a detailed exam study guide that outlines the topics covered in the exam. Read through the study guide to understand the exam objectives and the skills you need to master. Golu Understanding Exam Objectives (Image Source: “Microsoft Copilot”)

Here are the topics and their weights; the higher the weight, the more questions you are likely to see on that topic. Always refer to the AZ-104 Study Guide, as these weights change when the content is updated.

Skills Measured
Manage Azure identities and governance (20–25%)
Implement and manage storage (15–20%)
Deploy and manage Azure compute resources (20–25%)
Implement and manage virtual networking (15–20%)
Monitor and maintain Azure resources (10–15%)

Step 2: Develop a Study Plan

After you understand the exam objectives, it’s time to develop a study plan. The study plan should include the time you’ll dedicate to studying, the study materials you’ll use, and the practice tests you’ll take. Allocate enough time to study all the exam topics thoroughly and break your study sessions into manageable chunks. This will help you stay motivated and focused throughout the study period.

Golu’s Dilemma: How much time do I need for my prep? (Image Source: “Microsoft Copilot”)

Daily Azure users (2–3+ years): 1 week of focused review may be enough.
Intermediate (6–12 months): Plan for 4–6 weeks of study.
Beginners (0–6 months): Allow 6–8 weeks, focusing on hands-on labs and foundational concepts.

Tip: Track your progress weekly and adjust your plan as needed. Consistency is key! (Experience mapping)

Create a study plan and study at a specific time that works for you; example times are shown below. Many studies confirm that studying on a regular schedule every day helps move information from short-term to long-term memory and aids retention of complex topics. (Study Schedule)

PRO TIP: Download your copy of the example study plan. Create a plan covering all topics in the study guide, and add time to complete labs and practice exams. Booking the exam at the start of the month gives you a goal of completing it within 1 to 1.5 months; don’t take more than 2 months, or you will lose the motivation to sit the exam.
Copilot Prompt: Create a study plan for AZ-104 exam in csv in a monthly format for the month of <enter month> 2026, the table should contain all topics covered in study guide on this link https://learn.microsoft.com/en-us/certifications/resources/study-guides/az-104 and add time to complete labs and practice exams. The plan should be 1 month long. (AZ-104 exam study plan example)

Step 3: Follow Microsoft Learn to Prepare Topic-Wise (Self-Learning)

Golu Following Microsoft Learn Path (Image Source: “Microsoft Copilot”)

The easiest way to find structured AZ-104 content is to leverage the free Microsoft Learn content for the AZ-104 exam. Study topic by topic, following the study plan Golu has created. Golu can also leverage Microsoft Copilot to simplify some topics and generate summaries. Microsoft Learn link: Course AZ-104T00-A: Microsoft Azure Administrator

Copilot Prompt: Create a table highlighting the major differences between Active Directory Domain Services and Entra ID and explain these to me like I am studying it for the first time. Here is the link I am studying from: Compare Microsoft Entra ID and Active Directory Domain Services

Step 4: Familiarize Yourself with Azure Services

Golu Understanding Azure services in the exam (Image Source: “Microsoft Copilot”)

The AZ-104 exam covers a broad range of Azure services, and it’s essential to have a solid understanding of each service and how it works. All Azure services are listed on this link. Golu can create a custom Excel sheet with a one-liner about each service covered in the exam so that he knows what each service is used for. This is very useful in the exam for eliminating wrong options and making educated guesses; see the example below. Go to Microsoft Copilot and use the prompt below:

Copilot Prompt: "Use the Study Guide on this link and create a table for AZ-104 services covered in the Exam Study guide for https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-104. The table should contain these columns: Service name, category and purpose of the service" | Try in Copilot Chat

Example Output for Azure Services Covered in AZ-104 exam.

Step 5: Read FAQs

Golu reading FAQs (Image Source: “Microsoft Copilot”)

Frequently Asked Questions (FAQs) about Azure services are a great way to understand important aspects of various services. There you can check the most common questions and answers about a specific topic; this helps you discover aspects of a service that you might not come across otherwise. For example: Virtual Machine FAQs, Virtual Network FAQs. To get the FAQ for any service, search "Service name" + "FAQ" + "MS Learn", or use the Copilot prompt below.

Copilot Prompt: “Use the Study Guide on this link and create a table for AZ-104 Study guide for https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-104. The table should contain these columns: Service name, category and FAQ link" | Try in Copilot Chat

Step 6: Are You Ready for the Exam?

Is Golu really ready for the exam? Golu Confused (Image Source: “Microsoft Copilot”)

6a: Take Practice Tests

Take multiple AZ-104 practice exams and see how you score on them. If you consistently score above 90%, you are ready to take the exam, as you know all the topics it covers. Look at the areas where you score poorly during practice and make sure you study those topics to get a better understanding of what is required to answer those questions.
Eliminate weaker areas to create an agile study plan, and repeat this until you score well on these practice exams; target at least 85%. Microsoft offers a free practice assessment, which is a good start to see how well you know the Azure administration topics. Here is a Copilot prompt that can act as a personal tutor for Golu.

Copilot Prompt: Give me 5 questions for AZ-104 module 1 from the topic covered in https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-104 with increasing difficulty, starting with beginner level. If I am able to answer the questions, increase the difficulty to intermediate and then advanced; if I don’t answer a question, please decrease the level. Please make sure not to give me the answers before I have answered all 5 questions in one section. The questions can be yes/no, multiple choice, case study, or direct textual answers from the learner.

Pro tip: You need a score of 700 out of 1000 to pass the exam, but you won’t know how many points each of the 40–60 questions carries.

6b: Identify and Address Knowledge Gaps

After taking practice tests, identify the areas where you need improvement. Focus your study efforts on these areas and use additional study materials to address your knowledge gaps. Revisit the Microsoft Learn documentation and study materials to ensure that you fully understand the topics you struggled with.

6c: Exam Strategies

Key words: Identify key words in the question and look for options that align with them. For example, if you see SAML, OAuth, OpenID Connect, or cloud identity in the question, think of the Entra ID service; if you see SMB or NFS, think of Azure Files; if you see VHDs, images, log files, or text files, think of the Blob service in the answer.

Method of elimination: Eliminate all options that do not feel relevant to the question; this will narrow your choice down to 1 or 2 options, increasing the probability of choosing the correct answer.

Key ask: You may be given a long passage to read. Make sure you understand what the question actually requires, which is usually in the last 2 lines. Read the question twice, and mark it for review if you are unsure.

Guess the answer? Never leave any question unanswered, as there is no negative marking. Make an informed guess and move on to the next question. Never go to the next page without answering the question, as you sometimes can’t go back to review questions in certain sections.

Time management is key to passing the exam. Case studies take a lot of time to read and answer, so don’t get stuck on any question: you either know it or you don’t. Move on to the next one and review flagged questions later. Keep in mind that case studies can take much more time than multiple-choice questions. Check the clock regularly and stay conscious of how many questions remain and the time per question, so that you don’t miss answering any questions.

Create mind maps and flashcards during your preparation. Leverage the Copilot prompts below to generate topic-wise flashcards and mind maps.
You can also find the neuroscience of why mind maps and flashcards are important, along with many pre-created mind maps and flashcards, on our open-source project website https://aka.ms/MTTBrainwave

Copilot Prompt: Please develop a mind map in mermaid for the AZ-104 on https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-104 | Try in Copilot Chat

Mermaid mind map for Module 01 (Image Source: “Mermaid on Microsoft Copilot”)

Copilot Prompt: Please develop flashcards for the AZ-104 course on https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-104 | Try in Copilot Chat

Flashcards output on Microsoft Copilot (Image Source: “Microsoft Copilot”)

Step 7: Be Prepared for Exam Day

Exam format: The exam format is detailed below, and the types of questions you will see on the exam can be understood by going to this link: aka.ms/ExploreMicrosoftExams. The exam has 40–60 questions with a mix of multiple-choice, drag-and-drop, yes/no questions, and case studies; the question types can be explored in this simulation sandbox environment: Launch the sandbox. Exams with or without labs have different durations, as mentioned below. Exam Objectives (Image Source: “Microsoft Learn”)

On exam day, make sure you’re well-rested and have eaten a good meal. If you are taking the exam in person, arrive at the exam center early; a valid government-issued ID will be required. During the exam, take your time to read the questions carefully and double-check your answers before submitting. Remember to stay calm and focused throughout the exam. If you are taking the exam from home or the office, ensure you follow all guidelines to stay compliant by visiting the appropriate vendor website; here is a link to test whether your system is suitable for taking the exam: Test your system.

And lastly, there is no failure in not passing the exam; take it as a learning opportunity and prepare well for the next attempt. All the best for your preparation, and do share your stories with me on LinkedIn; it always brings me joy to see your experiences with these exams. Golu is finally ready to pass the exam. (Image Source: “Microsoft Copilot”)

About the Instructor

Neeraj Kumar is a Microsoft Technical Trainer based in the Delhi region of India. With over 10 years of experience, he is deeply passionate about Artificial Intelligence, Azure, and Security. Guided by the mantra “Be 1% better every day,” Neeraj strives for continuous growth and excellence in his field. Feel free to connect with him on LinkedIn: https://www.linkedin.com/in/neerajtrainer/

#MicrosoftLearn #SkilledByMTT
- Trusted Signing Public Preview Update

Nearly a year ago we announced the Public Preview of Trusted Signing, with availability for organizations with 3 or more years of verifiable history to onboard to the service and get a fully managed code signing experience that simplifies the efforts of Windows app developers. Over the past year, we’ve announced new features, including preview support for individual developers, and we highlighted how the service contributes to the Windows security story at Microsoft BUILD 2024 in the Unleash Windows App Security & Reputation with Trusted Signing session.

During the Public Preview, we have gained valuable insights from our customers on the service features, the developer experience, and the experience for Windows users. As we incorporate this feedback and learning into our General Availability (GA) release, we are limiting new customer subscriptions for the remainder of the public preview. This approach will allow us to focus on refining the service based on the feedback and data collected during the preview phase.

The limit on new customer subscriptions for Trusted Signing will take effect Wednesday, April 2, 2025, making the service available only to US- and Canada-based organizations with 3 or more years of verifiable history. Onboarding for individual developers and all other organizations will not be directly available for the remainder of the preview, and we look forward to expanding service availability as we approach GA. Note that this announcement does not impact any existing subscribers of Trusted Signing; the service will continue to be available to them as it has been throughout the Public Preview.

For additional information about Trusted Signing, please refer to Trusted Signing documentation | Microsoft Learn and Trusted Signing FAQ | Microsoft Learn.
- Issue when ingesting Defender XDR tables in Sentinel

Hello,

We are migrating our on-premises SIEM solution to Microsoft Sentinel, since we have E5 licences for all our users. The integration between Defender XDR and Sentinel convinced us to make the move. We have a limited budget for Sentinel, and we found that the Auxiliary/Data Lake feature is sufficient for verbose log sources such as network logs. We would like to retain Defender XDR data for more than 30 days (the default retention period). We implemented the solution described in this blog post: https://jeffreyappel.nl/how-to-store-defender-xdr-data-for-years-in-sentinel-data-lake-without-expensive-ingestion-cost/

However, we are facing an issue with 2 tables: DeviceImageLoadEvents and DeviceFileCertificateInfo. The rows forwarded by Defender to Sentinel for these tables arrive empty. We created a support ticket, but so far we haven't received any solution. If anyone has experienced this issue, we would appreciate your feedback.

Lucas
- Azure Integrated HSM: New Chapter & Shift from Centralized Clusters to Embedded Silicon-to-Cloud Trust

Azure Integrated HSM marks a major shift in how cryptographic keys are handled—moving from centralized clusters to local, tamper-resistant modules embedded directly in virtual machines. This new model brings cryptographic assurance closer to the workload, reducing latency, increasing throughput, and redefining what’s possible for secure applications in the cloud.

Before diving into this innovation, let’s take a step back. Microsoft’s journey with HSMs in Azure spans nearly a decade, evolving through multiple architectures, vendors, and compliance models. From shared services to dedicated clusters, from appliance-like deployments to embedded chips, each milestone reflects a distinct response to enterprise needs and regulatory expectations. Let’s walk through that progression — not as a single path, but as a layered portfolio that continues to expand.

Azure Key Vault Premium, with nCipher nShield

Around 2015, Microsoft made Azure Key Vault generally available, and soon after introduced the Premium tier, which integrated nCipher nShield HSMs (previously part of Thales, later acquired by Entrust). This was the first time customers could anchor their most sensitive cryptographic material in FIPS 140-2 Level 2 validated hardware within Azure. Azure Key Vault Premium is delivered as a fully managed PaaS service, with HSMs deployed and operated by Microsoft in the backend. The service is redundant and highly available, with cryptographic operations exposed through Azure APIs while the underlying HSM infrastructure remains abstracted and secure.

This enabled two cornerstone scenarios. With the Customer Encryption Key (CEK) model, customers could generate and manage encryption keys directly in Azure, always protected by HSMs in the backend. Going further, with the Bring Your Own Key (BYOK) model, organizations could generate keys in their own on-premises HSMs, then securely import them into Azure Key Vault–backed HSMs and manage them there.

These capabilities were rapidly adopted across Microsoft’s second-party services. For example, they underpin master key management for Azure RMS, later rebranded as Azure Information Protection, and now part of Microsoft Purview Information Protection. These HSM-backed keys can protect the most sensitive data if customers choose to implement the BYOK model through Sensitivity Labels, applying encryption and strict usage controls to protect highly confidential information. Other services, like Service Encryption with Customer Key, allow customers to encrypt all their data at rest in Microsoft 365 using their own keys, via Data Encryption Policies. This applies to data stored in Exchange, SharePoint, OneDrive, Teams, Copilot, and Purview. The same approach extends to Power Platform, where customer-managed keys can encrypt data stored in Microsoft Dataverse, which underpins services like Power Apps and Power Automate.

Beyond productivity services, Key Vault Premium became a building block in hybrid customer architectures: protecting SQL Server Transparent Data Encryption (TDE) keys, storing keys for Azure Storage encryption or Azure Disk Encryption (SSE, ADE, DES), securing SAP workloads running on Azure, and managing TLS certificates for large-scale web applications. It also supports custom application development and integrations, where cryptographic operations must be anchored in certified hardware — whether for signing, encryption, decryption, or secure key lifecycle management.
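As a concrete illustration of the CEK scenario described above, here is a minimal Azure CLI sketch for creating a Premium-tier vault and an HSM-protected key (the resource names, group, and region are placeholders, not values from the article):

# Create a Key Vault on the Premium tier, which enables HSM-backed keys
az keyvault create \
  --name contoso-kv-premium \
  --resource-group contoso-rg \
  --location westeurope \
  --sku premium

# Create a customer-managed key whose private material stays in the backend HSM pool
az keyvault key create \
  --vault-name contoso-kv-premium \
  --name cek-data-encryption \
  --protection hsm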
Around 2020, Azure Key Vault Premium benefited from a shift away from the legacy nCipher-specific BYOK process. Initially, BYOK in Azure was tightly coupled to nCipher tooling, which limited customers to a single vendor. As the HSM market evolved and customers demanded more flexibility, Microsoft introduced a multi-vendor BYOK model. This allowed organizations to import keys from a broader set of providers, including Entrust, Thales, and Utimaco, while still ensuring that keys never left the protection of FIPS-validated HSMs. This change was significant: it gave customers freedom of choice, reduced dependency on a single vendor, and aligned Azure with the diverse HSM estates that enterprises already operated on-premises.

Azure Key Vault Premium remains a cornerstone of Azure’s data protection offerings. It is widely used for managing keys, secrets (passwords, connection strings), and certificates. Around February 2024, and with a latest firmware update in April 2025, Microsoft and Marvell announced the modernization of the Key Vault HSM backend to meet newer standards: Azure’s HSM pool has been updated with Marvell LiquidSecurity adapters that achieved FIPS 140-3 Level 3 certification. This means Key Vault’s underpinnings are being refreshed to the latest security level, though the service interface for customers remains the same. [A tip for tech folks: you can check the HSM backend provider by looking at the FIPS level in the "hsmPlatform" key attribute.] Key Vault Premium continues to be the go-to solution for many scenarios where a fully managed, cloud-integrated key manager with shared HSM protection is sufficient.

Azure Dedicated HSM, with SafeNet Luna

In 2018, Microsoft introduced Azure Dedicated HSM, built on SafeNet Luna hardware (originally Gemalto, later part of Thales). These devices were validated to FIPS 140-2 Level 3, offering stronger tamper resistance and compliance guarantees. This service provided physically isolated HSM appliances, deployed as single-tenant instances within a customer’s virtual network. By default, these HSMs were non-redundant unless customers explicitly provisioned multiple units across regions. Each HSM was connected to a private subnet, and the customer retained full administrative control over provisioning, partitioning, and policy enforcement. Unlike Key Vault, using a Dedicated HSM meant the customer had to manage much more: HSM user management, key backup (if needed), high-availability setup, and any client access configuration.

Dedicated HSM was particularly attractive to regulated industries such as finance, healthcare, and government, where compliance frameworks demanded not only FIPS-validated hardware but also the ability to define their own cryptographic domains and audit processes. Over time, however, Microsoft evolved its HSM portfolio toward more cloud-native and scalable services. Azure Dedicated HSM is now being retired: Microsoft announced that no new customer onboardings are accepted as of August 2025, and that full support for existing customers will continue until July 31, 2028. Customers are encouraged to plan their transition, as Azure Cloud HSM will succeed Dedicated HSM.
Azure Key Vault Managed HSM, with Marvell LiquidSecurity

By 2020, it was evident that Azure Key Vault (with shared HSMs) and Dedicated HSM (with single-tenant appliances) represented two ends of a spectrum, and customers wanted something in between: the isolation of a dedicated HSM and the ease of use of a managed cloud service. In 2021, Microsoft launched Azure Key Vault Managed HSM, a fully managed, highly available service built on Marvell LiquidSecurity adapters, validated to FIPS 140-3 Level 3.

The key difference from Azure Key Vault Premium lies in the architecture and assurance model. While AKV Premium uses a shared pool of HSMs per Azure geography, organized into region-specific cryptographic domains based on nShield technology — which enforces key isolation through its Security World architecture — Managed HSM provides dedicated HSM instances per customer, ensuring stronger isolation. Also delivered as a PaaS service, it is redundant by design, with built-in clustering and high availability across availability zones, and fully managed in terms of provisioning, configuration, patching, and maintenance. Managed HSM consists of a cluster of multiple HSM partitions, each based on a separate customer-specific security domain that cryptographically isolates every tenant. Managed HSM supports the same use cases as AKV Premium — CEK, BYOK for Azure RMS or SEwCK, database encryption keys, or any custom integrations — but with higher assurance, stronger isolation, and FIPS 140-3 Level 3 compliance.
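For contrast with the Premium sketch earlier, provisioning a customer-dedicated Managed HSM pool is also a single CLI call; a hedged sketch, with placeholder names and a placeholder administrator object ID:

# Create a Managed HSM pool; the listed Entra object IDs become the initial
# administrators, and --retention-days sets the soft-delete retention period
az keyvault create \
  --hsm-name contoso-mhsm \
  --resource-group contoso-rg \
  --location westeurope \
  --administrators "00000000-0000-0000-0000-000000000000" \
  --retention-days 28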
Azure Payment HSM, with Thales payShield 10K

Introduced in 2022, Azure Payment HSM is a bare-metal, single-tenant service designed specifically for regulated payment workloads. Built on Thales payShield 10K hardware, it meets stringent compliance standards, including FIPS 140-2 Level 3 and PCI HSM v3. Whereas Azure Dedicated HSM was built for general-purpose cryptographic workloads (PKI, TLS, custom apps), Payment HSM is purpose-built for financial institutions and payment processors, supporting specialized operations like PIN block encryption, EMV credentialing, and 3D Secure authentication. The service offers low-latency, high-throughput cryptographic operations in a PCI-compliant cloud environment. Customers retain full administrative control and can scale performance from 60 to 2500 CPS, deploying HSMs in high-availability pairs across supported Azure regions.

Azure Cloud HSM, with Marvell LiquidSecurity

In 2025, Microsoft introduced Azure Cloud HSM, also based on Marvell LiquidSecurity, as a single-tenant, cloud-based HSM cluster. These clusters offer private connectivity and are validated to FIPS 140-3 Level 3, ensuring the highest level of assurance for cloud-based HSM services. Azure Cloud HSM is now the recommended successor to Azure Dedicated HSM and gives customers direct administrative authority over their HSMs, while Microsoft handles availability, patching, and maintenance. It is particularly relevant for certificate authorities, payment processors, and organizations that need to operate their own cryptographic infrastructure in the cloud but do not want the burden of managing physical hardware. It combines sovereignty and isolation with the elasticity of cloud operations, making it easier for customers to migrate sensitive workloads without sacrificing control.

A single Marvell LiquidSecurity2 adapter can manage up to 100,000 key pairs and perform over one million cryptographic operations per second, making it ideal for high-throughput workloads such as document signing, TLS offloading, and PKI operations. In contrast to Azure Dedicated HSM, Azure Cloud HSM simplifies deployment and management by offering fast provisioning, built-in redundancy, and centralized operations handled by Microsoft. Customers retain full control over their keys while benefiting from secure connectivity via private links and automatic high availability across zones — without the need to manually configure clustering or failover.

Azure Integrated HSM, with Microsoft Custom Chips

In 2025, Microsoft finally unveiled Azure Integrated HSM, a new paradigm, shifting from shared cryptographic infrastructure to dedicated, hardware-backed modules integrated at the VM level: custom Microsoft-designed HSM chips are embedded directly into the host servers of AMD v7 virtual machines. These chips are validated to FIPS 140-3 Level 3, ensuring that even this distributed model maintains the highest compliance standards.

This innovation allows cryptographic operations to be performed locally, within the VM boundary. Keys are cached securely, hardware acceleration is provided for encryption, decryption, signing, and verification, and access is controlled through an oracle-style model that ensures keys never leave the secure boundary. The result is a dramatic reduction in latency and a significant increase in throughput, while still maintaining compliance. This model is particularly well suited for TLS termination at scale, high-frequency trading platforms, blockchain validation nodes, and large-scale digital signing services, where both performance and assurance are critical. The feature entered public preview in September 2025; Trusted Launch must be enabled to use it, and Linux support is expected soon. Microsoft confirmed that Integrated HSM will be deployed across all new Azure servers, making it a foundational component of future infrastructure.

Azure Integrated HSM also complements Azure Confidential Computing, allowing workloads to benefit from both in-use data protection through hardware-based enclaves and key protection via local HSM modules. This combination ensures that neither sensitive data nor cryptographic keys ever leave a secure hardware boundary — ideal for high-assurance applications.

A Dynamic Vendor Landscape

The vendor story behind these services is almost as interesting as the technology itself. Thales acquired nCipher in 2008, only to divest it in 2019 during its acquisition of Gemalto, under pressure from competition authorities. The buyer was Entrust, which suddenly found itself owning one of the most established HSM product lines. Meanwhile, Gemalto’s SafeNet Luna became part of Thales — which would also launch the Thales payShield 10K in 2019, a leading PCI-certified payment HSM — and Marvell emerged as a new force with its LiquidSecurity line, optimized for cloud-scale deployments. Microsoft has navigated these shifts pragmatically, adapting its services and partnerships to ensure continuity for customers while embracing the best available hardware. Looking back, it is almost amusing to see how vendor mergers, acquisitions, and divestitures reshaped the HSM market, while Microsoft’s offerings evolved in lockstep to give customers a consistent path forward.
Comparative Perspective

Looking back at the evolution of Microsoft’s HSM integrations and services, a clear trajectory emerges: from the early days of Azure Key Vault Premium backed by certified HSMs (still active), complemented by Azure Key Vault Managed HSM with higher compliance levels, through the Azure Dedicated HSM offering, replaced by the more cloud-native Azure Cloud HSM, and finally to the innovative Azure Integrated HSM embedded directly in virtual machines. Each step reflects a balance between control, management, compliance, and performance, while also adapting to the vendor landscape and regulatory expectations.

| Service | Hardware | Introduced | FIPS Level | Model / Isolation | Current Status / Notes |
| --- | --- | --- | --- | --- | --- |
| Azure Key Vault Premium | nCipher nShield (Thales → Entrust), then Marvell LiquidSecurity | 2015 | FIPS 140-2 Level 2, then FIPS 140-3 Level 3 | Shared per region, PaaS, HSM-backed | Active; standard service; supports CEK and BYOK; multi-vendor BYOK since ~2020 |
| Azure Dedicated HSM | SafeNet Luna (Gemalto → Thales) | 2018 | FIPS 140-2 Level 3 | Dedicated appliance, single-tenant, VNet | Retiring; no new onboardings; support until July 31, 2028; succeeded by Azure Cloud HSM |
| Azure Key Vault Managed HSM | Marvell LiquidSecurity | 2021 | FIPS 140-3 Level 3 | Dedicated cluster per customer, PaaS | Active; redundant, isolated, fully managed; stronger compliance than Premium |
| Azure Payment HSM | Thales payShield 10K | 2022 | FIPS 140-2 Level 3 | Bare-metal, single-tenant, full customer control, PCI-compliant | Active; purpose-built for payment workloads |
| Azure Cloud HSM | Marvell LiquidSecurity | 2025 | FIPS 140-3 Level 3 | Single-tenant cluster, customer-administered | Active; successor to Dedicated HSM; fast provisioning, built-in HA, private connectivity |
| Azure Integrated HSM | Microsoft custom chips | 2025 | FIPS 140-3 Level 3 | Embedded in VM host, local operations | Active (preview/rollout); ultra-low latency, ideal for high-performance workloads |

Microsoft’s strategy shows an understanding that different customers sit at different points on the spectrum of control vs. convenience. So Azure didn’t take a one-size-fits-all approach; it built a portfolio:

- Use Azure Key Vault Premium if you want simplicity and can tolerate multi-tenancy.
- Use Azure Key Vault Managed HSM if you need sole ownership of keys but want a turnkey service.
- Use Azure Payment HSM if you operate regulated payment workloads and require PCI-certified hardware.
- Use Azure Cloud HSM if you need sole ownership and direct access for legacy apps.
- Use Azure Integrated HSM if you need ultra-low latency and per-VM key isolation, for the highest assurance in real time.

Beyond the HSM: A Silicon-to-Cloud Security Architecture by Design

Microsoft’s HSM evolution is part of a broader strategy to embed security at every layer of the cloud infrastructure — from silicon to services. This vision, often referred to as “Silicon-to-Cloud,” includes innovations like Azure Boost, Caliptra, Confidential Computing, and now Azure Integrated HSM. Azure Confidential Computing plays a critical role in this architecture. As mentioned, by combining Trusted Execution Environments (TEEs) with Integrated HSM, Azure enables workloads to be protected at every stage — at rest, in transit, and in use — with cryptographic keys and sensitive data confined to verified hardware enclaves. This layered approach reinforces zero-trust principles and supports compliance in regulated industries.
With Azure Integrated HSM installed directly on every new server, Microsoft is redefining how cryptographic assurance is delivered — not as a remote service, but as a native hardware capability embedded in the compute fabric itself. This marks a shift from centralized HSM clusters to distributed, silicon-level security, enabling ultra-low latency, high throughput, and strong isolation for modern cloud workloads.

Resources

To go a bit further, I invite you to check out the following articles and take a look at the related documentation.

Protecting Azure Infrastructure from silicon to systems | Microsoft Azure Blog, by Mark Russinovich, Chief Technology Officer, Deputy Chief Information Security Officer, and Technical Fellow, Microsoft Azure; Omar Khan, Vice President, Azure Infrastructure Marketing; and Bryan Kelly, Hardware Security Architect, Microsoft Azure

Microsoft Azure Introduces Azure Integrated HSM: A Key Cache for Virtual Machines | Microsoft Community Hub, by Simran Parkhe

Securing Azure infrastructure with silicon innovation | Microsoft Community Hub, by Mark Russinovich, Chief Technology Officer, Deputy Chief Information Security Officer, and Technical Fellow, Microsoft Azure

About the Author

I'm Samuel Gaston-Raoul, Partner Solution Architect at Microsoft, working across the EMEA region with the diverse ecosystem of Microsoft partners—including System Integrators (SIs) and strategic advisory firms, Independent Software Vendors (ISVs) / Software Development Companies (SDCs), and startups. I engage with our partners to build, scale, and innovate securely on the Microsoft Cloud and Microsoft Security platforms.

With a strong focus on cloud and cybersecurity, I help shape strategic offerings and guide the development of security practices—ensuring alignment with market needs, emerging challenges, and Microsoft’s product roadmap. I also engage closely with our product and engineering teams to foster early technical dialogue and drive innovation through collaborative design. Whether through architecture workshops, technical enablement, or public speaking engagements, I aim to evangelize Microsoft’s security vision while co-creating solutions that meet the evolving demands of the AI and cybersecurity era.
- Security Copilot Agents: The New Era of AI-Driven Cyber Defense

With increasing cyber threats, security teams require intelligent agents that adapt and operate throughout the security stack, not just automation. Key statistics from our Microsoft Digital Defense Report 2024 highlight this concerning trend in cybersecurity threats:

Over 600 million cyberattacks per day targeting Microsoft customers
2.75x increase in ransomware attacks year-over-year
400% surge in tech scams since 2022
Growing collaboration between cybercriminals and nation-state actors

In my previous blogs, I explored how AI agents are transforming security operations in Microsoft Defender XDR, Intune, and Entra:

Phishing Triage Agent in Defender XDR: Say Goodbye to False Positives and Analyst Fatigue
Intune AI Agent: Instant Threat Defense, Invisible Protection
Conditional Access Optimization Agent in Microsoft Entra Security Copilot

Today, I’ll discuss how Security Copilot, Copilot for Azure, and Security Copilot Agents in Azure, Defender for Cloud, and Microsoft Purview use AI to transform security, compliance, and efficiency across the Microsoft ecosystem.

What Are Security Copilot Agents?

Security Copilot Agents are modular, AI-driven assistants embedded in Microsoft’s security platforms. They automate high-volume, repetitive tasks, deliver actionable insights, and streamline incident response. By leveraging large language models (LLMs), Microsoft’s global threat intelligence, and your organization’s data, these agents empower security teams to work smarter and faster. (Microsoft Security Copilot agents overview)

Agents are available in both standalone and embedded experiences and can be discovered and configured directly within product portals like Defender, Sentinel, Entra, Intune, and Purview.

Why Security Copilot Agents Matter

Security Copilot Agents represent a paradigm shift in cyber defense:

Automation at Scale: They handle high-volume, repetitive tasks, freeing up human expertise for strategic initiatives.
Adaptive Intelligence: Agents learn from feedback, adapt to workflows, and operate securely within Microsoft’s Zero Trust framework.
Operational Efficiency: By reducing manual workloads, agents accelerate response, prioritize risks, and strengthen security posture.

(See also: Microsoft Security Copilot Frequently Asked Questions)

Security Copilot Agents in Azure and Defender for Cloud

Azure and Defender for Cloud now feature embedded Security Copilot and Copilot for Azure experiences that help security professionals analyze, summarize, remediate, and delegate recommendations using natural language prompts. This integration streamlines security management:

Risk Exploration: Agents help admins identify misconfigured resources and focus on those posing critical risks, using natural language queries.
Accelerated Remediation: Agents generate remediation scripts and automate pull requests, enabling rapid fixes for vulnerabilities.
Noise Reduction: By filtering through alerts and recommendations, agents help teams focus on the most impactful remediations.
Unified Experience: Security Copilot and Copilot for Azure work together to provide context, explain recommendations, and guide implementation steps, all within the Defender for Cloud portal. (Microsoft Security Copilot in Defender for Cloud)

Security Copilot Agents in Microsoft Purview

Microsoft Purview leverages Security Copilot agents to automate and scale Data Loss Prevention (DLP) and Insider Risk Management workflows.
Here are more details:

Alert Triage Agent (DLP): Evaluates alerts based on sensitivity, exfiltration, and policy risk, sorting them into actionable categories.
Alert Triage Agent (Insider Risk): Assesses user, file, and activity risk, prioritizing alerts for investigation.
Managed Alert Queue: Agents sift high-risk activities out of lower-risk noise, improving response time and team efficiency.
Comprehensive Explanations: Agents provide clear logic behind alert categorization, supporting transparency and compliance.

Deployment: Security Copilot can be enabled in the Azure portal (https://portal.azure.com) or the Security Copilot portal (https://securitycopilot.microsoft.com). Security Copilot requires per-seat licenses for human users, while all agent operations are billed by Security Compute Units (SCUs) on a pay-as-you-go basis. Agents do not need separate per-seat licenses; their costs depend solely on SCU consumption, and they typically run under a service or managed identity in the Copilot environment. (Security Copilot Agent Responsible AI FAQ)

Security Copilot Agents: Unified Across the Microsoft Security Ecosystem

Security Copilot Agents automate intelligence and security orchestration across Microsoft’s ecosystem, including Defender, Sentinel, Entra, Intune, Azure, Purview, Threat Intelligence, and Office. Their unified design enables consistent protection, swift responses, and scalable automation for security teams. Operating across multiple platforms, these agents provide comprehensive coverage and efficient threat response.

End-to-End Visibility: Agents correlate signals across domains, providing context-rich insights and automating common workflows.
Custom Agent Creation: Teams can build custom agents using no-code tools, tailoring automation to their unique environments.
Marketplace Integration: The new Security Store allows organizations to browse, deploy, and manage agents alongside conventional security tools, streamlining procurement and governance.

Intune AI Agents: Device and Endpoint Management

Intune AI Agents automate device compliance and endpoint security. They monitor configuration drift, remediate vulnerabilities, and enforce security baselines across managed devices. By correlating device signals with threat intelligence, these agents proactively identify risks and recommend mitigation actions, reducing manual workload and accelerating incident response.

Defender for Cloud AI Agents: Threat Detection and Response

Defender for Cloud AI Agents continuously analyze cloud workloads, network traffic, and user behavior to detect threats and suspicious activities. They automate alert triage, escalate high-risk events, and coordinate remediation actions across hybrid environments. Integration with other Copilot Agents ensures unified protection and rapid containment of cloud-based threats.

Conditional Access Optimization Agent: Policy Automation

The Conditional Access Optimization Agent evaluates authentication patterns, risk signals, and user activity to recommend and enforce adaptive access policies. It automates policy updates based on real-time threat intelligence, ensuring that only authorized users access sensitive resources while minimizing friction for legitimate users.

Azure AI Agents: Cloud Security and Automation

Azure AI Agents provide automated monitoring, configuration validation, and vulnerability management across cloud resources.
They integrate with Defender for Cloud and Sentinel, enabling cross-platform correlation of security events and orchestration of incident response workflows. These agents help maintain compliance, optimize resource usage, and enforce best practices.

Purview AI Agents: Compliance and Data Protection

Purview AI Agents automate data classification, information protection, and compliance management for AI-powered applications and Copilot experiences. They enforce retention policies, flag sensitive data handling, and ensure regulatory compliance across organizational data assets. Their integration supports transparent security controls and audit-ready reporting.

Phishing Triage Defender for Office AI Agents: Email Threat Automation

Defender for Office AI Agents specialize in identifying, categorizing, and responding to phishing attempts. They analyze email metadata, attachments, and user interactions to detect malicious campaigns, automate alerting, and initiate containment actions. By streamlining phishing triage, these agents reduce investigation times and enhance protection against targeted attacks.

Threat Intelligence Briefing Agent: Contextual Security Insights

The Threat Intelligence Briefing Agent aggregates global threat intelligence, correlates it with local signals, and delivers actionable briefings to security teams. It highlights emerging risks, prioritizes vulnerabilities, and recommends remediation based on organizational context. This agent empowers teams with timely, relevant insights to anticipate and counter evolving threats.

Marketplace Integration and Custom Agent Creation

Organizations can leverage the Security Store to discover, deploy, and manage agents tailored to their specific needs. No-code tools facilitate custom agent creation, enabling rapid automation of unique workflows and seamless integration with existing security infrastructure.

Getting Started

To deploy Security Copilot Agents across the enterprise, make sure to:

Check Licensing: Ensure you have the required subscriptions and SCUs provisioned.
Enable Agents: Use product portals to activate agents and configure settings.
Integrate Across Products: Link agents for enhanced threat detection, compliance, and automated response.
Monitor and Optimize: Use dashboards and reports to track effectiveness and refine policies.

About the Author

Hi! Jacques “Jack” here, Microsoft Technical Trainer. As a technical trainer, I’ve seen firsthand how Security Copilot Agents accelerate secure modernization and empower teams to stay ahead of threats. Whether you’re optimizing identity protection, automating phishing triage, or streamlining endpoint remediation, these agents are your AI-powered allies in building a resilient security posture.

#MicrosoftLearn #SkilledByMTT #MTTBloggingGroup
- Step-by-Step Guide: Integrating Microsoft Purview with Azure Databricks and Microsoft Fabric

Co-Authored By: aryananmol, laurenkirkwood and mmanley

This article provides practical guidance on setup, cost considerations, and integration steps for Azure Databricks and Microsoft Fabric to help organizations plan for building a strong data governance framework. It outlines how Microsoft Purview can unify governance efforts across cloud platforms, enabling consistent policy enforcement, metadata management, and lineage tracking. The content is tailored for architects and data leaders seeking to implement governance in scalable, hybrid environments. Note: this article focuses mainly on the data governance features of Microsoft Purview.

Why Microsoft Purview

Microsoft Purview enables organizations to discover, catalog, and manage data across environments with clarity and control. Automated scanning and classification build a unified view of your data estate enriched with metadata, lineage, and sensitivity labels, and the Unified Catalog provides business-friendly search and governance constructs like domains, data products, glossary terms, and data quality. Note: Microsoft Purview Unified Catalog is being rolled out globally, with availability across multiple Microsoft Entra tenant regions; this page lists supported regions, availability dates, and deployment plans for the Unified Catalog service: Unified Catalog Supported Regions.

Understanding the Cost of Data Governance Features in Purview

Under the classic model, Data Map (Classic), users pay for an “always-on” Data Map capacity and scanning compute. In the new model, those infrastructure costs are subsumed into the consumption meters, meaning there are no direct charges for metadata storage or scanning jobs when using the Unified Catalog (Enterprise tier). Essentially, Microsoft stopped billing separately for the underlying data map and scan vCore-hours once you opt into the new model or start fresh with it. You only incur charges when you govern assets or run data processing tasks. This makes costs more predictable and tied to governance value: you can scan as much as needed to populate the catalog without worrying about scan fees, and then pay only for the assets you actively manage (“govern”) and any data quality processes you execute. In summary, Purview Enterprise’s pricing is usage-based and divided into two primary areas: (1) Governed Assets and (2) Data Processing (DGPUs).

Plan for Governance

Microsoft Purview’s data governance framework is built on two core components: Data Map and Unified Catalog. The Data Map acts as the technical foundation, storing metadata about assets discovered through scans across your data estate. It inventories sources and organizes them into collections and domains for technical administration. The Unified Catalog sits on top as the business-facing layer, leveraging the Data Map’s metadata to create a curated marketplace of data products, glossary terms, and governance domains for data consumers and stewards. Before onboarding sources, align the Unified Catalog (business-facing) and Data Map (technical inventory) and define roles, domains, and collections so ownership and access boundaries are clear. This documentation covers roles and permissions in Purview: Permissions in the Microsoft Purview portal | Microsoft Learn. The image above helps illustrate the relationship between the primary data governance solutions, Unified Catalog and Data Map, and the permissions granted by the roles for each solution.
Considerations and Steps for Setting up Purview

Steps for setting up Purview:

Step 1: Create a Purview account. In the Azure portal, use the search bar at the top to navigate to Microsoft Purview accounts. Once there, click “Create”. This will take you to the following screen:

Step 2: Click Next: Configuration and follow the wizard, completing the necessary fields, including information on networking, configurations, and tags. Then click Review + Create to create your Purview account.

Consideration (private networking): Use Private Endpoints to secure Unified Catalog/Data Map access and scan traffic; follow the new platform private endpoints guidance in the Microsoft Purview portal, or migrate classic endpoints.

Once your Purview account is created, you’ll want to set up and manage your organization’s governance strategy to ensure that your data is classified and managed according to the specific lifecycle guidelines you set. Note: Follow the steps in this guide to set up Microsoft Purview Data Lifecycle Management: Data retention policy, labeling, and records management.
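If you prefer to script the account-creation step above, a minimal sketch using the Azure CLI is shown below, assuming the purview CLI extension is available and using placeholder names (the portal wizard remains the path the article describes):

# One-time: add the Purview extension to the Azure CLI
az extension add --name purview

# Create the Purview account (name, resource group, and region are placeholders)
az purview account create \
  --name contoso-purview \
  --resource-group contoso-rg \
  --location westus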
Recommended is "Azure Databricks Unity Catalog". Step 3: Register your workspace. Here are the steps to register your data source: Steps to Register an Azure Databricks workspace in Microsoft Purview. Step 4: Initiate a scan for your workspace, following the steps here: Steps to scan Azure Databricks to automatically identify assets. Once you have entered the required information, test your connection and click Continue to set up a scheduled scan trigger. Step 5: For the scan trigger, choose whether to set up a schedule or run the scan once, according to your business needs. Step 6: From the left pane, select Data Map and select the data source for your workspace. You can view a list of existing scans on that data source under Recent scans, or view all scans on the Scans tab. Review further options here: Manage and Review your Scans. You can review your scanned data sources, history, and details here: Navigate to scan run history for a given scan.

Limitation: The "Azure Databricks Unity Catalog" data source in Microsoft Purview does not currently support connections via a managed VNet. As a workaround, the product team recommends using the "Azure Databricks Unity Catalog" source in combination with a self-hosted integration runtime (SHIR) to enable scanning and metadata ingestion. You can find setup guidance here: Create and manage SHIR in Microsoft Purview; Choose the right integration runtime configuration. Scoped scan support for Unity Catalog is expected to enter private preview soon; you can sign up here: https://aka.ms/dbxpreview.

Considerations: If you have delta-shared Databricks-to-Databricks workspaces, you may see duplicated data assets if you scan both workspaces. The workaround: as you add tables and data assets to a data product for governance in Microsoft Purview, identify the duplicates by their Fully Qualified Name (FQN). To make identification easier, look for the keyword "sharing" in the FQN, which indicates a delta-shared table; you can also apply tags to these tables for quicker filtering and selection. The screenshot above shows how the FQN appears in the interface, helping you confidently identify and manage your data assets.
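To find delta-shared duplicates at scale rather than checking FQNs by eye, the catalog search API can be queried for the keyword. A minimal sketch; the account name and api-version are placeholders to verify for your environment.

# Token for the Purview data plane
TOKEN=$(az account get-access-token --resource https://purview.azure.net --query accessToken -o tsv)

# Search the catalog for assets whose metadata (including the FQN) mentions "sharing"
curl -s -X POST "https://pv-contoso.purview.azure.com/catalog/api/search/query?api-version=2022-08-01-preview" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"keywords": "sharing", "limit": 25}'

Matching assets can then be tagged, making them quicker to filter out when assembling data products.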
Purview Integration with Microsoft Fabric

Understanding Fabric Integration: Connect cross-tenant: this refers to integrating Microsoft Fabric resources across different Microsoft Entra tenants. It enables organizations to share data, reports, and workloads securely between separate tenants, often used in multi-organization collaborations or partner ecosystems. Key considerations include authentication, data governance, and compliance with cross-tenant policies. Connect in-same-tenant: this involves connecting Fabric resources within the same Microsoft Entra tenant. It simplifies integration by leveraging shared identity and governance models, allowing seamless access to data, reports, and pipelines across different workspaces or departments under the same organizational umbrella.

Requirements: An Azure account with an active subscription (create an account for free) and an active Microsoft Purview account. Authentication is supported via managed identity, or via delegated authentication and a service principal.

Steps to Register a Fabric Tenant: Step 1: In the Microsoft Purview portal, navigate to the Data Map section from the left-hand menu, select Data Sources, and click Register to begin adding your Fabric tenant (which also includes Power BI). Step 2: Enter a data source name and keep the Tenant ID at its default (auto-populated) value. Microsoft Fabric and Microsoft Purview should be in the same tenant. Step 3: Enter a scan name and enable or disable scanning of personal workspaces. Under Credentials, you will notice an automatically created identity for authenticating the Purview account. Note: If your Purview account is behind a private network, follow the guidelines here: Connect to your Microsoft Fabric tenant in same tenant as Microsoft Purview. Step 4: In Microsoft Fabric, open Settings, click Tenant Settings, and within the Admin API Settings section enable "Service principals can access read-only admin APIs", "Enhanced admin API responses with detailed metadata", and "Enhanced admin API responses with DAX and mashup expressions". Step 5: Create a security group, add the Purview account's managed identity to the group, and add the group under the "Service principals can access read-only admin APIs" setting in your Microsoft Fabric tenant settings (a scripted sketch of this step appears after the references below). Step 6: Test your connection and set the scope for your scan: select the required workspaces, click Continue, and automate a scan trigger. Step 7: From the left pane, select Data Map and select the data source for your tenant. You can view a list of existing scans on that data source under Recent scans, or view all scans on the Scans tab. Review further options here: Manage and Review your Scans. You can review your scanned data sources, history, and details here: Navigate to scan run history for a given scan.

Why Customers Love Purview: Kern County unified its approach to securing and governing data with Microsoft Purview, ensuring consistent compliance and streamlined data management across departments. EY accelerated secure AI development by leveraging the Microsoft Purview SDK, enabling robust data governance and privacy controls for advanced analytics and AI initiatives. Prince William County Public Schools created a more cyber-safe classroom environment with Microsoft Purview, protecting sensitive student information while supporting digital learning. The FSA (Food Standards Agency) helps keep the UK food supply safe using Microsoft Purview Records Management, ensuring regulatory compliance and safeguarding critical data assets.

Conclusion: Purview's Unified Catalog centralizes governance across discovery, catalog management, and health management. The governance features in Purview allow organizations to confidently answer critical questions: What data do we have? Where did it come from? Who is responsible for it? Is it secure and compliant? Can we trust its quality? Microsoft Purview, when integrated with Azure Databricks and Microsoft Fabric, provides a unified approach to cataloging, classifying, and governing data across diverse environments. By leveraging Purview's Unified Catalog, Data Map, and advanced governance features, organizations can achieve end-to-end visibility, enforce consistent policies, and improve data quality. You might ask: why does data quality matter? Because in today's world, data is the new gold.

References: Microsoft Purview | Microsoft Learn; Pricing - Microsoft Purview | Microsoft Azure; Use Microsoft Purview to Govern Microsoft Fabric; Connect to and manage Azure Databricks Unity Catalog in Microsoft Purview
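A practical addendum to Step 5 of the Fabric registration above: the security group and its membership can be scripted, although the tenant setting itself is still enabled in the Fabric admin portal. A hedged sketch assuming the preview purview CLI extension; names are placeholders and flag spellings may vary by version.

# Create the group that will be granted read-only admin API access in Fabric
az ad group create --display-name "purview-fabric-readonly" --mail-nickname "purview-fabric-readonly"

# Look up the Purview account's system-assigned managed identity and add it to the group
PRINCIPAL_ID=$(az purview account show --resource-group rg-data-governance --account-name pv-contoso --query identity.principalId -o tsv)
az ad group member add --group "purview-fabric-readonly" --member-id "$PRINCIPAL_ID"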
- Introducing Microsoft Sentinel graph (Public Preview). Security is being reengineered for the AI era, moving beyond static, rule-bound controls and after-the-fact response toward platform-led, machine-speed defense. The challenge is clear: fragmented tools, sprawling signals, and legacy architectures that can't match the velocity and scale of modern attacks. What's needed is an AI-ready, data-first foundation, one that turns telemetry into a security graph, standardizes access for agents, and coordinates autonomous actions while keeping humans in command of strategy and high-impact investigations. Security teams already center operations on their SIEM for end-to-end visibility, and we're advancing that foundation by evolving Microsoft Sentinel into both the SIEM and the platform for agentic defense, connecting analytics and context across ecosystems. Today, we announced the general availability of Sentinel data lake and introduced new preview platform capabilities built on Sentinel data lake (Figure 1), so protection accelerates to machine speed while analysts do their best work.

We are excited to announce the public preview of Microsoft Sentinel graph: a deeply connected map of your digital estate across endpoints, cloud, email, identity, and SaaS apps, enriched with our threat intelligence. Sentinel graph, a core capability of the Sentinel platform, enables defenders and agentic AI to connect the dots and gain deep context quickly, supporting modern defense across pre-breach and post-breach scenarios. Starting today, we are delivering new graph-based analytics and interactive visualization capabilities across Microsoft Defender and Microsoft Purview.

Attackers think in graphs. For a long time, defenders have been limited to querying and analyzing data in lists, forcing them to think in silos. With Sentinel graph, defenders and AI can quickly reveal relationships and traversable digital paths to understand blast radius, privilege escalation, and anomalies across large, cloud-scale data sets. By deriving this deep contextual insight across the digital estate, SOC teams and their AI agents can stay proactive and resilient. With Sentinel graph-powered experiences in Defender and Purview, defenders can now reason over assets, identities, activities, and threat intelligence to accelerate detection, hunting, investigation, and response.

Incident graph in Defender: The incident graph in the Microsoft Defender portal is now enriched with the ability to analyze the blast radius of an active attack. During an incident investigation, blast radius analysis quickly evaluates and visualizes the vulnerable paths an attacker could take from a compromised entity to a critical asset. This allows SOC teams to prioritize and focus their attack mitigation and response, saving critical time and limiting impact.

Hunting graph in Defender: Threat hunting often requires connecting disparate pieces of data to uncover the hidden paths attackers exploit to reach your crown jewels. With the new hunting graph, analysts can visually traverse the complex web of relationships between users, devices, and other entities to reveal privileged access paths to critical assets. This graph-powered exploration shifts security operations from reactive alert handling to proactive threat hunting, enabling SOC teams to surface vulnerabilities and intercept attacks before they gain momentum.
Data risk graph in Purview Insider Risk Management (IRM): Investigating data leaks and insider risks is challenging when information is scattered across multiple sources. The data risk graph in IRM offers a unified view across SharePoint and OneDrive, connecting users, assets, and activities. Investigators can see not just what data was leaked, but also the full blast radius of risky user activity. This context helps data security teams triage alerts, understand the impact of incidents, and take targeted actions to prevent future leaks.

Data risk graph in Purview Data Security Investigations (DSI): To truly understand a data breach, you need to follow the trail, tracking files and their activities across every tool and source. The data risk graph does this by automatically combining unified audit logs, Entra audit logs, and threat intelligence, providing invaluable insight. With the power of the data risk graph, data security teams can pinpoint sensitive data access and movement, map potential exfiltration paths, and visualize the users and activities linked to risky files, all in one view.

Getting started: Microsoft Defender: If you already have the Sentinel data lake, the required graph will be auto-provisioned when you log in to the Defender portal; the hunting graph and incident graph experiences will appear there. New to the data lake? Use the Sentinel data lake onboarding flow to provision the data lake and graph. Microsoft Purview: Follow the Sentinel data lake onboarding flow to provision the data lake and graph. In Purview Insider Risk Management (IRM), follow the instructions here. In Purview Data Security Investigations (DSI), follow the instructions here.

Reference links: Watch Microsoft Secure; Microsoft Secure news blog; Data lake blog; MCP server blog; ISV blog; Security Store blog; Copilot blog; Microsoft Sentinel—AI-Powered Cloud SIEM | Microsoft Security
- Conditional Access - Block all M365 apps on private mobile devices. Hello, I've tried to block all private mobile phones from accessing all M365 apps, but it won't work. I'm testing it at the moment with one test.user@. I created a CA rule:
Cloud apps
Include: All cloud apps
Exclude: Microsoft Intune Enrollment
Exclude: Microsoft Intune
Conditions
Device platforms: Include: Android, iOS, Windows Phone
Filter for devices: Devices matching the rule: Exclude filtered devices from policy
device.deviceOwnership -eq "Company"
Client apps
Include: all four options
Access controls
Block access
----------------------- I take a freshly set up "private" Android phone, download the Outlook app, and log in with test.user@ in the Outlook app, and everything works fine. What am I doing wrong? Please help. Peter
- Need to create monitoring queries to track the health status of data connectors. I'm working with Microsoft Sentinel and need to create monitoring queries to track the health status of data connectors. Specifically, I want to: identify unhealthy or disconnected data connectors; determine when a data connector last lost connection; and get historical connection status information. What I'm looking for: a KQL query that can be run in the Sentinel workspace to check connector status, or a PowerShell script/command that can retrieve this information; ideally, something that can be automated for regular monitoring. I've been looking at the SentinelHealth table, but I'm unsure about the exact schema, connector names, etc.; checking whether there are specific tables that track connector status changes; and considering Azure Resource Graph or the management APIs. I've tried multiple approaches (KQL, PowerShell, Resource Graph), but I can't get the information I'm looking for. Please assist. For example, I see this Microsoft docs page, https://learn.microsoft.com/en-us/azure/sentinel/monitor-data-connector-health#supported-data-connectors, but I would like my query to surface data such as: the last ingestion time for each table; how much data has been ingested by specific tables and connectors; which connectors are currently connected; and the overall health of my connectors. Please help.
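As a starting point for the monitoring described above, here is a minimal sketch. It assumes Sentinel's auditing and health monitoring is enabled on the workspace (that is what populates SentinelHealth), the log-analytics CLI extension is installed, and the placeholder workspace GUID is replaced; column names should be checked against the current SentinelHealth schema documentation.

# Latest health status per data connector over the past 7 days
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query 'SentinelHealth
    | where TimeGenerated > ago(7d)
    | where SentinelResourceType == "Data connector"
    | summarize arg_max(TimeGenerated, Status, Description) by SentinelResourceName'

# Last ingestion time and ingested volume (MB) per table, via the Usage table
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query 'Usage
    | summarize LastIngestion = max(TimeGenerated), IngestedMB = sum(Quantity) by DataType
    | order by LastIngestion desc'

The first query answers which connectors are healthy and when each last reported; the second covers last ingestion and volume per table. For unattended monitoring, the same KQL can be wrapped in a scheduled analytics rule or an Azure Monitor log alert.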