<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>FastTrack for Azure articles</title>
    <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/bg-p/FastTrackforAzureBlog</link>
    <description>FastTrack for Azure articles</description>
    <pubDate>Fri, 17 Apr 2026 11:38:51 GMT</pubDate>
    <dc:creator>FastTrackforAzureBlog</dc:creator>
    <dc:date>2026-04-17T11:38:51Z</dc:date>
    <item>
      <title>Modernizing On‑Prem File Servers: Azure Storage Mover and File Sync</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/modernizing-on-prem-file-servers-azure-storage-mover-and-file/ba-p/4500204</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Azure File Sync&lt;/STRONG&gt; is a hybrid cloud storage service that centralizes on-premises file shares into Azure Files while preserving the experience of a local Windows file server. It installs an agent on Windows Server(s) to cache and sync files to an Azure file share (the cloud backend). Key features include cloud tiering (keeping only hot files on-prem and tiering cold data to Azure) and multi-site synchronization, so changes propagate across servers via the cloud. In essence, Azure File Sync transforms your file server into a cache for Azure, providing continuous two-way sync between on-premises and the Azure file share.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Storage Mover&lt;/STRONG&gt; is a fully managed migration service used to transfer file data into Azure Storage (Azure Blob containers or Azure file shares) with minimal downtime. It works by deploying a migration agent near the source storage, which then copies data directly to Azure. The cloud-based Storage Mover resource orchestrates migrations (including initial bulk transfer and optional delta syncs for changes) across multiple shares from a central interface. Unlike File Sync, Storage Mover is not a continuous sync service but rather a one-directional data mover for scenarios like one-time migrations or periodic updates.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Side-by-Side Technical Comparison&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Aspect&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Azure File Sync&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Storage Mover&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Primary Purpose&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Hybrid file service&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;:&lt;/EM&gt; Ongoing two-way sync between on-prem file servers and Azure Files, enabling local caching and multi-site data sharing.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Migration tool&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;:&lt;/EM&gt; One-way transfers of file data from on-prem (or other storage) to Azure Storage, optimized for lift-and-shift migrations with minimal downtime.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Architecture&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Agent on Windows Server connects local NTFS volumes to an Azure File Share (cloud endpoint). The Azure Storage Sync Service coordinates sync across servers in a sync group. Supports cloud tiering to offload cold files to cloud. Sync is continuous and multi-directional (all endpoints stay in sync).&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Cloud service + on-prem agent: An Azure Storage Mover resource manages migration jobs. Lightweight migration agents (VMs or containers) run near your sources, sending data directly to the Azure target (Blob or File share). The service orchestrates project-based migrations but does not keep sources in sync after completion.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Supported Sources&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Windows Server file servers with NTFS volumes (the sync agent requires Windows Server). Clients continue to access the data over SMB or NFS as before. Ideal for Windows-based file shares.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;SMB shares, NFS exports, and similar file systems on any platform (Windows or Linux NAS). Also supports migrating from other clouds’ storage (e.g., S3 buckets) into Azure. Broad support for heterogeneous sources.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Supported Targets&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Azure Files only (the cloud endpoint is an SMB Azure file share in a storage account). NFS Azure file shares are not supported as sync targets.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Azure Storage (Blob containers or Azure file shares). For example, can migrate into an Azure Blob (ADLS Gen2) container or an Azure File share depending on scenario.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Sync vs. Migration&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Continuous Sync: Bi-directional; changes on-prem or in Azure propagate to all endpoints. Designed for long-term hybrid operation, not just a one-time move.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Batch Migration: One-time or repeated transfer; not a live sync. Typically used to move data entirely to Azure (cutover once done). Supports incremental (delta) syncs to capture changes between migration runs.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Performance &amp;amp; Scale&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Scales to large datasets (tested up to 100 million files per sync group). Throughput can reach hundreds of files/sec for upload/download given sufficient resources. Performance depends on server hardware, network, and Azure Files limits.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Built to handle high-volume migrations (100M+ files). Can scale out by deploying multiple agents or using bigger VMs to increase throughput. Performance mainly limited by network bandwidth and source/target IOPS and can be optimized by parallel jobs and delta sync workflows.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Integration&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Deep integration with Azure Files (the back-end store) and Windows Server. Works with Azure Backup for centralized backups or using file share snapshots. Leverages Azure’s redundancy (LRS/ZRS/GRS) for durability; supports Azure AD DS for identity integration to maintain ACLs in cloud.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Integrated with Azure Arc (for agent management) and can combine with Azure Data Box for hybrid migrations (offline + online phases). Managed using Azure Portal and CLI, provides logging/monitoring through Azure Monitor for migration jobs. No ongoing infrastructure after migration completes.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;STRONG&gt;Advantages of Azure File Sync:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Hybrid Cloud Caching:&lt;/STRONG&gt; Retains on-premises low-latency file access (via local Windows servers) while using Azure as central storage. End users and apps continue using a local file server interface.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cloud Tiering &amp;amp; Storage Efficiency:&lt;/STRONG&gt; Frees up local storage by tiering infrequently used files to Azure. This reduces on-prem disk usage without sacrificing access to full dataset (files are pulled from cloud on demand).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Multi-site Sync &amp;amp; Collaboration:&lt;/STRONG&gt; Enables near-real-time sync across multiple servers/sites via Azure hub. Great for distributed teams sharing a common file set, replacing need for complex DFS-R setups or manual transfers.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Minimal Disruption Migration:&lt;/STRONG&gt; Can be used as a no-downtime migration path to Azure Files – sync in background, then cut over clients to the Azure share with identical structure and ACLs.&lt;/LI&gt;
&lt;/UL&gt;
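&lt;P&gt;The moving parts described above (sync service, sync group, cloud endpoint, server endpoint) can be sketched with the Azure CLI. This is a minimal outline rather than a production script: all resource names are placeholders, the &lt;EM&gt;storagesync&lt;/EM&gt; CLI extension is assumed, and the server is assumed to be already registered (the agent is installed and registered on the Windows Server separately, which is where $server_id comes from):&lt;/P&gt;

```shell
# Assumes the 'storagesync' Azure CLI extension; all names are placeholders.
az extension add --name storagesync

rg=my-rg
sync_service=my-sync-service
storage_account=mystorageacct

# 1. Storage Sync Service: the top-level resource coordinating sync
az storagesync create -g $rg -n $sync_service -l eastus2

# 2. Sync group: defines the sync topology for one share
az storagesync sync-group create -g $rg --storage-sync-service $sync_service -n corp-share

# 3. Cloud endpoint: the Azure file share acting as the hub
az storagesync sync-group cloud-endpoint create -g $rg \
  --storage-sync-service $sync_service --sync-group-name corp-share \
  -n cloud-ep --storage-account $storage_account --azure-file-share-name corpshare

# 4. Server endpoint: a path on an already-registered Windows Server,
#    with cloud tiering enabled to keep 20% of the volume free locally
az storagesync sync-group server-endpoint create -g $rg \
  --storage-sync-service $sync_service --sync-group-name corp-share \
  -n server-ep --registered-server-id $server_id \
  --server-local-path "D:\\Shares\\Corp" \
  --cloud-tiering on --volume-free-space-percent 20
```

The cloud-tiering parameters in step 4 are what implement the "cache for Azure" behavior described earlier; verify exact parameter names against the current extension documentation.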
&lt;P&gt;&lt;STRONG&gt;Advantages of Azure Storage Mover:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Purpose-Built for Migration:&lt;/STRONG&gt; Optimized for transferring data at scale into Azure. Can handle large one-time migrations or scheduled recurrent syncs without continuous agent overhead post-migration. Simplifies multi-terabyte or multi-site migration projects.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Heterogeneous Source Support:&lt;/STRONG&gt; Works with a variety of source types (SMB, NFS shares on any OS) and can migrate into both Azure Files and Azure Blob, providing flexibility that Azure File Sync can’t (e.g., migrating Linux NFS servers or third-party storage to Azure).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Centralized Orchestration:&lt;/STRONG&gt; Cloud architects can manage all migrations via a single Azure Storage Mover resource – with projects &amp;amp; jobs tracking progress per share. Logging, error handling, and coordination are unified, unlike scripting copy operations per server.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Incremental &amp;amp; Low Downtime:&lt;/STRONG&gt; Supports delta synchronization to bring the target up-to-date after an initial bulk copy. This reduces cutover downtime since only last-minute changes need transferring. Also integrates with offline seeding (Data Box) plus online catch-up to minimize network strain and downtime.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Common Use Cases and When to Choose Each&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure File Sync – Use Cases:&lt;/STRONG&gt; Ideal when you need to maintain on-premises file server access while leveraging cloud storage. Some common scenarios:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Branch Office File Sharing:&lt;/STRONG&gt; Multiple offices each have a local file server, all synced to a central Azure Files share. Users get fast local access, and the cloud ensures each site’s data stays consistent.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;File Server Augmentation:&lt;/STRONG&gt; You want to extend an existing Windows file server with virtually unlimited cloud capacity (via tiering) rather than fully moving to cloud. Azure File Sync offloads old data and provides cloud backup, but users and apps continue as normal with the local server.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Gradual Migration / Testing:&lt;/STRONG&gt; You plan to eventually migrate to Azure Files but want a seamless, no-downtime transition. Deploy AFS on the server, let it sync all data to cloud, then optionally eliminate the on-prem server later. This way, users never stop accessing their files during the migration process.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Storage Mover – Use Cases:&lt;/STRONG&gt; Best when you have a defined migration project to move file shares to Azure, especially for heterogeneous environments or large volumes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Data Center Exit / Large File Share Migration:&lt;/STRONG&gt; Moving tens or hundreds of TBs of data from on-prem NAS or file servers to Azure Storage as a one-time project. The Storage Mover’s robust scalability and delta-sync capabilities help ensure a smooth transfer with minimal final cutover downtime.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Consolidating Cross-Platform Data to Azure:&lt;/STRONG&gt; If you need to migrate non-Windows file systems (Linux NFS, etc.) or even data from other clouds into Azure, Storage Mover supports those sources out-of-the-box. For example, migrating a Linux file repository into Azure Blob for big data analytics.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Recurring Scheduled Migrations:&lt;/STRONG&gt; In cases where you periodically copy data from on-prem to Azure (e.g., monthly exports from a local system to cloud for archiving), Storage Mover can be run as needed and centrally monitored, without maintaining a constant sync infrastructure in between.&lt;/LI&gt;
&lt;/UL&gt;
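&lt;P&gt;The project-based orchestration described above maps directly onto CLI resources. A hedged sketch using the &lt;EM&gt;storage-mover&lt;/EM&gt; CLI extension (all names and IDs are placeholders; agent registration and job execution are separate steps, and command details should be verified against the current extension documentation):&lt;/P&gt;

```shell
# Assumes the 'storage-mover' Azure CLI extension; names/IDs are placeholders.
az extension add --name storage-mover

rg=my-rg
mover=my-mover

# 1. Top-level Storage Mover resource (the central orchestrator)
az storage-mover create -g $rg -n $mover -l eastus2

# 2. Project: groups the migration jobs for one initiative
az storage-mover project create -g $rg --storage-mover-name $mover -n nas-exit

# 3. Endpoints: describe the source (an NFS export) and the target (a blob container)
az storage-mover endpoint create-for-nfs-mount -g $rg --storage-mover-name $mover \
  -n src-nfs --host 10.0.0.5 --export /data
az storage-mover endpoint create-for-storage-container -g $rg --storage-mover-name $mover \
  -n dst-blob --storage-account-id $storage_account_id --container-name migrated-data

# 4. Job definition: source -> target with Mirror copy mode, so repeated
#    runs transfer only deltas before the final cutover
az storage-mover job-definition create -g $rg --storage-mover-name $mover \
  --project-name nas-exit -n data-job \
  --source-name src-nfs --target-name dst-blob --copy-mode Mirror
```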
&lt;P&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;When to choose which: Use Azure File Sync if your goal is to keep using on-prem servers and need a hybrid solution for the foreseeable future, or if you require distributed caching and continuous sync for collaboration. It’s essentially part of your production architecture for hybrid cloud file storage. Conversely, choose Azure Storage Mover when you want to permanently migrate data into Azure (and possibly decommission on-prem storage), or when dealing with a one-off bulk transfer task. In summary, Azure File Sync is an ongoing hybrid file service, and Azure Storage Mover is a one-time (or scheduled) migration service. Both can complement your cloud strategy, but they address distinct scenarios in a cloud architect’s toolkit.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 08 Mar 2026 21:46:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/modernizing-on-prem-file-servers-azure-storage-mover-and-file/ba-p/4500204</guid>
      <dc:creator>SriniThumala</dc:creator>
      <dc:date>2026-03-08T21:46:02Z</dc:date>
    </item>
    <item>
      <title>Azure Firewall and Service Endpoints</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/azure-firewall-and-service-endpoints/ba-p/4404021</link>
      <description>&lt;P&gt;In my recent blog series &lt;A href="https://blog.cloudtrooper.net/category/privatelink/privatelinkrealitybites/" target="_blank"&gt;Private Link reality bites&lt;/A&gt; I briefly mentioned the possibility of inspecting Service Endpoints with Azure Firewall, and many have asked for more details on that configuration. Here we go!&lt;/P&gt;
&lt;P&gt;First things first: what the heck am I talking about? Most Azure services such as Azure Storage, Azure SQL and many others can be accessed directly over the public Internet. However, there are two alternatives to access those services over Microsoft's backbone: &lt;A href="https://learn.microsoft.com/azure/private-link/private-link-overview" target="_blank"&gt;Private Link&lt;/A&gt; and &lt;A href="https://learn.microsoft.com/azure/virtual-network/virtual-network-service-endpoints-overview" target="_blank"&gt;VNet Service Endpoints&lt;/A&gt;. Microsoft's overall recommendation is using private link, but some organizations prefer leveraging service endpoints. Feel free to read &lt;A href="https://blog.cloudtrooper.net/2025/02/17/private-link-reality-bites-service-endpoints-vs-private-link/" target="_blank"&gt;this post &lt;/A&gt;on a comparison of the two.&lt;/P&gt;
&lt;P&gt;You might want to inspect traffic to Azure services with network firewalls, even if that traffic is leveraging service endpoints. Before doing so, please consider that sending high-bandwidth traffic through a firewall might have cost implications and impact on the overall application latency. If you still want to go ahead, this post is going to explain how to do it.&lt;/P&gt;
&lt;H1&gt;The Design&lt;/H1&gt;
&lt;P&gt;Service endpoints have two configuration parts:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Source subnet configuration to tunnel traffic to the destination service.&lt;/LI&gt;
&lt;LI&gt;Destination service configuration to accept traffic from the source subnet.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The key concept to understand is that if traffic from the client is going to be inspected by a firewall before going to the Azure service, then the source subnet is actually the Azure Firewall's subnet, not the original client's subnet:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;You can configure service endpoints for a specific Azure service on a subnet using the portal, Terraform, Bicep, PowerShell or the Azure CLI. In the portal this is what it looks like:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;For Azure CLI, enabling service endpoints for Azure Storage accounts in all regions would look like this:&lt;/P&gt;
&lt;PRE&gt;❯ rg=my-rg                        # resource group (placeholder)&lt;BR /&gt;❯ vnet_name=firewall-vnet         # VNet containing the firewall (placeholder)&lt;BR /&gt;❯ subnet_name=AzureFirewallSubnet&lt;BR /&gt;❯ az network vnet subnet update -n $subnet_name --vnet-name $vnet_name -g $rg --service-endpoints Microsoft.Storage.Global -o none --only-show-errors&lt;/PRE&gt;
&lt;P&gt;You would then configure your Azure services to accept traffic coming from the Azure Firewall subnet. For example, for Azure Storage Accounts this is what you would see in the portal:&lt;/P&gt;
&lt;img /&gt;
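&lt;P&gt;The same storage account configuration can be done from the CLI. A minimal sketch with placeholder resource names, which first locks the account down and then allows the firewall's subnet:&lt;/P&gt;

```shell
# Placeholder names; the AzureFirewallSubnet must already have the
# Microsoft.Storage service endpoint enabled (see the previous step).
rg=my-rg
storage_account=mystorageacct
fw_vnet=firewall-vnet

# Deny all network traffic to the storage account by default...
az storage account update -g $rg -n $storage_account --default-action Deny

# ...then allow traffic originating from the Azure Firewall subnet
az storage account network-rule add -g $rg --account-name $storage_account \
  --vnet-name $fw_vnet --subnet AzureFirewallSubnet
```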
&lt;H1&gt;Network Rules or Application Rules?&lt;/H1&gt;
&lt;P&gt;Ideally you should use Application Rules in your firewall to make sure that your workloads are accessing the right Azure services and not exfiltrating data to rogue storage services that might be owned by somebody else but could still share the same IP address.&lt;/P&gt;
&lt;P&gt;This is an example of a star rule granting access to all Azure Storage Accounts, but you should specify your own:&lt;/P&gt;
&lt;img /&gt;
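&lt;P&gt;With the classic (non-policy) rule model, a similar wildcard rule can be created from the CLI. A sketch with placeholder names and the address ranges used in this post; if the firewall is managed by a Firewall Policy, the equivalent would be done with the policy rule-collection-group commands instead:&lt;/P&gt;

```shell
# Classic-rules example; names are placeholders.
rg=my-rg
fw_name=my-azfw

# Allow the workload subnet to reach any *.blob.core.windows.net FQDN over HTTPS.
# In production, list your specific storage account FQDNs instead of a wildcard.
az network firewall application-rule create -g $rg -f $fw_name \
  --collection-name Allow-Storage --priority 200 --action Allow \
  -n blob-accounts --protocols Https=443 \
  --source-addresses 10.13.76.0/26 \
  --target-fqdns "*.blob.core.windows.net"
```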
&lt;P&gt;I tested with two storage accounts, one in the same region and another one in a different region than the client (the client being in this case the Azure Firewall). Access to both storage accounts is working, and as you can see both accesses are logged in the Storage Account as well as in the Azure Firewall (I have removed some characters from the storage account names to obfuscate them):&lt;/P&gt;
&lt;PRE&gt;❯ ssh $vm_pip "curl -s4 $storage_blob_fqdn1"&lt;BR /&gt;Hello world! &lt;BR /&gt;❯ ssh $vm_pip "curl -s4 $storage_blob_fqdn2"&lt;BR /&gt;Hello world!&lt;BR /&gt;❯ query='StorageBlobLogs | where TimeGenerated &amp;gt; ago(15m) | project AccountName, StatusCode, CallerIpAddress'&lt;BR /&gt;❯ az monitor log-analytics query -w $logws_customerid --analytics-query $query -o table&lt;BR /&gt;AccountName              CallerIpAddress     StatusCode  &lt;BR /&gt;----------------------   -----------------   ----------&lt;BR /&gt;storagetest????eastus2   10.13.76.72:10066   200&lt;BR /&gt;storagetest????westus2   10.13.76.72:11880   200&lt;BR /&gt;❯ query='AzureDiagnostics | where TimeGenerated &amp;gt; ago(15m) | where Category == "AZFWApplicationRule" | project SourceIP, Fqdn_s, Protocol_s, Action_s' &lt;BR /&gt;❯ az monitor log-analytics query -w $logws_customerid --analytics-query $query -o table &lt;BR /&gt;Action_s  Fqdn_s                                       Protocol_s SourceIP&lt;BR /&gt;--------- -------------------------------------------- ---------- ----------&lt;BR /&gt;Allow     storagetest????eastus2.blob.core.windows.net HTTPS      10.13.76.4 &lt;BR /&gt;Allow     storagetest????westus2.blob.core.windows.net HTTPS      10.13.76.4&lt;/PRE&gt;
&lt;P&gt;The storage account sees as client IP the Azure Firewall's IP (in the subnet 10.13.76.64/26), and the Azure Firewall logs show the actual client IP (in the workload subnet 10.13.76.0/26).&lt;/P&gt;
&lt;P&gt;If you use network rules you lose much of Azure Firewall's flexibility, even with FQDN-based rules. If two storage accounts with different FQDNs shared the same IP address and the same client resolved both FQDNs, Azure Firewall could not tell, when inspecting a packet, which storage account that packet belongs to. From a routing perspective it would still work, though, as long as you keep the default SNAT settings of Azure Firewall, which translate the source IP address when public IP addresses are involved:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Conclusion&lt;/H1&gt;
&lt;P&gt;There are some reasons why you might want to pick VNet service endpoints over private link (cost would probably be one of them). If so, there are advantages and disadvantages of sending traffic to Azure services via a firewall. If you decide to inspect traffic to VNet service endpoints with Azure Firewall, hopefully this post has shown you how to do that.&lt;/P&gt;
&lt;P&gt;What are your thoughts about this?&lt;/P&gt;</description>
      <pubDate>Mon, 14 Apr 2025 11:48:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/azure-firewall-and-service-endpoints/ba-p/4404021</guid>
      <dc:creator>cloudtrooper</dc:creator>
      <dc:date>2025-04-14T11:48:33Z</dc:date>
    </item>
    <item>
      <title>Azure Private Endpoint vs. Service Endpoint: A Comprehensive Guide</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/azure-private-endpoint-vs-service-endpoint-a-comprehensive-guide/ba-p/4363095</link>
      <description>&lt;P&gt;When building secure and scalable applications on Microsoft Azure, network connectivity becomes a critical factor. Azure provides two primary methods for enhancing security and connectivity: &lt;STRONG&gt;Private Endpoints&lt;/STRONG&gt; and &lt;STRONG&gt;Service Endpoints&lt;/STRONG&gt;. While both serve to establish secure connections to Azure resources, they function in distinct ways and cater to different networking needs. This blog will explain the differences between the two, their use cases, and when you should use each.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt; Understanding Service Endpoints&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Azure &lt;STRONG&gt;Service Endpoints&lt;/STRONG&gt; allow you to securely connect to Azure services over an optimized route through the Azure backbone network. When you enable service endpoints on a virtual network, they extend the private IP address space of that virtual network to the service. Essentially, they provide a direct, secure connection to Azure services like &lt;STRONG&gt;Azure Storage&lt;/STRONG&gt;, &lt;STRONG&gt;Azure SQL Database&lt;/STRONG&gt;, and &lt;STRONG&gt;Azure Key Vault&lt;/STRONG&gt; without requiring the traffic to traverse the public internet.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key Characteristics of Service Endpoints:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Public Services, Private IP&lt;/STRONG&gt;: Service endpoints allow traffic to go through the Azure backbone but still access services using their public IP addresses. However, the traffic is not exposed to the internet.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Network Security Group (NSG) Integration&lt;/STRONG&gt;: Service endpoints can be secured using NSGs, which control access based on source IP addresses and subnet configurations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Public DNS Resolution&lt;/STRONG&gt;: Service endpoints route traffic using the service's public DNS name. The network path is private over the Azure backbone, but name resolution still relies on public DNS.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Use Cases for Service Endpoints:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Simplified Security&lt;/STRONG&gt;: Service endpoints are ideal for connecting to Azure services in a straightforward manner without needing complex configurations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Lower Latency&lt;/STRONG&gt;: Since traffic is routed through the Azure backbone network, there’s less congestion compared to public internet traffic.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Integration with NSG&lt;/STRONG&gt;: Service endpoints allow for tighter security control with Network Security Groups, ensuring only approved subnets and virtual networks can access specific services.&lt;/LI&gt;
&lt;/UL&gt;
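&lt;P&gt;In practice, a service endpoint involves two steps: enabling it on the subnet, then allowing that subnet on the service's firewall. A minimal Azure CLI sketch with placeholder resource names, using Azure Storage as the example service:&lt;/P&gt;

```shell
# Placeholder resource names throughout.
rg=my-rg
vnet=app-vnet
subnet=app-subnet
storage_account=mystorageacct

# 1. Enable the Microsoft.Storage service endpoint on the subnet
az network vnet subnet update -g $rg --vnet-name $vnet -n $subnet \
  --service-endpoints Microsoft.Storage

# 2. Restrict the storage account so only that subnet is allowed
az storage account update -g $rg -n $storage_account --default-action Deny
az storage account network-rule add -g $rg --account-name $storage_account \
  --vnet-name $vnet --subnet $subnet
```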
&lt;OL start="2"&gt;
&lt;LI&gt;&lt;STRONG&gt; Understanding Private Endpoints&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;Private Endpoints&lt;/STRONG&gt;, on the other hand, provide a direct, private connection to Azure resources by assigning a private IP address from your virtual network (VNet) to the service. Unlike service endpoints, which rely on public IPs, private endpoints fully encapsulate the service in a private address space. When a service is accessed via a private endpoint, the connection stays within the Azure network, preventing exposure to the public internet.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key Characteristics of Private Endpoints:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Private IP Connectivity&lt;/STRONG&gt;: Private endpoints map Azure resources to a private IP in your VNet, ensuring all traffic remains private and not exposed to the internet.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;DNS Resolution&lt;/STRONG&gt;: Private endpoints also require DNS configuration so that the private IP address can be resolved for the associated Azure service. Azure offers automatic DNS resolution for private endpoints, but custom DNS configurations can also be set.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;End-to-End Security&lt;/STRONG&gt;: Since the connection is over a private IP, it adds an additional layer of security by preventing any egress or ingress to public networks.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Use Cases for Private Endpoints:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Critical Security&lt;/STRONG&gt;: Private endpoints are perfect for applications requiring high security, such as those handling sensitive data, financial transactions, or proprietary business logic.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Strict Regulatory Compliance&lt;/STRONG&gt;: If you are dealing with highly regulated industries, private endpoints provide a way to ensure your data is not exposed to the public internet.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Network Isolation&lt;/STRONG&gt;: Private endpoints are suited for scenarios where you want to fully isolate your Azure resources from the internet and only allow access from within your VNet.&lt;/LI&gt;
&lt;/UL&gt;
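&lt;P&gt;Because private endpoints rely on both a private IP and DNS configuration (as noted above), provisioning one typically involves the endpoint itself plus a private DNS zone. A hedged Azure CLI sketch with placeholder names, where $storage_id is the resource ID of an existing storage account:&lt;/P&gt;

```shell
# Placeholder names; $storage_id is an existing storage account's resource ID.
rg=my-rg
vnet=app-vnet
subnet=pe-subnet

# 1. Create the private endpoint for the blob sub-resource
az network private-endpoint create -g $rg -n pe-storage \
  --vnet-name $vnet --subnet $subnet \
  --private-connection-resource-id $storage_id \
  --group-id blob --connection-name pe-storage-conn

# 2. Private DNS zone so the account's FQDN resolves to the private IP
az network private-dns zone create -g $rg -n "privatelink.blob.core.windows.net"
az network private-dns link vnet create -g $rg \
  -z "privatelink.blob.core.windows.net" -n pe-dns-link -v $vnet -e false

# 3. Attach the zone to the endpoint so its A record is managed automatically
az network private-endpoint dns-zone-group create -g $rg \
  --endpoint-name pe-storage -n default \
  --private-dns-zone "privatelink.blob.core.windows.net" --zone-name blob
```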
&lt;OL start="3"&gt;
&lt;LI&gt;&lt;STRONG&gt; Key Differences: Private Endpoint vs. Service Endpoint&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Feature&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Private Endpoint&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Service Endpoint&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Connection Type&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Uses a private IP address from your VNet&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Uses the service's public IP address, but traffic stays on the Azure backbone&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;DNS Resolution&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Requires DNS configuration to resolve private IPs&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Relies on public DNS for resolution&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Use Case&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Ideal for critical security and isolated traffic&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Best for connecting to Azure services with basic security requirements&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Supported Services&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Limited to resources that support private endpoints&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Supports a broader range of Azure services like Storage, SQL, etc.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;STRONG&gt;Note: &lt;/STRONG&gt;When configured appropriately (for example, with service firewall rules and NSGs), service endpoints can be as secure as private endpoints for many scenarios.&lt;/P&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;&lt;STRONG&gt; When to Use Each Option&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Choose Service Endpoints if&lt;/STRONG&gt;:&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;You want to connect to Azure services like Storage, SQL, or Key Vault using the Azure backbone network.&lt;/LI&gt;
&lt;LI&gt;Your security requirements do not mandate complete isolation from the public internet.&lt;/LI&gt;
&lt;LI&gt;You need to leverage Network Security Groups (NSGs) to limit access from specific subnets or VNets.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Choose Private Endpoints if&lt;/STRONG&gt;:&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Your application requires full isolation from the public internet, such as for sensitive workloads or highly regulated data.&lt;/LI&gt;
&lt;LI&gt;You want traffic to flow entirely within the private network, ensuring complete confidentiality.&lt;/LI&gt;
&lt;LI&gt;You need to maintain strict security standards for applications that interact with services like databases, storage accounts, or other critical infrastructure.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;OL start="5"&gt;
&lt;LI&gt;&lt;STRONG&gt; Conclusion&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Both &lt;STRONG&gt;Private Endpoints&lt;/STRONG&gt; and &lt;STRONG&gt;Service Endpoints&lt;/STRONG&gt; play vital roles in securing connectivity to Azure services, but they cater to different security needs. &lt;STRONG&gt;Service Endpoints&lt;/STRONG&gt; offer an easier, simpler way to secure access over the Azure backbone, while &lt;STRONG&gt;Private Endpoints&lt;/STRONG&gt; provide complete isolation and enhanced security by assigning a private IP address.&lt;/P&gt;
&lt;P&gt;By carefully assessing your application's security needs and performance requirements, you can choose the appropriate method to ensure optimal connectivity and compliance with Azure services.&lt;/P&gt;</description>
      <pubDate>Fri, 31 Oct 2025 19:47:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/azure-private-endpoint-vs-service-endpoint-a-comprehensive-guide/ba-p/4363095</guid>
      <dc:creator>SriniThumala</dc:creator>
      <dc:date>2025-10-31T19:47:42Z</dc:date>
    </item>
    <item>
      <title>FastTrack for Azure (FTA) program retiring December 2024</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/fasttrack-for-azure-fta-program-retiring-december-2024/ba-p/4357383</link>
      <description>&lt;P&gt;&lt;STRONG&gt;ATTENTION: &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;As of December 31st, 2024, the FastTrack for Azure (FTA) program will be retired.&amp;nbsp; FTA will support any projects currently in motion to ensure successful completion by December 31st, 2024, but will no longer accept new nominations.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-teams="true"&gt;For more information on available programs and resources, visit: &lt;A href="https://azure.microsoft.com/en-us/solutions/migration/migrate-modernize-innovate/" aria-label="Link Azure Migrate, Modernize, and Innovate | Microsoft Azure" target="_blank"&gt;Azure Migrate, Modernize, and Innovate | Microsoft Azure&lt;/A&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 13 Dec 2024 20:21:12 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/fasttrack-for-azure-fta-program-retiring-december-2024/ba-p/4357383</guid>
      <dc:creator>JillArmourMicrosoft</dc:creator>
      <dc:date>2024-12-13T20:21:12Z</dc:date>
    </item>
    <item>
      <title>Azure Backup vs. Azure Site Recovery: Key Differences Explained</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/azure-backup-vs-azure-site-recovery-key-differences-explained/ba-p/4356084</link>
      <description>&lt;P&gt;When it comes to safeguarding your data and ensuring business continuity, Microsoft Azure offers two powerful solutions: &lt;STRONG&gt;Azure Backup&lt;/STRONG&gt; and &lt;STRONG&gt;Azure Site Recovery (ASR)&lt;/STRONG&gt;. Although both services are critical components of a comprehensive disaster recovery strategy, they serve distinct purposes. In my experience working with hundreds of customers, many are unsure which service fits their use case. In some cases, customers need both services to meet their business requirements.&lt;/P&gt;
&lt;P&gt;Here's a breakdown of their key differences:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt; Purpose and Functionality&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/backup/" target="_blank"&gt;&lt;STRONG&gt;Azure Backup&lt;/STRONG&gt;&lt;/A&gt;: This service focuses on &lt;STRONG&gt;data backup and restoration&lt;/STRONG&gt;. It provides a simple, secure, and reliable way to back up files, folders, applications, and virtual machines (VMs) to Azure. Azure Backup protects against data loss due to accidental deletion, ransomware attacks, or corruption.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-azure-tutorial-enable-replication" target="_blank"&gt;&lt;STRONG&gt;Azure Site Recovery (ASR)&lt;/STRONG&gt;:&lt;/A&gt; ASR is designed for &lt;STRONG&gt;disaster recovery and business continuity&lt;/STRONG&gt;. It replicates workloads running on physical or virtual machines to a secondary location to ensure seamless failover during a disaster.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Use Case&lt;/EM&gt;:&lt;/STRONG&gt; Azure Backup is ideal for long-term data retention, whereas ASR is critical for minimizing downtime and ensuring workload availability during outages.&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;&lt;STRONG&gt; Core Capabilities&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Backup&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/backup/backup-overview#why-use-azure-backup" target="_blank"&gt;Creates backups&lt;/A&gt; for Azure VMs, On-prem VMs, Azure Managed Disks, Azure file shares, SQL server in Azure VMs, SAP HANA databases in Azure VMs, Azure Blobs, Azure Kubernetes services and Azure Database for PostgreSQL servers.&lt;/LI&gt;
&lt;LI&gt;Supports both on-premises and cloud-based resources.&lt;/LI&gt;
&lt;LI&gt;Provides long-term retention and lifecycle management for backups.&lt;/LI&gt;
&lt;LI&gt;Offers encryption at rest and in transit to secure data.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Site Recovery&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/site-recovery/site-recovery-overview#what-can-i-replicate" target="_blank"&gt;Replicate&lt;/A&gt; Azure VMs, On-premises VMs and VMWare VMs.&lt;/LI&gt;
&lt;LI&gt;Continuous replication of workloads for low recovery point objectives (RPOs).&lt;/LI&gt;
&lt;LI&gt;Orchestrated failover and failback capabilities.&lt;/LI&gt;
&lt;LI&gt;Multi-region disaster recovery for VMs and physical servers.&lt;/LI&gt;
&lt;LI&gt;Integration with BCDR (Business Continuity and Disaster Recovery) plans.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Key Differentiator&lt;/EM&gt;:&lt;/STRONG&gt; Azure Backup is about &lt;STRONG&gt;data recovery&lt;/STRONG&gt;, while ASR is about &lt;STRONG&gt;workload continuity&lt;/STRONG&gt;.&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;&lt;STRONG&gt; Recovery Objectives&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Backup&lt;/STRONG&gt;: Focuses on the &lt;STRONG&gt;Recovery Time Objective (RTO)&lt;/STRONG&gt; and &lt;STRONG&gt;Recovery Point Objective (RPO)&lt;/STRONG&gt; for restoring individual files or entire systems. RPO depends on the backup schedule (e.g., daily or hourly backups).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Site Recovery&lt;/STRONG&gt;: Aims to minimize &lt;STRONG&gt;downtime&lt;/STRONG&gt; by ensuring applications and workloads are quickly available in a secondary location during an outage. It delivers lower RTO and RPO compared to backup solutions.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
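&lt;P&gt;The recovery-objective arithmetic above can be sketched in a few lines. The backup interval and replication lag used here are illustrative assumptions, not service guarantees.&lt;/P&gt;

```python
# Worst-case RPO for scheduled backups is the interval between backups:
# data written just after a backup completes can be lost until the next run.
def worst_case_rpo_minutes(backup_interval_hours):
    return backup_interval_hours * 60

# Illustrative comparison: daily backups vs. continuous replication.
daily_backup_rpo = worst_case_rpo_minutes(24)  # 1440 minutes of potential data loss
replication_rpo = 5                            # assumed replication lag, in minutes
print(daily_backup_rpo, replication_rpo)
```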
&lt;OL start="4"&gt;
&lt;LI&gt;&lt;STRONG&gt; Data Recovery vs. Workload Recovery&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Backup: &lt;/STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/backup/backup-azure-arm-restore-vms" target="_blank"&gt;Restores&lt;/A&gt; data at a granular level (e.g., files, folders, or entire systems).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Site Recovery: &lt;/STRONG&gt;Ensures entire workloads, including infrastructure and applications, are replicated and can be failed over to another location.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;OL start="5"&gt;
&lt;LI&gt;&lt;STRONG&gt; Cost&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="https://azure.microsoft.com/en-us/pricing/details/backup/?ef_id=_k_CjwKCAiA6t-6BhA3EiwAltRFGDTWVA0Mb4GB9jOUPQxseM0_NSaAZzrGD-ZNiGIDZ-JVk69ya2FQhRoCpzEQAvD_BwE_k_&amp;amp;OCID=AIDcmm5edswduu_SEM__k_CjwKCAiA6t-6BhA3EiwAltRFGDTWVA0Mb4GB9jOUPQxseM0_NSaAZzrGD-ZNiGIDZ-JVk69ya2FQhRoCpzEQAvD_BwE_k_&amp;amp;gad_source=1&amp;amp;gclid=CjwKCAiA6t-6BhA3EiwAltRFGDTWVA0Mb4GB9jOUPQxseM0_NSaAZzrGD-ZNiGIDZ-JVk69ya2FQhRoCpzEQAvD_BwE" target="_blank"&gt;Azure Backup&lt;/A&gt;&lt;/STRONG&gt;: Costs are primarily based on the size of backed-up data and the number of recovery points stored in the Recovery Services Vault.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="https://azure.microsoft.com/en-us/pricing/details/site-recovery/?ef_id=_k_CjwKCAiA6t-6BhA3EiwAltRFGCln7JoaLarswI9g0pPvMmrgQ5fA2lUJ4vZlR2I_JA7rIth0TXsf7BoChG4QAvD_BwE_k_&amp;amp;OCID=AIDcmm5edswduu_SEM__k_CjwKCAiA6t-6BhA3EiwAltRFGCln7JoaLarswI9g0pPvMmrgQ5fA2lUJ4vZlR2I_JA7rIth0TXsf7BoChG4QAvD_BwE_k_&amp;amp;gad_source=1&amp;amp;gclid=CjwKCAiA6t-6BhA3EiwAltRFGCln7JoaLarswI9g0pPvMmrgQ5fA2lUJ4vZlR2I_JA7rIth0TXsf7BoChG4QAvD_BwE" target="_blank"&gt;Azure Site Recovery&lt;/A&gt;&lt;/STRONG&gt;: Pricing is driven by the number of instances being replicated and the storage consumed by replicated data.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Comparison&lt;/EM&gt;:&lt;/STRONG&gt; Azure Backup is generally more cost-effective for data protection, whereas ASR justifies its higher cost by providing enterprise-grade disaster recovery capabilities.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Final Thoughts&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Backup and Azure Site Recovery are complementary solutions that address different aspects of data protection and disaster recovery. For long-term data retention and restoration, Azure Backup is the go-to solution. For mission-critical applications requiring business continuity during disruptions, Azure Site Recovery ensures workloads remain operational with minimal downtime.&lt;/P&gt;
&lt;P&gt;A robust IT strategy often involves leveraging both services to cover the spectrum of data protection and recovery needs, ensuring business resilience no matter the scenario.&lt;/P&gt;</description>
      <pubDate>Tue, 10 Dec 2024 20:16:17 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/azure-backup-vs-azure-site-recovery-key-differences-explained/ba-p/4356084</guid>
      <dc:creator>SriniThumala</dc:creator>
      <dc:date>2024-12-10T20:16:17Z</dc:date>
    </item>
    <item>
      <title>Oracle Database@Azure DNS Setup</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/oracle-database-azure-dns-setup/ba-p/4304513</link>
      <description>&lt;H1&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;/H1&gt;
&lt;P&gt;The following article describes considerations and recommendations for setting up Domain Name Service (DNS) when deploying Oracle Database@Azure. The goal of this article is to provide recommended technical guidance to enhance customer experience for a stable cloud environment. The article assumes that the reader has a basic understanding of Oracle Database technologies and Azure compute and networking. As part of your preparation and architecture design process, see &lt;A class="lia-external-url" href="https://github.com/AnthonyDelagarde/architecture-center-pr/blob/anthdela/azure/cloud-adoption-framework/ready/azure-best-practices/dns-for-on-premises-and-azure-resources" target="_blank" rel="noopener"&gt;DNS for on-premises and Azure - Cloud Adoption Framework&lt;/A&gt; for more information.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Scenario&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;You want to migrate your on-premises Oracle databases to Oracle Database@Azure. During the migration, you need to ensure proper name resolution both on-premises and in Azure. Oracle Data Guard will be used for the data migration. After migration, you aim to maintain reliable name resolution over your hybrid connection, without a custom DNS zone, for the applications that Oracle Database@Azure will communicate with.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;DNS Deployment options for Oracle Database@Azure&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;The option that satisfies the mentioned&amp;nbsp;&lt;STRONG&gt;scenario&lt;/STRONG&gt; and its requirements is to extend the DNS infrastructure from on-premises into Azure with an IaaS (Infrastructure as a Service) solution.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;DNS prerequisites before you deploy Oracle Database@Azure in Azure&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Review the different DNS deployment scenario workflows with Oracle Database@Azure.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Oracle Database@Azure default DNS setup workflow&lt;/STRONG&gt;&lt;/H5&gt;
&lt;img /&gt;
&lt;H5&gt;&lt;STRONG&gt;Oracle Database@Azure custom DNS setup workflow&lt;/STRONG&gt;&lt;/H5&gt;
&lt;img /&gt;
&lt;H5&gt;&lt;STRONG&gt;Oracle Database@Azure external DNS setup workflow&lt;/STRONG&gt;&lt;/H5&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;If you don't choose a custom DNS domain, Oracle Exadata uses the default domain, oraclevcn.com. The oraclevcn.com zone is auto-provisioned in Azure as a private DNS zone linked to the Oracle Database@Azure virtual network. After Oracle Database@Azure is deployed, records from OCI begin to auto-populate the oraclevcn.com private DNS zone in Azure.&lt;/LI&gt;
&lt;LI&gt;A private view and private zone must be created before deploying the Exadata cluster if you plan to use a custom DNS zone such as contoso.com with Oracle Database@Azure. For details, read &lt;A class="lia-external-url" href="https://docs.oracle.com/iaas/Content/Network/Concepts/dns-topic-Private-resolver.htm" target="_blank" rel="noopener"&gt;Private DNS Resolvers&lt;/A&gt;. The private resolver created in the OCI private view is not authoritative; it is applied only for name resolution external to the OCI VCN compartment.&lt;/LI&gt;
&lt;LI&gt;Forwarding to another DNS server should be set up beforehand in the OCI DNS console if your deployment scenario requires this. For details, see &lt;A class="lia-external-url" href="https://docs.oracle.com/iaas/Content/Network/Concepts/dns.htm" target="_blank" rel="noopener"&gt;DNS in your Virtual Cloud Network.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;A private zone's name can't have more than four labels. For example, a.b.c.d (oradb.prod.contoso.com) is allowed while a.b.c.d.e (oradb.sales.prod.contoso.com) isn't. This constraint applies only when using a custom DNS zone such as contoso.com to provision the database cluster. Review the following Oracle article: &lt;A class="lia-external-url" href="https://docs.oracle.com/iaas/Content/Network/Concepts/dns.htm" target="_blank" rel="noopener"&gt;DNS in your Virtual Cloud Network.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;When provisioning an Exadata VM Cluster using the Private DNS feature, Exadata needs to create reverse DNS zones in the compartment of the Exadata virtual machine (VM) cluster. If the compartment defines tags or tag defaults, additional policies related to managing tags are needed. For details, see:&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://docs.oracle.com/iaas/Content/Tagging/Tasks/managingtagsandtagnamespaces.htm#Who" target="_blank" rel="noopener"&gt;Required Permissions for Working with Defined Tags&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://docs.oracle.com/iaas/Content/Tagging/Tasks/managingtagdefaults.htm#requriedIAM" target="_blank" rel="noopener"&gt;Required Permissions for Working with Tag Defaults&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
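&lt;P&gt;To illustrate the four-label constraint above, a quick pre-deployment check might look like the following. This is a hypothetical helper, not an Oracle or Azure API.&lt;/P&gt;

```python
def is_valid_private_zone(fqdn):
    """Private zones used with Oracle Database@Azure allow at most four labels."""
    labels = [label for label in fqdn.strip(".").split(".") if label]
    return len(labels) in range(1, 5)

print(is_valid_private_zone("oradb.prod.contoso.com"))        # four labels: allowed
print(is_valid_private_zone("oradb.sales.prod.contoso.com"))  # five labels: not allowed
```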
&lt;H5&gt;&lt;STRONG&gt;Custom DNS&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Let us address the scenario: an Oracle database migration from on-premises to Azure. There are various tools and methods to migrate an Oracle database, Oracle Data Guard being one example. Configuration details of Oracle Data Guard are outside the scope of this article.&lt;/P&gt;
&lt;P&gt;Before configuring Oracle Data Guard for data replication from an on-premises database to Azure, it’s essential to ensure the on-premises database can resolve the FQDN of the Oracle Database@Azure instance. The Oracle Database@Azure cluster’s client network advertises three IPs, which are linked to a Single Client Access Name (SCAN) address. This SCAN address provides service access to clients connecting to Oracle Database@Azure and offers redundancy for the cluster. In the event of a node failure within the cluster, the SCAN FQDN maintains uninterrupted client connections during the failover process as the new primary node becomes active. This setup negates the necessity for manual DNS round-robin configurations, which were common in traditional on-premises Oracle database deployments.&lt;/P&gt;
&lt;P&gt;Proper resolution of the SCAN FQDN is a prerequisite for establishing direct connectivity from the on-premises database to the Oracle Database@Azure instance. Name resolution from Oracle Database@Azure to resources in the on-premises datacenter must also function. This connection could be facilitated through Azure ExpressRoute or a high-bandwidth VPN. Additionally, it’s crucial to confirm that Oracle Database@Azure can resolve the on-premises Oracle database using the custom DNS servers deployed in Azure. Once name resolution and network connectivity are assured, Oracle Data Guard can be configured to initiate data migration using the Oracle Database@Azure SCAN FQDN on TCP (Transmission Control Protocol) port 1521. When the data migration completes, you can perform the eventual cutover from the legacy Oracle database to Azure.&lt;/P&gt;
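&lt;P&gt;These resolution and connectivity prerequisites can be checked from the on-premises side before configuring Data Guard. The following is a minimal sketch using only the Python standard library; the SCAN FQDN shown is a placeholder for your cluster's actual name.&lt;/P&gt;

```python
import socket

def can_resolve(fqdn):
    """Return True if the name resolves through the configured DNS servers."""
    try:
        socket.getaddrinfo(fqdn, None)
        return True
    except socket.gaierror:
        return False

def port_reachable(host, port=1521, timeout=5):
    """Check TCP reachability of the Oracle listener port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

scan_fqdn = "myexadata-scan.oraclevcn.com"  # placeholder; use your cluster's SCAN FQDN
if can_resolve(scan_fqdn):
    print("SCAN FQDN resolves; listener reachable:", port_reachable(scan_fqdn))
else:
    print("SCAN FQDN does not resolve; check DNS forwarding and zone links")
```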
&lt;P&gt;The following diagram represents the implementation of a DNS solution with VMs (Virtual Machines) or appliances from the Azure marketplace with the network path that DNS name resolution would follow across your Azure Landing Zone (ALZ).&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The VMs can run Windows, Linux, or FreeBSD, or be Active Directory domain controllers with integrated DNS for name resolution. These systems in Azure would be integrated into the overall DNS infrastructure of your organization. The SCAN FQDN, on-premises databases, and on-premises legacy applications would be registered on DNS servers deployed across the enterprise.&lt;/P&gt;
&lt;P&gt;Once the migration is complete, an application such as an NFS share, a web server, or an AKS (Azure Kubernetes Service) cluster in either Azure or the on-premises datacenter receives a client request that initiates a DNS query. The DNS server returns the SCAN FQDN associated with the Oracle Database@Azure cluster to the application that initiated the query, facilitating the connection to the database for read or write operations.&lt;/P&gt;
&lt;P&gt;You can query the SCAN FQDN from the oraclevcn.com DNS zone or apply your own domain and naming convention in your DNS servers as a CNAME record. An example of how to create a CNAME record on a Windows DNS server is in this article: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/windows-server/networking/core-network-guide/cncg/server-certs/create-an-alias-cname-record-in-dns-for-web1#to-add-an-alias-cname-resource-record-to-a-zone" target="_blank" rel="noopener"&gt;To add an alias (CNAME) resource record in a DNS zone.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;If the application resides in Azure, it queries the custom DNS servers defined in the Azure virtual network. On-premises applications use the on-premises DNS servers defined in the datacenter.&lt;/P&gt;
&lt;P&gt;Review the following recommendations to ensure this implementation supports both reliability and redundancy.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A DNS solution on IaaS deployed in Azure must have the Azure wireserver IP, 168.63.129.16, set as a conditional forwarder for proper name resolution with Azure Private DNS zones. Virtual network links need to connect each Azure Private DNS zone with the virtual network that hosts the IaaS DNS resources. This includes zones such as oraclevcn.com and other private zones supporting other Azure services. Details about the Azure wireserver are in this article: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-network/what-is-ip-address-168-63-129-16" target="_blank" rel="noopener"&gt;What is IP 168.63.129.16&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Virtual network links need to be created for each Azure Private DNS zone and connected to the virtual network where the DNS server VMs or the DNS appliance resides for proper name resolution to work.&lt;/LI&gt;
&lt;LI&gt;Deploy a minimum of two VMs in the same virtual network and Azure region. Use Virtual Machine Scale Sets (VMSS) with Flexible orchestration for redundancy, and configure DNS settings on virtual networks to use the IPs of the custom DNS servers. Read recommendations for redundancy in this article: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/well-architected/reliability/redundancy" target="_blank" rel="noopener"&gt;Recommendations for designing redundancy&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Please look for updates to this article and design patterns on Microsoft Learn and Microsoft Architecture Center.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Next Steps&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/oracle/oracle-db/onboard-oracle-database" target="_blank" rel="noopener"&gt;Onboard Oracle Database@Azure&lt;/A&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/oracle/oracle-db/oracle-database-network-plan" target="_blank" rel="noopener"&gt;Network Planning for Oracle Database@Azure&lt;/A&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/oracle/oracle-db/oracle-database-groups-roles" target="_blank" rel="noopener"&gt;Groups and Roles for Oracle Database@Azure &lt;/A&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/oracle/oracle-db/provision-oracle-database" target="_blank" rel="noopener"&gt;Overview of Provisioning&lt;/A&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Nov 2024 15:47:53 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/oracle-database-azure-dns-setup/ba-p/4304513</guid>
      <dc:creator>adelagarde</dc:creator>
      <dc:date>2024-11-19T15:47:53Z</dc:date>
    </item>
    <item>
      <title>Connecting to Azure SQL Database using SQLAlchemy and Microsoft Entra authentication</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/connecting-to-azure-sql-database-using-sqlalchemy-and-microsoft/ba-p/4259772</link>
      <description>&lt;P&gt;In this blog, we will focus on a common solution that demonstrates how to securely connect to an Azure SQL Database using &lt;STRONG&gt;Microsoft Entra Authentication &lt;/STRONG&gt;with the current logged in user. It leverages the &lt;STRONG&gt;SQLAlchemy&lt;/STRONG&gt; library for Python, integrating Entra's secure identity framework with your database connection.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;Key Steps:&lt;/U&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;EM&gt;Set Current User as Admin&lt;/EM&gt;: You begin by configuring a Microsoft Entra account as the admin for the Azure SQL server.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;Configure Firewall Rules&lt;/EM&gt;: Ensure that your machine or application has access by adding its IP address to the Azure SQL Server firewall.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;Create Secure Connection&lt;/EM&gt;: Finally, the Python &lt;STRONG&gt;SQLAlchemy&lt;/STRONG&gt;&amp;nbsp;library is used to connect to the database, relying on Microsoft Entra authentication instead of hard-coded credentials.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;With this setup, we achieve a secure, credential-less connection to Azure SQL Database!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Comparing Azure SQL Authentication Methods&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Before diving into the solution, let's compare authentication methods. When it comes to securing access to your Azure SQL Database, the method you choose for authentication can significantly impact both the security and manageability of your applications. There are two primary methods commonly used: SQL Authentication, which relies on username and password credentials, and Microsoft Entra Managed Identity, which utilizes Microsoft Entra ID (formerly Azure AD) for identity and access management.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;SQL Authentication Drawbacks&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;SQL Authentication, while straightforward, comes with inherent security risks and management burdens. One of the main concerns is the reliance on hard-coded or stored credentials, often passed through connection strings in application code or configuration files. Additionally, stored static credentials allow continued access until explicitly revoked, enlarging your database's attack surface. For example, when using SQL Authentication, developers might include connection credentials like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;connection_string = "Driver={SQL Server};Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;Uid=yourusername;Pwd=yourpassword;"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this example, embedding the username and password in the application introduces several vulnerabilities:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;EM&gt;Credential Exposure&lt;/EM&gt;: If the codebase is shared, leaked, or compromised, database credentials can be exposed.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;Secret Management&lt;/EM&gt;: You need solutions like Azure Key Vault to securely store and rotate credentials, adding complexity.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;Credential Rotations&lt;/EM&gt;: SQL credentials require manual or automated rotation, increasing operational overhead.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Improved Security with Microsoft Entra authentication&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Microsoft Entra authentication&lt;/STRONG&gt; (formerly known as Azure AD) offers a more secure and manageable way to authenticate applications and users to Azure SQL Database. Instead of relying on stored credentials, Microsoft Entra uses tokens generated dynamically and securely by Azure's identity management system, eliminating the need for static credentials in your applications or configuration files.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;Key Security Advantages:&lt;/U&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;EM&gt;Credential-less Access&lt;/EM&gt;: No need to store or transmit sensitive credentials (username and password) in code or configuration files.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;Time-Limited Access&lt;/EM&gt;: Entra-generated tokens have limited lifetimes, reducing the risk of misuse or unauthorized access over extended periods.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;Centralized Management&lt;/EM&gt;: Entra integrates seamlessly with other Azure services, providing centralized identity and access control across your applications.&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;Role-Based Access Control (RBAC)&lt;/EM&gt;: By using Entra authentication, access can be more finely tuned using RBAC, meaning users only get the permissions they need to perform their tasks.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;In contrast to SQL Authentication, which requires manually revoking credentials, Microsoft Entra authentication ensures that when access to an account is revoked, it immediately affects all Azure services, preventing further unauthorized access. This vastly reduces the risk of security breaches due to stale credentials lingering in code repositories or configuration files.&lt;/P&gt;
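&lt;P&gt;To make the token flow above concrete, here is a minimal sketch of how an Entra access token is typically handed to the Microsoft ODBC driver from Python: the token is acquired with the azure-identity package, encoded as UTF-16-LE with a 4-byte length prefix, and passed via the driver's access-token pre-connect attribute (1256, SQL_COPT_SS_ACCESS_TOKEN). Server and database names in the usage sketch are placeholders.&lt;/P&gt;

```python
SQL_COPT_SS_ACCESS_TOKEN = 1256  # msodbcsql pre-connect attribute for access tokens

def pack_token(token):
    """Encode the token as the driver expects: UTF-16-LE bytes, 4-byte length prefix."""
    raw = token.encode("utf-16-le")
    return len(raw).to_bytes(4, "little") + raw

def entra_connect_attrs():
    """Acquire an Entra token for Azure SQL and build a pyodbc attrs_before mapping."""
    from azure.identity import DefaultAzureCredential  # requires the azure-identity package
    credential = DefaultAzureCredential()
    token = credential.get_token("https://database.windows.net/.default").token
    return {SQL_COPT_SS_ACCESS_TOKEN: pack_token(token)}

# Usage sketch (server and database are placeholders):
# conn = pyodbc.connect(
#     "Driver={ODBC Driver 18 for SQL Server};Server=tcp:yourserver.database.windows.net,1433;"
#     "Database=yourdb;Encrypt=yes;",
#     attrs_before=entra_connect_attrs(),
# )
```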
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Pre-requisites&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;An&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/free/python/" target="_blank" rel="noopener" data-linktype="external"&gt;Azure subscription&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;An Azure SQL database configured with Microsoft Entra authentication. You can create one using the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/single-database-create-quickstart?view=azuresql" target="_blank" rel="noopener" data-linktype="relative-path"&gt;Create database quickstart&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;The latest version of the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/get-started-with-azure-cli" target="_blank" rel="noopener" data-linktype="absolute-path"&gt;Azure CLI&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Visual Studio Code with the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://marketplace.visualstudio.com/items?itemName=ms-python.python" target="_blank" rel="noopener" data-linktype="external"&gt;Python extension&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Python 3.8 or later.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server" target="_self"&gt;ODBC Driver for SQL Server&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Configure the Database&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Setting Current User as Azure SQL DB Admin&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;First, you need to set your current Azure AD user as the Azure SQL Admin for your database. Follow the steps below:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Navigate to Your Azure SQL Server:
&lt;UL&gt;
&lt;LI&gt;Log in to the &lt;A href="https://portal.azure.com" target="_self"&gt;Azure Portal&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Search for and select your &lt;STRONG&gt;Azure SQL Server&lt;/STRONG&gt;&amp;nbsp;(not the individual database).&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Set Azure AD Admin:
&lt;UL&gt;
&lt;LI&gt;In the left-hand menu, under &lt;STRONG&gt;Settings&lt;/STRONG&gt;, click on &lt;STRONG&gt;Microsoft Entra ID&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Select &lt;STRONG&gt;Support Only Microsoft Entra authentication for this server&lt;/STRONG&gt;&amp;nbsp;to ensure no one can access the database server using SQL login credentials.&lt;/LI&gt;
&lt;LI&gt;Click on &lt;STRONG&gt;Set admin&lt;/STRONG&gt;.
&lt;UL&gt;
&lt;LI&gt;In the Add admin pane, search for your user account.&lt;/LI&gt;
&lt;LI&gt;Select your account and click&amp;nbsp;&lt;STRONG&gt;Select&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;This will set your user as a database admin and allow it to log in using Microsoft Entra authentication.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Click on &lt;STRONG&gt;Save&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-120px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Adding Your IP Address to the Azure SQL Server Firewall&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To ensure your connection to Azure SQL Database is secure and allowed, you will need to add your IP address to the server's firewall rules. This step prevents unauthorized IPs from accessing your server while allowing your trusted IP to connect. Follow these steps:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Navigate to Your Azure SQL Server:
&lt;UL&gt;
&lt;LI&gt;Log in to the &lt;A href="https://portal.azure.com" target="_self"&gt;Azure Portal&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Search for and select your &lt;STRONG&gt;Azure SQL Server&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Configure Firewall Settings:
&lt;UL&gt;
&lt;LI&gt;In the left-hand menu under &lt;STRONG&gt;Security&lt;/STRONG&gt;, select &lt;STRONG&gt;Networking&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;In the &lt;STRONG&gt;Public network access&lt;/STRONG&gt;&amp;nbsp;section, select &lt;STRONG&gt;Selected networks&lt;/STRONG&gt; so that firewall rules can allow your IP address.&lt;/LI&gt;
&lt;LI&gt;Under the &lt;STRONG&gt;Firewall rules&lt;/STRONG&gt; section, click on &lt;STRONG&gt;Add your client IPv4 address&lt;/STRONG&gt;. This will automatically detect your current IP address and add it to the list of allowed addresses.&lt;/LI&gt;
&lt;LI&gt;Click on &lt;STRONG&gt;Allow Azure services and resources to access this server&lt;/STRONG&gt;. This will allow your web app running on Azure to access the database.&lt;/LI&gt;
&lt;LI&gt;Click on &lt;STRONG&gt;Save&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-120px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;At this point, we have set up an Azure AD user as the admin for the Azure SQL Server, enforcing Entra ID (formerly Azure AD) authentication and eliminating the need for SQL login credentials. This reduces the risk of credential exposure while streamlining identity management. We also added your IP to the Azure SQL Server firewall whitelist, ensuring only authorized IP addresses can connect, minimizing exposure to external threats.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these security measures in place, we are ready to securely connect and interact with the Azure SQL Database using Python, leveraging Microsoft Entra ID for seamless, credential-free authentication.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Set up the project&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that the database setup is complete, we are ready to implement the code that will interact with the database. We will be using &lt;A href="https://www.sqlalchemy.org/" target="_self"&gt;SQLAlchemy&lt;/A&gt;, which provides many database capabilities for Python developers, such as ORM support and connection pooling.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
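&lt;P&gt;SQLAlchemy's engine maintains a pool of open database connections that are reused across requests instead of reconnecting each time. As a rough, standard-library-only sketch of that idea (this illustrates the pooling concept only, not SQLAlchemy's actual implementation; the class and names here are made up):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import queue
import sqlite3


class TinyConnectionPool:
    """Illustrative pool: pre-open a fixed number of connections and
    check them out/in instead of opening a new one per request."""

    def __init__(self, database, size=3):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            # check_same_thread=False lets a pooled connection be reused across threads
            self._pool.put(sqlite3.connect(database, check_same_thread=False))

    def acquire(self):
        return self._pool.get()  # blocks if every connection is checked out

    def release(self, conn):
        self._pool.put(conn)


pool = TinyConnectionPool(":memory:")
conn = pool.acquire()
result = conn.execute("SELECT 1 + 1").fetchone()[0]  # result is 2
pool.release(conn)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;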
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN&gt;1. Open Visual Studio Code and create a new folder for your project and change directory into it.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;mkdir python-sql-azure
cd python-sql-azure&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;2.&amp;nbsp;Create a &lt;STRONG&gt;requirements.txt&lt;/STRONG&gt; file with the following content:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;pyodbc
fastapi
uvicorn[standard]
pydantic
azure-identity
sqlalchemy&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;3. Create a start.sh file (this is only needed if you plan to deploy this project to azure)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;gunicorn -w 4 -k uvicorn.workers.UvicornWorker app:app&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;4. Create an &lt;STRONG&gt;app.py&lt;/STRONG&gt; file with the content below:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import struct
import urllib
from typing import Union, Optional
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import sqlalchemy as db
from sqlalchemy import String, select, event
from sqlalchemy.orm import Session, Mapped, mapped_column, declarative_base
from azure.identity import DefaultAzureCredential


driver_name = '{ODBC Driver 18 for SQL Server}'
server_name = 'sql-fg-database-s4ujd'
database_name = 'sqldb-fg-database-s4ujd'

connection_string = 'Driver={};Server=tcp:{}.database.windows.net,1433;Database={};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30'.format(driver_name, server_name, database_name)

Base = declarative_base()
credential = DefaultAzureCredential()


class PersonSchema(BaseModel):
    first_name: str
    last_name: Union[str, None] = None


class Person(Base):
    __tablename__ = "Persons"

    id: Mapped[int] = mapped_column(primary_key=True, name="ID")
    first_name: Mapped[str] = mapped_column(String(30), name="FirstName")
    last_name: Mapped[Optional[str]] = mapped_column(name="LastName")

    def __repr__(self) -&amp;gt; str:
        return f"Person(ID={self.id!r}, FirstName={self.first_name!r}, LastName={self.last_name!r})"


def get_engine():
    params = urllib.parse.quote(connection_string)
    url = "mssql+pyodbc:///?odbc_connect={0}".format(params)
    return db.create_engine(url)


engine = get_engine()


# from https://docs.sqlalchemy.org/en/20/core/engines.html#generating-dynamic-authentication-tokens
@event.listens_for(engine, "do_connect")
def provide_token(dialect, conn_rec, cargs, cparams):
    """
        Called before the engine creates a new connection. Injects an EntraID token into the connection parameters.
    """
    print('creating new token')

    token_bytes = credential.get_token("https://database.windows.net/.default").token.encode("UTF-16-LE")
    token_struct = struct.pack(f'&amp;lt;I{len(token_bytes)}s', len(token_bytes), token_bytes)
    SQL_COPT_SS_ACCESS_TOKEN = 1256  # This connection option is defined by microsoft in msodbcsql.h

    cparams["attrs_before"] = {SQL_COPT_SS_ACCESS_TOKEN: token_struct}


# set up the database
Base.metadata.create_all(engine)


app = FastAPI()


@app.get("/all")
def get_persons():
    with Session(engine) as session:
        stmt = select(Person)

        rows = []
        for person in session.scalars(stmt):
            print(person.id, person.first_name, person.last_name)
            rows.append(f"{person.id}, {person.first_name}, {person.last_name}")

        return rows


@app.get("/person/{person_id}")
def get_person(person_id: int):
    with Session(engine) as session:
        stmt = select(Person).where(Person.id == person_id)
        person = session.execute(stmt).scalar()

        if not person:
            raise HTTPException(status_code=404, detail="Person not found")

        return f"{person.id}, {person.first_name}, {person.last_name}"


@app.post("/person")
def create_person(item: PersonSchema):
    with Session(engine) as session:
        person = Person(first_name=item.first_name, last_name=item.last_name)
        session.add(person)
        session.commit()

    return item
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Notes&lt;/STRONG&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Make sure to update the &lt;STRONG&gt;server_name&lt;/STRONG&gt;&amp;nbsp;and &lt;STRONG&gt;database_name&lt;/STRONG&gt;&amp;nbsp;variables in the code above with the names you used when creating the SQL server and the database.&lt;/LI&gt;
&lt;LI&gt;The &lt;STRONG&gt;provide_token&lt;/STRONG&gt; function is called every time the engine creates a new database connection. It injects a fresh Entra ID token so the connection can authenticate to the database. Fetching the token at connection time is necessary because a static token would eventually expire, and the app would then be unable to open new connections.&lt;/LI&gt;
&lt;/UL&gt;
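&lt;P&gt;As a standalone illustration of what &lt;STRONG&gt;provide_token&lt;/STRONG&gt; injects (using a dummy token string here rather than a real token from &lt;STRONG&gt;DefaultAzureCredential&lt;/STRONG&gt;), the access token is UTF-16-LE encoded and packed behind a 4-byte little-endian length prefix, which is the layout the ODBC driver expects for the access-token attribute:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import struct

SQL_COPT_SS_ACCESS_TOKEN = 1256  # connection attribute defined by Microsoft in msodbcsql.h


def pack_access_token(token):
    """Pack an access token as a 4-byte little-endian length prefix
    followed by the UTF-16-LE encoded token bytes."""
    token_bytes = token.encode("UTF-16-LE")
    return struct.pack(f"&amp;lt;I{len(token_bytes)}s", len(token_bytes), token_bytes)


# With a dummy 2-character token, the result is 4 length bytes + 4 token bytes
packed = pack_access_token("ab")&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;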
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Running Locally&lt;/H2&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;1. Create a virtual environment for the app&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;py -m venv .venv
.venv\scripts\activate&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;2. Install requirements&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;pip install -r requirements.txt&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;3.&amp;nbsp;Run the &lt;STRONG&gt;app.py&lt;/STRONG&gt; file in Visual Studio Code.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;uvicorn app:app --reload&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;4.&amp;nbsp;Open the Swagger UI at &lt;A href="http://127.0.0.1:8000/docs" target="_blank" rel="noopener"&gt;http://127.0.0.1:8000/docs&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;5. Create a new user using the &lt;STRONG&gt;Create Person&amp;nbsp;&lt;/STRONG&gt;endpoint&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;6. Try &lt;STRONG&gt;Get Person&amp;nbsp;&lt;/STRONG&gt;and &lt;STRONG&gt;Get Persons&amp;nbsp;&lt;/STRONG&gt;endpoints&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Running on Azure&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;1.&amp;nbsp;&lt;SPAN&gt;Use the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/webapp#az-webapp-up" target="_blank" rel="noopener" data-linktype="absolute-path"&gt;az webapp up&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;to deploy the code to App Service.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;az webapp up --resource-group &amp;lt;resource-group-name&amp;gt; --name &amp;lt;web-app-name&amp;gt;   &lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;2.&amp;nbsp;&lt;SPAN&gt;Use the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/webapp/config#az-webapp-config-set" target="_blank" rel="noopener" data-linktype="absolute-path"&gt;az webapp config set&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;command to configure App Service to use the&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;EM&gt;start.sh&lt;/EM&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;file.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;az webapp config set --resource-group &amp;lt;resource-group-name&amp;gt; --name &amp;lt;web-app-name&amp;gt; --startup-file start.sh&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN&gt;3.&amp;nbsp;Use the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/webapp/identity#az-webapp-identity-assign" target="_blank" rel="noopener" data-linktype="absolute-path"&gt;az webapp identity assign&lt;/A&gt;&amp;nbsp;command to enable a system-assigned managed identity for the App Service. This is needed because we will grant database access to this identity, with specific roles.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;az webapp identity assign --resource-group &amp;lt;resource-group-name&amp;gt; --name &amp;lt;web-app-name&amp;gt;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;4. Grant permissions to the web app identity by running the SQL commands below on your database. The first commanda creates a database user for the web app and the following ones sets data reader/writer roles (you can find more details about roles at &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/security/authentication-access/database-level-roles?view=sql-server-ver16" target="_blank" rel="noopener"&gt;Database-level roles - SQL Server | Microsoft Learn&lt;/A&gt;). By doing this we guarantee that the web app has the least privilege.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;CREATE USER [&amp;lt;web-app-name&amp;gt;] FROM EXTERNAL PROVIDER
ALTER ROLE db_datareader ADD MEMBER [&amp;lt;web-app-name&amp;gt;]
ALTER ROLE db_datawriter ADD MEMBER [&amp;lt;web-app-name&amp;gt;]&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;5. Open the Swagger UI at&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://&amp;lt;web-app-name&amp;gt;.azurewebsites.net/docs" target="_blank" rel="noopener"&gt;https://&amp;lt;web-app-name&amp;gt;.azurewebsites.net/docs&lt;/A&gt;&amp;nbsp;and test the endpoints again&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;References&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/azure-sql-python-quickstart?view=azuresql&amp;amp;tabs=windows%2Csql-auth" target="_blank" rel="noopener"&gt;Connect to and query Azure SQL Database using Python and the pyodbc library - Azure SQL Database | Microsoft Learn&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 31 Oct 2024 16:02:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/connecting-to-azure-sql-database-using-sqlalchemy-and-microsoft/ba-p/4259772</guid>
      <dc:creator>franklinlindemberg</dc:creator>
      <dc:date>2024-10-31T16:02:44Z</dc:date>
    </item>
    <item>
      <title>Securing Your Data Pipelines: Best Practices for Fabric Data Factory</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/securing-your-data-pipelines-best-practices-for-fabric-data/ba-p/4262672</link>
<description>&lt;H3&gt;&lt;FONT size="5" color="#7928A1"&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;ARTICLE class="w-full text-token-text-primary focus-visible:outline-2 focus-visible:outline-offset-[-4px]" dir="auto" data-testid="conversation-turn-5" data-scroll-anchor="true"&gt;
&lt;DIV class="text-base py-[18px] px-3 md:px-4 m-auto w-full md:px-5 lg:px-4 xl:px-5"&gt;
&lt;DIV class="mx-auto flex flex-1 gap-4 text-base md:gap-5 lg:gap-6 md:max-w-3xl lg:max-w-[40rem] xl:max-w-[48rem]"&gt;
&lt;DIV class="group/conversation-turn relative flex w-full min-w-0 flex-col agent-turn"&gt;
&lt;DIV class="flex-col gap-1 md:gap-3"&gt;
&lt;DIV class="flex max-w-full flex-col flex-grow"&gt;
&lt;DIV class="min-h-8 text-message flex w-full flex-col items-end gap-2 whitespace-normal break-words [.text-message+&amp;amp;]:mt-5" dir="auto" data-message-author-role="assistant" data-message-id="2f009e5a-1660-4b5a-acd1-52651b0f61b3"&gt;
&lt;DIV class="flex w-full flex-col gap-1 empty:hidden first:pt-[3px]"&gt;
&lt;DIV class="markdown prose w-full break-words dark:prose-invert light"&gt;
&lt;P&gt;In today’s data-driven world, securing data pipelines is crucial to protect sensitive information and ensure compliance with regulatory requirements. Microsoft Fabric Data Factory experience (FDF) offers a robust set of security features to safeguard your data as it moves through various stages of your data workflows. In this post, we’ll explore key security features in FDF and demonstrate how to implement them with a practical example.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="5" color="#7928A1"&gt;&lt;STRONG&gt;Key Security Features in Fabric Data Factory&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;Before diving into the implementation, let’s take a look at the primary security mechanisms that Fabric Data Factory provides:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Authentication&lt;/STRONG&gt;: &lt;SPAN&gt;Fabric Data Factory relies on Microsoft Entra ID to authenticate users (or service principals). Once authenticated, users receive&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/identity-platform/access-tokens" target="_blank" rel="noopener" data-linktype="absolute-path"&gt;access tokens&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;from Microsoft Entra ID. Fabric uses these tokens to perform operations in the context of the user.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;STRONG&gt;Authorization:&amp;nbsp;&lt;/STRONG&gt;All Fabric permissions are stored centrally by the metadata platform. Fabric services query the metadata platform on demand in order to retrieve authorization information and to authorize and validate user requests.&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Data Encryption&lt;/STRONG&gt;: &lt;BR /&gt;&amp;nbsp;Data at rest:&amp;nbsp;&lt;SPAN&gt;All Fabric data stores are encrypted at rest by using Microsoft-managed keys. Fabric data includes customer data as well as system data and metadata.&lt;/SPAN&gt;&lt;BR /&gt;Data in transit:&amp;nbsp;&lt;SPAN&gt;Data in transit between Microsoft services is always encrypted with at least TLS 1.2. Fabric negotiates to TLS 1.3 whenever possible. Traffic between Microsoft services always routes over the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/networking/microsoft-global-network" target="_blank" rel="noopener" data-linktype="absolute-path"&gt;Microsoft global network&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Managed Identities: A&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;Fabric workspace identity is an automatically managed service principal that can be associated with a Fabric workspace. Fabric workspaces with a workspace identity can securely read or write to firewall-enabled Azure Data Lake Storage Gen2 accounts through&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/security/security-trusted-workspace-access" target="_blank" rel="noopener" data-linktype="relative-path"&gt;trusted workspace access&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;for OneLake shortcuts. Fabric items can use the identity when connecting to resources that support Microsoft Entra authentication. Fabric uses workspace identities to obtain Microsoft Entra tokens without the customer having to manage any credentials.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Key Vault Integration&lt;/STRONG&gt;: &lt;SPAN&gt;Unfortunately, as of this writing, Key Vault integration is&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;not supported&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;in Dataflow Gen2 / data pipeline connections in Fabric.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Network Security&lt;/STRONG&gt;:&amp;nbsp;&amp;nbsp;&lt;SPAN&gt;When you connect to Pipeline via private link, you can use the data pipeline to load data from any data source with public endpoints into a private-link-enabled Microsoft Fabric Lakehouse. Customers can also author and operationalize data pipelines with activities, including Notebook and Dataflow activities, using the private link. However, copying data from and into a Data Warehouse isn't currently possible when Fabric's private link is enabled.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Now, let’s walk through an example that demonstrates how to secure a data pipeline in Fabric Data Factory (FDF).&lt;/P&gt;
&lt;HR /&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;&lt;FONT color="#7928A1"&gt;Example: Securing a Data Pipeline in Fabric Data Factory&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/H3&gt;
&lt;H4&gt;&lt;FONT color="#7928A1"&gt;&lt;STRONG&gt;Scenario:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H4&gt;
&lt;P&gt;You are setting up a data pipeline that moves sensitive data from ADLS Gen2 to a Fabric Lakehouse. To ensure that this pipeline is secure, you will:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;FONT color="#7928A1"&gt; &lt;A href="#community--1-managedidentity" target="_self"&gt;Use a managed identity to authenticate Fabric data factory &lt;/A&gt;&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;FONT color="#7928A1"&gt; &lt;A href="#community--1-adlsconfig" target="_self"&gt;Configure trusted workspace access in ADLS Gen2 &lt;/A&gt;&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;FONT color="#7928A1"&gt; &lt;A href="#community--1-copydata" target="_self"&gt;Copy data from ADLS to Fabric Lakehouse using secured pipeline &lt;/A&gt;&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;&amp;nbsp;&lt;FONT size="4" color="#7928A1"&gt;&lt;STRONG&gt;Prerequisites:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;- &lt;STRONG&gt;Tools and Technologies Needed:&lt;/STRONG&gt;&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Azure Data Lake Gen2 (ADLS) storage account.&lt;/LI&gt;
&lt;LI&gt;Working knowledge of Azure Data Factory.&lt;/LI&gt;
&lt;LI&gt;Fabric workspace.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;A target="_blank" name="managedidentity"&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;FONT size="4" color="#7928A1"&gt;&lt;STRONG&gt;Steps:&lt;BR /&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;Step 1: Enable Managed Identity in Workspace level for Fabric Data Factory pipeline&lt;/H4&gt;
&lt;P&gt;Workspace identity can be&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/security/workspace-identity#create-and-manage-a-workspace-identity" target="_blank" rel="noopener" data-linktype="self-bookmark"&gt;created and deleted by workspace admins&lt;/A&gt;. The workspace identity has the workspace contributor role on the workspace.&lt;/P&gt;
&lt;P&gt;Workspace identity is supported for authentication to target resources in connections. Only users with an admin, member, or contributor role in the workspace can configure the workspace identity for authentication in connections.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Managed identities allow Fabric Data Factory to securely authenticate to other Azure services without hardcoding credentials.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Navigate to the workspace and open the workspace settings.&lt;/LI&gt;
&lt;LI&gt;Select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Workspace identity&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;tab.&lt;/LI&gt;
&lt;LI&gt;Select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;+ Workspace identity&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;button.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Once enabled, this identity can be used to access resources like Azure SQL Database securely.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A target="_blank" name="adlsconfig"&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 2:&amp;nbsp;&lt;SPAN&gt;Configure trusted workspace access in ADLS Gen2&lt;/SPAN&gt;&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Sign in to the Azure portal and go to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Custom deployment&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Choose&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG style="font-family: inherit;"&gt;Build your own template in the editor&lt;/STRONG&gt;&lt;SPAN&gt;. For a sample ARM template that creates a resource instance rule, see&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A style="font-family: inherit; background-color: #ffffff;" href="https://learn.microsoft.com/en-us/fabric/security/security-trusted-workspace-access#arm-template-sample" target="_blank" rel="noopener" data-linktype="self-bookmark"&gt;ARM template sample&lt;/A&gt;&lt;SPAN&gt;&lt;SPAN&gt;.&lt;BR /&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Create the resource instance rule in the editor. When done, choose&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG style="font-family: inherit;"&gt;Review + Create&lt;/STRONG&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;On the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG style="font-family: inherit;"&gt;Basics&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;tab that appears, specify the required project and instance details. When done, choose&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG style="font-family: inherit;"&gt;Review + Create&lt;/STRONG&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;On the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG style="font-family: inherit;"&gt;Review + Create&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;tab that appears, review the summary and then select&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG style="font-family: inherit;"&gt;Create&lt;/STRONG&gt;&lt;SPAN&gt;. The rule will be submitted for deployment.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;When deployment is complete, you'll be able to go to the resource.&lt;/P&gt;
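&lt;P&gt;For orientation only, the resource instance rule created by the linked ARM template sample has roughly the following shape (the tenant ID and Fabric workspace ID are placeholders, and the exact property layout should be taken from the ARM template sample referenced above):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;"resourceAccessRules": [
    {
        "tenantId": "&amp;lt;tenant-id&amp;gt;",
        "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/*/providers/Microsoft.Fabric/workspaces/&amp;lt;workspace-id&amp;gt;"
    }
]&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;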
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A target="_blank" name="copydata"&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 3: Create a pipeline to connect to ADLS gen2 and copy data to Fabric Lakehouse&lt;/H4&gt;
&lt;P&gt;With this pipeline, we will connect directly to a firewall-enabled ADLS Gen2 account that has trusted workspace access enabled.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Navigate to your workspace and select &lt;STRONG&gt;New item&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Create a new pipeline.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;In your pipeline, add a Copy activity to the canvas.&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;On the Copy activity &lt;STRONG&gt;Source&lt;/STRONG&gt; tab, choose ADLS Gen2 as the data source and connect to it.&lt;BR /&gt;&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;On the &lt;STRONG&gt;Destination&lt;/STRONG&gt; tab, connect to the Lakehouse and select a table.&lt;BR /&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4 id="toc-hId-853948988"&gt;Step 4: Results&lt;/H4&gt;
&lt;H4&gt;After copy activity finish running, you can view the data in your Lakehouse&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;BR /&gt;&lt;FONT color="#7928A1"&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Conclusion&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;Securing your data pipelines in Fabric Data Factory is essential to maintaining the integrity, confidentiality, and availability of your data. By leveraging features such as managed identities and trusted workspace access, you can build a robust security framework around your data flows.&lt;/P&gt;
&lt;P&gt;Do you have any other tips for securing Fabric Data Factory? Let me know in the comments!&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Follow me on LinkedIn:&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/sally-dabbah/" target="_self" rel="nofollow noopener noreferrer"&gt;Sally Dabbah | LinkedIn&lt;/A&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/ARTICLE&gt;</description>
      <pubDate>Tue, 15 Oct 2024 08:16:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/securing-your-data-pipelines-best-practices-for-fabric-data/ba-p/4262672</guid>
      <dc:creator>Sally_Dabbah</dc:creator>
      <dc:date>2024-10-15T08:16:49Z</dc:date>
    </item>
    <item>
      <title>Making Searching and Curating Data Assets in Microsoft Purview easier.</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/making-searching-and-curating-data-assets-in-microsoft-purview/ba-p/4236187</link>
      <description>&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;1. Introduction.&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Today, IT infrastructure teams store and maintain data assets, even though IT doesn't own or use the data.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;There's a disconnect between how data needs to be discovered and maintained within the business, and the teams that maintain it.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Without standardized procedures for data governance, data handling often relies on manual processes, leading to inefficiencies, data loss, insufficient data protection and higher operational costs.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;Microsoft Purview&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; is designed to help enterprises get the most value from their existing information assets.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;catalog&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; makes data sources easily discoverable and understandable by the users who manage the data.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;With Purview, organizations can gain insights into &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;data lineage, data usage, and data connections&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;, helping them to comply with regulations such as GDPR, CCPA, and others.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Microsoft Purview provides a cloud-based service into which you can &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;register data sources&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;. During registration, the data remains in its existing location, but a copy of its metadata is added to Microsoft Purview, along with a reference to the data source location.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;After you register a data source, you can scan it and enrich its metadata.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Discovering and understanding data sources and their use is the primary purpose of registering the sources.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;In this article, we describe smart features that allow you to search previously scanned data assets using &lt;/SPAN&gt;&lt;STRONG&gt;&lt;I&gt;&lt;SPAN&gt;natural language queries&lt;/SPAN&gt;&lt;/I&gt;&lt;/STRONG&gt;&lt;SPAN&gt;, along with &lt;/SPAN&gt;&lt;STRONG&gt;&lt;I&gt;&lt;SPAN&gt;automated metadata enrichment&lt;/SPAN&gt;&lt;/I&gt;&lt;/STRONG&gt;&lt;SPAN&gt; for curating these assets.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;2. Data Catalog.&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;Data Catalog&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; is a Data Governance solution that enables business experts and technical data owners to collaborate and contribute to a shared understanding of data.&amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Among other functionalities, in Data Catalog you can use &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;data search&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; to discover data assets from multiple data sources, including Microsoft Fabric items and workspaces (see &lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/fasttrack-for-azure/exploring-the-relationship-between-microsoft-fabric-and/ba-p/4159159" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Exploring the Relationship Between Microsoft Fabric and Microsoft Purview: What You Need to Know&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;I&gt;&lt;SPAN&gt;Microsoft Purview’s Smart Data Searching&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN&gt; primarily works with scanned data assets. For unscanned data assets, manual classification and tagging can be done, but they may not fully leverage the capabilities of Smart Data Searching. For real-time or “Live View” data, you would typically need to scan the data source first to make it searchable in Microsoft Purview.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;2.1 Smart Data Search.&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Once the metadata is ingested into the Microsoft Purview Data Map, it can be searched using Microsoft Purview’s Smart Data Searching in the Data Catalog.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Now you can use &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN&gt;natural language description&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN&gt; for data assets searching in the Microsoft Purview Catalog.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Go to the new Microsoft Purview Portal: &lt;/SPAN&gt;&lt;A href="https://purview.microsoft.com/" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;https://purview.microsoft.com&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Select the &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;Data Catalog&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; solution and then, &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;Data Search&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Once you enter your search, Microsoft Purview returns a list of data assets and glossary terms that match the entered keywords, provided the user has data reader permissions for them.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;In the example below, the search phrase was “I want to know the data related with diseases in Latam”.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;You should know that the search returns the data assets in the collection(s) that best match the query. If a collection contains data assets that match the phrase, all of its scanned items may be returned, even if some items do not match exactly.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The correspondence between search results and desired results depends on your Data Map design, the registered data sources, and the scope applied in scanning, which helps narrow down the most common searches in your business.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;See, for example, this &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/concept-best-practices-collections#example-4-multiregion-business-functions" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;multiregion, business-function design for your Data Map&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The Microsoft Purview relevance engine sorts through all the matches and ranks them by their estimated usefulness to the user.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Many factors determine an asset’s relevance score, and the Microsoft Purview search team continuously tunes the relevance engine to ensure the top search results are valuable to you.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
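&lt;P&gt;&lt;SPAN&gt;For readers who want to drive the same search programmatically, the sketch below builds the JSON body for a keyword query. The endpoint path, API version, and account name are placeholder assumptions based on the public Purview discovery (search) REST API; verify the exact contract against the current API reference before relying on it.&lt;/SPAN&gt;&lt;/P&gt;

```python
import json

# Placeholder assumptions -- substitute values for your environment. The
# endpoint path and api-version follow the Purview discovery-query REST API
# shape but should be verified against the current documentation.
PURVIEW_ENDPOINT = "https://YOUR-ACCOUNT.purview.azure.com"
QUERY_PATH = "/datamap/api/search/query?api-version=2023-09-01"

def build_search_request(keywords, limit=25, collection_id=None):
    """Build the JSON body for a keyword search, optionally scoped to a collection."""
    body = {"keywords": keywords, "limit": limit}
    if collection_id is not None:
        # Scoping to a collection narrows results much like the
        # Collection filter in the portal's Filters pane.
        body["filter"] = {"collectionId": collection_id}
    return body

request_body = build_search_request("diseases Latam", limit=10)
print(json.dumps(request_body, indent=2))
```

&lt;P&gt;&lt;SPAN&gt;POSTing this body to the search endpoint with a valid Entra ID token should return the same ranked matches the portal shows, subject to the caller's data reader permissions.&lt;/SPAN&gt;&lt;/P&gt;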
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;2.2 Browse by applying filters.&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Once you enter the search phrase and the results load, a Filters pane appears where you can apply the following filtering criteria:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Asset Type&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Data Source Type&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Collection&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Classification&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Contact&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Endorsement&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Assigned Term&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Sensitivity Label&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Rating&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;The next figure shows the Asset Type filter:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;Filtering by &lt;/SPAN&gt;&lt;SPAN class=""&gt;“&lt;/SPAN&gt;&lt;SPAN class=""&gt;Data&lt;/SPAN&gt;&lt;SPAN class=""&gt;”&lt;/SPAN&gt;&lt;SPAN class=""&gt;,&lt;/SPAN&gt;&lt;SPAN class=""&gt; you can refine your search selecting one or more data&lt;/SPAN&gt;&lt;SPAN class=""&gt; asset&lt;/SPAN&gt;&lt;SPAN class=""&gt; types according to your referred data source:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;Next figure shows the assets of type “Report”:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;Then select any filter category you would like to narrow your results &lt;/SPAN&gt;&lt;SPAN class=""&gt;by and&lt;/SPAN&gt;&lt;SPAN class=""&gt; select any values you would like to narrow results to. For some filters, you can select the ellipses to choose between an AND condition or an OR condition&lt;/SPAN&gt;&lt;SPAN class=""&gt;, as next figure shows:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
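&lt;P&gt;&lt;SPAN&gt;Conceptually, the AND/OR choice behind the ellipses combines filter criteria as a small boolean tree. The sketch below, using made-up asset fields rather than the actual Purview schema, shows how such a tree evaluates against a single asset:&lt;/SPAN&gt;&lt;/P&gt;

```python
# Illustrative only: the asset dict and field names are invented examples,
# not the Purview API schema.

def matches(asset, condition):
    """Recursively evaluate a nested and/or filter tree against one asset."""
    if "and" in condition:
        return all(matches(asset, c) for c in condition["and"])
    if "or" in condition:
        return any(matches(asset, c) for c in condition["or"])
    # Leaf condition: a single field/value pair.
    field, value = next(iter(condition.items()))
    return asset.get(field) == value

asset = {"assetType": "Report", "classification": "PII", "rating": 4}

and_filter = {"and": [{"assetType": "Report"}, {"classification": "PII"}]}
or_filter = {"or": [{"assetType": "Table"}, {"classification": "PII"}]}

print(matches(asset, and_filter))  # True: both criteria hold
print(matches(asset, or_filter))   # True: one criterion is enough
```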
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;2.3 &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;STRONG&gt;&lt;SPAN&gt;Curation process.&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The process of &lt;/SPAN&gt;&lt;SPAN&gt;contributing to the catalog by tagging, documenting, and annotating data sources that have already been registered is known as &lt;/SPAN&gt;&lt;STRONG&gt;&lt;I&gt;&lt;SPAN&gt;metadata curation&lt;/SPAN&gt;&lt;/I&gt;&lt;/STRONG&gt;&lt;SPAN&gt; [&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/metadata-curation" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Metadata curation in Microsoft Purview | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;]&lt;/SPAN&gt;&lt;SPAN&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The curation process is facilitated by selecting one or more data assets returned in the search that are assumed to be the curator's responsibility.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;For example, in the next figure we show two selected data assets:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;By clicking on “View selected,” you can access a screen to start adding attributes to the data asset's metadata.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;Click on “Bulk edit”:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Selecting an attribute, you can add new values, replace an existing value with another, or remove values.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;You can add as many attributes as you need.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Depending on the attribute, you can manage the proper values.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;Purview’s &lt;/SPAN&gt;&lt;SPAN class=""&gt;Copilot can help enrich metadata by suggesting &lt;/SPAN&gt;&lt;SPAN class=""&gt;additional&lt;/SPAN&gt;&lt;SPAN class=""&gt; context, classifications, and annotations based on the data asset's content and usage&lt;/SPAN&gt;&lt;SPAN class=""&gt;, as well as &lt;/SPAN&gt;&lt;SPAN class=""&gt;can ensure consistency in metadata definitions and standards across the organization, reducing discrepancies and improving data quality.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Selecting “Suggestions”, you can review suggestions derived from your business concepts.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;You should know that the AI models use general internet knowledge, so they will not return company-specific or custom definitions or terms. All terms should be stewarded before being published to ensure that each term and definition aligns with company usage and the specific knowledge about the term. &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/microsoft-purview-data-catalog-responsible-ai-faq" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Microsoft Purview Data Catalog Responsible AI FAQ (Preview) | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Any terms and definitions provided via suggestions should be reviewed and aligned with the company's specific language standards. When a term is selected and created, it will be in draft status, allowing the steward to complete and finalize the term before deciding to publish it.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;Conclusions.&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Smart data searching and automated metadata enrichment significantly enhance the cataloging process, making it more efficient, comprehensive, and insightful.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;These advanced capabilities not only improve data discoverability and governance but also empower users with richer, more contextualized information, leading to more informed and effective decision-making. &lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;Learn more:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/concept-best-practices-collections" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Microsoft Purview collections architecture and best practices | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/concept-scans-and-ingestion" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Scans and ingestion | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/how-to-search-catalog" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;How to search the Data Catalog | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.youtube.com/watch?v=iOcrzFbo4Uc" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Discover data with natural language search - YouTube&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/metadata-curation" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Metadata curation in Microsoft Purview | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/concept-best-practices-annotating-data" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Best practices for describing data in Microsoft Purview | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/how-to-create-manage-glossary-terms" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;How to create and manage glossary terms (Preview) | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.youtube.com/watch?v=4EsxbnnEAvU&amp;amp;t=87s" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Curate your data with Business Concepts (youtube.com)&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/how-to-manage-data-catalog-access-policies" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;How to configure and manage data catalog access policies (Preview) | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Sep 2024 04:32:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/making-searching-and-curating-data-assets-in-microsoft-purview/ba-p/4236187</guid>
      <dc:creator>Eduardo_Noriega</dc:creator>
      <dc:date>2024-09-30T04:32:33Z</dc:date>
    </item>
    <item>
      <title>Implementing Route Summarization in Azure VMware Solution</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/implementing-route-summarization-in-azure-vmware-solution/ba-p/4248155</link>
      <description>&lt;H3&gt;&lt;SPAN style="font-size: x-large;"&gt;&lt;STRONG&gt;What is Route Summarization?&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;Route summarization, also known as route aggregation, is a technique used in networking to combine multiple routes into a single, summarized route. This helps reduce the size of routing tables and simplifies the routing process.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN style="font-size: x-large;"&gt;&lt;STRONG&gt;Why Use Route Summarization in Azure VMware Solution?&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;Route summarization for Azure VMware Solution (AVS) is essential in the following scenario:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Route Tables with a 400 UDR Route Limitation&lt;/STRONG&gt;: If you need to direct AVS workload segments through a Network Virtual Appliance (NVA) like Azure Firewall or a third-party firewall, you must create a User-Defined Route (UDR) for each AVS segment individually. This can quickly become cumbersome if your AVS environment has over 400 segments, as there is a 400 UDR route limitation per Route Table.&lt;/P&gt;
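&lt;P&gt;To make the scale problem concrete, here is a short, hypothetical illustration using Python's standard ipaddress module: 450 contiguous /24 segments would each need their own UDR, blowing past the 400-route limit, while collapsing them into aligned supernets reduces the count to a handful.&lt;/P&gt;

```python
import ipaddress

# Hypothetical estate: 450 contiguous /24 workload segments starting at
# 10.0.0.0/24. Routing each through an NVA needs one UDR per segment,
# which exceeds the 400-route limit per route table.
segments = [ipaddress.ip_network(f"10.{i // 256}.{i % 256}.0/24") for i in range(450)]

# Collapse adjacent segments into the minimal set of aligned supernets.
summaries = list(ipaddress.collapse_addresses(segments))

print(len(segments))                # 450 routes without summarization
print([str(s) for s in summaries])  # a handful of summary routes cover them all
```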
&lt;H3&gt;&lt;SPAN style="font-size: x-large;"&gt;&lt;STRONG&gt;Route Summarization in NSX&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;In AVS, NSX provides network virtualization to create and manage virtual networks and security. Additionally, you can set up route summarization directly within NSX. NSX consists of two gateway routers: a Tier-1 and a Tier-0. The Tier-0 gateway connects to external networks and can summarize routes before advertising them to the physical network, thereby propagating the summarized routes back to Azure and on-premises. However, since&amp;nbsp;Azure VMware Solution is a managed service, customers do not have Read/Write NSX permissions to modify configurations on the Tier-0 gateway. Therefore, any route summarization must be performed at the Tier-1 gateway level.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you have contiguous workload segments connected to your NSX Tier-1 gateway, summarization is straightforward. To enable route summarization, we suppress AVS from advertising the specific routes and advertise only the summarized route(s). It is therefore crucial that the summary routes cover all of your workload segments; any segment left uncovered will lose connectivity.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: #df0000;"&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; When using the Tier-1 gateway for summarization, only Workload Segments can be summarized; the AVS /22 Management address cannot be summarized. However, with the Virtual WAN Route Maps feature (still in Public Preview at the time of this writing), you will be able to summarize both the /22 Management address block and Workload Segments. Once the Virtual WAN Route Maps feature becomes generally available, I will explore this topic further in a future blog post.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN style="font-size: x-large;"&gt;&lt;STRONG&gt;Scenario Overview&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;Using the topology illustrated below, I will guide you through the step-by-step process of deploying summarization from the NSX T1 gateway. In my scenario, I have a Virtual WAN Hub deployed, which includes an ExpressRoute Gateway. This gateway in the Hub-VNet connects to both the Azure VMware Solution (AVS) and On-Premises environments. Additionally, the Hub has a VNet peering to a Spoke VNet. There is also a Global Reach connection between AVS and On-Prem, ensuring connectivity between the two.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: #df0000;"&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; While my example utilizes VWAN, the summarization steps and behavior remain consistent with those of a traditional hub-and-spoke topology.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In AVS, there are four workload segments. Each local segment in NSX is configured as a /24 subnet and is connected to the same Tier-1 gateway.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Segment 1: 192.168.100.0/24&lt;BR /&gt;Segment 2: 192.168.101.0/24&lt;BR /&gt;Segment 3: 192.168.102.0/24&lt;BR /&gt;Segment 4: 192.168.103.0/24&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The goal is to stop advertising these four specific routes to both Azure and on-premises networks. Instead, we’ll only advertise the summary route 192.168.100.0/22, which covers all four segments.&lt;/P&gt;
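&lt;P&gt;Before suppressing the specific routes, it is worth double-checking that the planned summary really contains every segment. A quick sketch with Python's standard ipaddress module confirms the coverage for this scenario:&lt;/P&gt;

```python
import ipaddress

# The planned summary route and the four workload segments from the scenario.
summary = ipaddress.ip_network("192.168.100.0/22")
segments = [
    ipaddress.ip_network("192.168.100.0/24"),  # Segment 1
    ipaddress.ip_network("192.168.101.0/24"),  # Segment 2
    ipaddress.ip_network("192.168.102.0/24"),  # Segment 3
    ipaddress.ip_network("192.168.103.0/24"),  # Segment 4
]

for seg in segments:
    # subnet_of() is True when the segment falls inside the summary.
    print(seg, "covered:", seg.subnet_of(summary))  # all four print True
```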
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: #df0000;"&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;&amp;nbsp;Route Summarization should not contain networks that are extended using HCX.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN style="font-size: x-large;"&gt;&lt;STRONG&gt;Before configuring Route Summarization&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;As indicated by the blue arrows, the four routes listed below are being advertised from AVS to the VWAN Hub ExpressRoute Gateway. These routes are propagated to both the VWAN Hub and the Spoke VNet. Additionally, the four routes are advertised to on-premises via Global Reach.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;STRONG&gt;VWAN Hub Effective Routes before summarization&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;As highlighted below, I am currently learning the /24 routes on the VWAN Effective Routes from AVS.&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: x-large;"&gt;&lt;STRONG&gt;Summarization Steps&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;1. Log into NSX and navigate to&amp;nbsp;&lt;STRONG&gt;Networking &amp;gt; Tier-1 Gateways&lt;/STRONG&gt;. Locate your Tier-1 Gateway where all your workload segments are connected. Click on the three dots (circled in red) and select&amp;nbsp;&lt;STRONG&gt;Edit&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV id="tinyMceEditorjasonmedina_1" class="mceNonEditable lia-copypaste-placeholder"&gt;&lt;/DIV&gt;
&lt;P&gt;2. Scroll down and expand the&amp;nbsp;&lt;STRONG&gt;Route Advertisement&lt;/STRONG&gt;&amp;nbsp;section.&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; Click the icon next to&amp;nbsp;&lt;STRONG&gt;Set Route Advertisement Rules&lt;/STRONG&gt;&amp;nbsp;(circled in red).&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;3.&amp;nbsp;Click &lt;STRONG&gt;Add Route Advertisement Rule&lt;/STRONG&gt;.&lt;BR /&gt;&amp;nbsp; &amp;nbsp;Create a name for your summary route. In my example, I used “Summary-Route.”&lt;BR /&gt;&amp;nbsp; &amp;nbsp;Add the summary route you want to advertise under Subnets. I used 192.168.100.0/22. Make sure to hit Enter after typing in your summary route so it appears circled in blue as shown in the diagram.&lt;BR /&gt;&amp;nbsp; &amp;nbsp;Click Add, then click Save.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;4.&amp;nbsp;Under the&amp;nbsp;&lt;STRONG&gt;T1 Route Advertisement&lt;/STRONG&gt;&amp;nbsp;section, disable&amp;nbsp;&lt;STRONG&gt;All Connected Segments &amp;amp; Service Ports&lt;/STRONG&gt;&amp;nbsp;as illustrated in the diagram below (circled in red).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: #df0000;"&gt;&lt;STRONG&gt;IMPORTANT:&lt;/STRONG&gt;&amp;nbsp;Ensure all your connected segments are included in your summary route(s). Any connected segment not covered by a summary route will lose connectivity.&amp;nbsp;For example, a summary route of 192.168.100.0/22 covers segments 192.168.100.0/24 to 192.168.103.0/24. If an additional segment is configured as 192.168.104.0/24, it would not be covered by the 192.168.100.0/22 summary route. Since specific workload segments are suppressed and only summary routes are advertised, the 192.168.104.0/24 segment would lose connectivity unless a summary route is created for it.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
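&lt;P&gt;The coverage check described in the warning above can be automated. This sketch, again using Python's standard ipaddress module, flags the hypothetical 192.168.104.0/24 segment that the /22 summary does not contain:&lt;/P&gt;

```python
import ipaddress

# Summary routes actually advertised, and all connected workload segments
# (including the extra 192.168.104.0/24 from the example above).
summaries = [ipaddress.ip_network("192.168.100.0/22")]
segments = [ipaddress.ip_network(f"192.168.{o}.0/24") for o in (100, 101, 102, 103, 104)]

# Any segment not contained in some summary route will lose connectivity
# once "All Connected Segments" advertisement is disabled.
uncovered = [str(seg) for seg in segments if not any(seg.subnet_of(s) for s in summaries)]

print(uncovered)  # ['192.168.104.0/24'] -- needs its own summary route
```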
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;5. Click Save.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;6. Ensure that you are now receiving the summary route(s) in Azure, or from your on-premises environment if you are using Global Reach. As shown in the diagram below, the NSX T1 in AVS will exclusively advertise the summarized route 192.168.100.0/22. This route will be propagated to both Azure and on-premises environments via Global Reach.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;VWAN Hub Effective Routes after summarization&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;As highlighted below, I am now learning the /22 summarized route on the VWAN Effective Routes from AVS.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Nov 2024 12:52:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/implementing-route-summarization-in-azure-vmware-solution/ba-p/4248155</guid>
      <dc:creator>jasonmedina</dc:creator>
      <dc:date>2024-11-19T12:52:32Z</dc:date>
    </item>
    <item>
      <title>Entra ID federation with Google Workspace</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/entra-id-federation-with-google-workspace/ba-p/4244559</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: x-large;"&gt;&lt;STRONG&gt;Entra ID federation with&amp;nbsp;&lt;/STRONG&gt;&lt;STRONG&gt;Google Workspace &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Google Workspace federation allows you to manage user identities in your Entra ID tenants while authenticating these users through Google. This can be beneficial if your company wants to use a single source of identities across different cloud platforms.&lt;/P&gt;
&lt;P&gt;This article covers the scenario where your domain is already added and verified in Entra ID.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Requirements&lt;/H3&gt;
&lt;P&gt;To use Google Workspace federation, you need to ensure that your federated users are created in the Entra ID directory. This can be done through various methods such as auto-provisioning or using the Graph API.&lt;/P&gt;
&lt;P&gt;Keep in mind that the &lt;STRONG&gt;out-of-the-box federation for Google only works for gmail.com personal accounts&lt;/STRONG&gt;. In order to federate with &lt;STRONG&gt;work accounts managed in Google Workspace, you have to configure SAML IdP federation&lt;/STRONG&gt;.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-13"&gt;&lt;STRONG&gt;Important:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Prior to configuring the federation, make sure that the domain is added under Verified domains in Entra ID. Note that the domain is not yet shown as "Federated".&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN style="font-size: x-large;"&gt;Configuring SAML Federation on Google Workspace side&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;To configure SAML federation in Google Workspace, follow these steps:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;1. Create a Web Application in Google Admin Panel:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Navigate to&amp;nbsp;&lt;A href="https://admin.google.com/" target="_blank" rel="noopener"&gt;https://admin.google.com/&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Go to&amp;nbsp;&lt;STRONG&gt;Apps&lt;/STRONG&gt;&amp;nbsp;-&amp;gt;&amp;nbsp;&lt;STRONG&gt;Web and mobile apps&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Click&amp;nbsp;&lt;STRONG&gt;Add app&lt;/STRONG&gt;&amp;nbsp;-&amp;gt;&amp;nbsp;&lt;STRONG&gt;Search for apps&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Search for "Microsoft Office 365" and install it.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2. Download Metadata:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;After installing the app, go to the app details and download the metadata.&lt;/LI&gt;
&lt;LI&gt;Save the IdP metadata file (GoogleIDPMetadata.xml) as it will be used to set up Microsoft Entra ID later.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG style="font-family: inherit;"&gt;3. Enable Auto-Provisioning:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Enable auto-provisioning inside the "Microsoft Office 365" app.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;4. Configure Service Provider Details:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;On the Service Provider details page:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Select the option&amp;nbsp;&lt;STRONG&gt;Signed response&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Verify that the&amp;nbsp;&lt;STRONG&gt;Name ID format&lt;/STRONG&gt;&amp;nbsp;is set to&amp;nbsp;&lt;STRONG&gt;PERSISTENT&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Under SAML attribute mapping, select&amp;nbsp;&lt;STRONG&gt;Basic Information&lt;/STRONG&gt;&amp;nbsp;and map&amp;nbsp;&lt;STRONG&gt;Primary email&lt;/STRONG&gt;&amp;nbsp;to&amp;nbsp;&lt;STRONG&gt;IDPEmail&lt;/STRONG&gt;.
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;5. Enable the App for Users in Google Workspace:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Go to&amp;nbsp;&lt;STRONG&gt;Apps&lt;/STRONG&gt;&amp;nbsp;-&amp;gt;&amp;nbsp;&lt;STRONG&gt;Web and mobile apps&lt;/STRONG&gt;&amp;nbsp;-&amp;gt;&amp;nbsp;&lt;STRONG&gt;Microsoft Office 365&lt;/STRONG&gt;&amp;nbsp;-&amp;gt;&amp;nbsp;&lt;STRONG&gt;User access&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Select&amp;nbsp;&lt;STRONG&gt;ON for everyone&lt;/STRONG&gt;&amp;nbsp;and save the changes.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN style="font-size: x-large;"&gt;Adding Federation for SAML Provider in Entra ID&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-13"&gt;! Even though federation with external IDP is available via Entra ID or Azure Portal, you have to use PowerShell to create federation for verified domains.&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using the IdP metadata XML file downloaded from Google Workspace, modify the&amp;nbsp;&lt;CODE&gt;$DomainName&lt;/CODE&gt;&amp;nbsp;variable in the following script to match your environment, and then run it in a PowerShell session:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser -Force Install-Module Microsoft.Graph -Scope CurrentUser Import-Module Microsoft.Graph $domainId = "&amp;lt;your domain name&amp;gt;" $xml = [Xml](Get-Content GoogleIDPMetadata.xml) $cert = -join $xml.EntityDescriptor.IDPSSODescriptor.KeyDescriptor.KeyInfo.X509Data.X509Certificate.Split() $issuerUri = $xml.EntityDescriptor.entityID $signinUri = $xml.EntityDescriptor.IDPSSODescriptor.SingleSignOnService | ? { $_.Binding.Contains('Redirect') } | % { $_.Location } $signoutUri = "https://accounts.google.com/logout" $displayName = "Google Workspace Identity" Connect-MGGraph -Scopes "Domain.ReadWrite.All", "Directory.AccessAsUser.All" $domainAuthParams = @{ DomainId = $domainId IssuerUri = $issuerUri DisplayName = $displayName ActiveSignInUri = $signinUri PassiveSignInUri = $signinUri SignOutUri = $signoutUri SigningCertificate = $cert PreferredAuthenticationProtocol = "saml" federatedIdpMfaBehavior = "acceptIfMfaDoneByFederatedIdp" } New-MgDomainFederationConfiguration @domainAuthParams&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To verify that the configuration is correct, you can use the following PowerShell command:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Get-MgDomainFederationConfiguration -DomainId $domainId |fl&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN style="font-size: x-large;"&gt;Test the federation&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To test the federation, navigate to &lt;A href="https://portal.azure.com" target="_blank" rel="noopener"&gt;https://portal.azure.com&lt;/A&gt; and sign in with a Google Workspace account:&lt;/P&gt;
&lt;P&gt;As the username, use the email address defined in Google Workspace.&amp;nbsp;The user is redirected to Google Workspace to sign in.&lt;BR /&gt;After Google Workspace authentication, the user is redirected back to Microsoft Entra ID and signed in.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN style="font-size: x-large;"&gt;Troubleshooting&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If you configured federation &lt;STRONG&gt;after&lt;/STRONG&gt; users were created in Entra ID, users may encounter error AADSTS51004:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;AADSTS51004: The user account XXX does not exist in the YYY directory. To sign into this application, the account must be added to the directory.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This error is most likely caused by an incorrect &lt;EM&gt;ImmutableId&lt;/EM&gt; property.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For Google Workspace federation, &lt;EM&gt;ImmutableId&lt;/EM&gt; must be set to the user's primary email address.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Follow these steps to update the ImmutableID:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Convert the federated user to a cloud-only user (update the UPN to a non-federated domain)&lt;/LI&gt;
&lt;LI&gt;Update the ImmutableId&lt;/LI&gt;
&lt;LI&gt;Convert the user back to a federated user&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Here's a PowerShell example to update the ImmutableId for a federated user:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser -Force Install-Module Microsoft.Graph -Scope CurrentUser Import-Module Microsoft.Graph Connect-MgGraph -Scopes 'User.Read.All', 'User.ReadWrite.All' #1. Convert the user from federated to cloud-only Update-MgUser -UserId test@example.com -UserPrincipalName test@example.onmicrosoft.com #2. Convert the user back to federated, while setting the immutableId Update-MgUser -UserId test@example.onmicrosoft.com -UserPrincipalName test@example.com -OnPremisesImmutableId 'test@example.com'&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: x-large;"&gt;Conclusion&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;In summary, Entra ID federation with Google Workspace allows seamless user identity management and authentication across different cloud platforms.&lt;/P&gt;
&lt;P&gt;Hit me up in comments if this worked for you!&lt;/P&gt;</description>
      <pubDate>Sun, 30 Nov 2025 21:41:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/entra-id-federation-with-google-workspace/ba-p/4244559</guid>
      <dc:creator>irina_kostina</dc:creator>
      <dc:date>2025-11-30T21:41:11Z</dc:date>
    </item>
    <item>
      <title>How to monitor applications by using OpenTelemetry on Azure Container Apps</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-to-monitor-applications-by-using-opentelemetry-on-azure/ba-p/4235035</link>
      <description>&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Overview&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;A id="user-content-overview" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#overview" target="_blank" rel="noopener" aria-label="Permalink: Overview"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;There are multiple methods available to monitor applications deployed on Azure Container Apps. While the official documentation outlines these techniques, this article aims to organize them for better understanding.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Although this guide is focused on Java Spring Boot applications, the sections regarding Azure Container Apps are applicable to other programming languages and frameworks too.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;How to Monitor Applications on Azure Container Apps&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;A id="user-content-how-to-monitor-applications-on-azure-container-apps" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#how-to-monitor-applications-on-azure-container-apps" target="_blank" rel="noopener" aria-label="Permalink: How to Monitor Applications on Azure Container Apps"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;The following approaches can be employed to monitor applications running on Azure Container Apps:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL dir="auto"&gt;
&lt;LI&gt;&lt;A href="#community--1-1-using-app-insights" target="_self"&gt;Using Application Insights (Azure Monitor OpenTelemetry Distro)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-2-using-otel-col" target="_self"&gt;Using the OpenTelemetry Collector of the Container Apps Environment&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-3-using-dapr" target="_self"&gt;Using Dapr&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="#community--1-4-forwarding-diag" target="_self"&gt;Forwarding system logs/console logs to Log Analytics&lt;/A&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;We will explain these monitoring methods in more detail later in this article.&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;In the latter section of this blog, we have provided concise guidance on using Application Insights and OpenTelemetry to monitor applications. If you're not well-versed in this topic, please refer to “&lt;STRONG&gt;&lt;A href="#community--1-monitoring-applications-with-appinsights-otel" target="_self"&gt;Monitoring Applications with Application Insights and OpenTelemetry&lt;/A&gt;&lt;/STRONG&gt;” section.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;Demo Application Specifications&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;A id="user-content-demo-application-specifications" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#demo-application-specifications" target="_blank" rel="noopener" aria-label="Permalink: Demo Application Specifications"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;SPAN&gt;For this article, we created a simple demo application that performs the four basic arithmetic operations on two values passed in a GET request and returns the results. The &lt;/SPAN&gt;&lt;CODE&gt;/calc&lt;/CODE&gt;&amp;nbsp;endpoint takes two parameters, and the controller sequentially invokes addition, subtraction, multiplication, and division APIs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;All of the API endpoints live in a single Java app, which can either call itself or be deployed as separate Container Apps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this demo, Container Apps are deployed for the front end (app1) and each operation (add, sub, mul, div). Spring Boot's Controller in the front end calls each Container App endpoint.&lt;/P&gt;
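The call chain described above can be sketched in plain Java. This is a hypothetical, simplified stand-in: in the real demo each operation is a separate Container App reached over HTTP, while here the calls are stubbed as local methods; the explicit division-by-zero check mirrors the exception surfaced later in the transaction views.

```java
// Sketch of the demo's /calc flow: the front end (app1) invokes
// add, sub, mul, and div in sequence. Names and structure are
// illustrative, not the article's actual source code.
public class CalcFlow {
    static double add(double a, double b) { return a + b; }
    static double sub(double a, double b) { return a - b; }
    static double mul(double a, double b) { return a * b; }
    static double div(double a, double b) {
        // In the real demo this failure shows up as an exception in the trace.
        if (b == 0) throw new ArithmeticException("division by zero");
        return a / b;
    }

    // Runs all four operations in order, like the front-end controller.
    public static double[] calc(double a, double b) {
        return new double[] { add(a, b), sub(a, b), mul(a, b), div(a, b) };
    }
}
```

With an instrumentation agent attached, each of these downstream calls would appear as a dependency of the app1 request.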
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;&lt;U id="1-using-app-insights"&gt;1. Using Application Insights (Azure Monitor OpenTelemetry Distro)&lt;/U&gt;&lt;/H3&gt;
&lt;U&gt;&lt;A id="user-content-1-using-application-insights-azure-monitor-opentelemetry-distro" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#1-using-application-insights-azure-monitor-opentelemetry-distro" target="_blank" rel="noopener" aria-label="Permalink: 1. Using Application Insights (Azure Monitor OpenTelemetry Distro)"&gt;&lt;/A&gt;&lt;/U&gt;&lt;/DIV&gt;
&lt;DIV id="tinyMceEditortsunomur_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This method requires incorporating the Application Insights agent either within the application's container or directly into the application itself. Refer to&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/opentelemetry-enable?tabs=java" target="_blank" rel="nofollow noopener"&gt;Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python, and Java applications&lt;/A&gt;&amp;nbsp;for more details.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It doesn't depend on Azure Container Apps, so if you're already monitoring with Application Insights, no changes are needed. Naturally, you can use all the features available in Application Insights.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;However, each application (each Container App) needs to specify the Application Insights agent or embed it in the code, which could increase the container image size and require connection string management.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H4 class="heading-element" dir="auto" tabindex="-1"&gt;Transaction view&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;A id="user-content-transaction-view" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#transaction-view" target="_blank" rel="noopener" aria-label="Permalink: Transaction view"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;Here's how a transaction and its exception appear.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can observe that app1's Container App calls the add, sub, mul, div Container Apps.&amp;nbsp;The division by zero result is displayed, showing where and what type of exception occurred.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H4 class="heading-element" dir="auto" tabindex="-1"&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4 class="heading-element" dir="auto" tabindex="-1"&gt;Metric&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;A id="user-content-metric" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#metric" target="_blank" rel="noopener" aria-label="Permalink: Metric"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;The metrics produced by the OpenTelemetry Meter are visible in Application Insights.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;&lt;U id="2-using-otel-col"&gt;2. Using the OpenTelemetry Collector of the Container Apps Environment&lt;/U&gt;&lt;/H3&gt;
&lt;A id="user-content-2-using-the-opentelemetry-collector-of-the-container-apps-environment" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#2-using-the-opentelemetry-collector-of-the-container-apps-environment" target="_blank" rel="noopener" aria-label="Permalink: 2. Using the OpenTelemetry Collector of the Container Apps Environment"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the Container Apps Environment, you can enable the OpenTelemetry Collector to receive data output by the OpenTelemetry SDK from Container Apps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As with Application Insights, this method requires specifying or implementing the OpenTelemetry agent, but because the applications do not send data to Application Insights directly, you no longer need to manage a connection string for each Container App.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Although you won't use Application Insights per Container App, you need to specify the Application Insights connection string per Container Apps Environment if you use it as an export destination.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This option is the easiest if you're already using OpenTelemetry. Note, however, that metrics currently cannot be sent to Application Insights. The following table is from&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/container-apps/opentelemetry-agents?tabs=arm#initialize-endpoints" target="_blank" rel="nofollow noopener"&gt;Collect and read OpenTelemetry data in Azure Container Apps (preview)&lt;/A&gt;:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H4 class="heading-element" dir="auto" tabindex="-1"&gt;How to Enable OpenTelemetry collector&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;A id="user-content-how-to-enable" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#how-to-enable" target="_blank" rel="noopener" aria-label="Permalink: How to Enable"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;SPAN&gt;Run the following command while setting up the Container Apps Environment to enable this feature. Once activated, the&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;OTEL_EXPORTER_OTLP_ENDPOINT&lt;/CODE&gt; environment variable will be automatically configured for the application.&lt;/P&gt;
&lt;/DIV&gt;
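As a minimal illustration (not from the original article), an application can inspect the injected endpoint itself; the helper name and local fallback URL are assumptions, and in practice the OpenTelemetry SDK's autoconfiguration reads this variable for you.

```java
public class OtlpEndpoint {
    // Returns the collector endpoint that the Container Apps Environment
    // injects via OTEL_EXPORTER_OTLP_ENDPOINT when the OpenTelemetry
    // Collector is enabled, falling back to the conventional local OTLP
    // gRPC endpoint for development runs.
    public static String endpoint() {
        String fromEnv = System.getenv("OTEL_EXPORTER_OTLP_ENDPOINT");
        return (fromEnv == null || fromEnv.isEmpty())
                ? "http://localhost:4317"  // assumed local default
                : fromEnv;
    }
}
```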
&lt;DIV class="markdown-heading" dir="auto"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;Azure CLI&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;DIV class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;PRE&gt;az containerapp env telemetry app-insights &lt;SPAN class="pl-c1"&gt;set&lt;/SPAN&gt; -n aca-otelsample -g rg-opentelemetry --connection-string &lt;SPAN class="pl-k"&gt;&amp;lt;&lt;/SPAN&gt;Connection String&lt;SPAN class="pl-k"&gt;&amp;gt;&lt;/SPAN&gt; --enable-open-telemetry-traces &lt;SPAN class="pl-c1"&gt;true&lt;/SPAN&gt; --enable-open-telemetry-logs &lt;SPAN class="pl-c1"&gt;true&lt;/SPAN&gt;&lt;/PRE&gt;
&lt;UL&gt;
&lt;LI class="zeroclipboard-container"&gt;&lt;SPAN&gt;Azure Portal&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;img /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H4 class="heading-element" dir="auto" tabindex="-1"&gt;Transaction view&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Similar to &lt;STRONG&gt;&lt;A href="#community--1-1-using-app-insights" target="_self"&gt;method #1&lt;/A&gt;&lt;/STRONG&gt;, it displays the service call relationships. You can observe that the application sequentially calls the add, sub, mul, and div Container Apps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;&lt;U id="3-using-dapr"&gt;3. Using Dapr&lt;/U&gt;&lt;/H3&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;Using Container Apps with Dapr allows you to make requests between Container Apps, like&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;http://localhost:3500/v1.0/invoke/checkout/method/checkout/100&lt;/CODE&gt;.&amp;nbsp;The Dapr container, working as a sidecar in Container Apps, intercepts the request and directs it to the appropriate Container App.&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Currently, Application Insights can receive traces of requests passing through Dapr. It uses the Application Insights connection string configured in the Container Apps Environment. To enable this, you only need to activate Dapr in Container Apps; no Application Insights (OpenTelemetry) agent configuration is required. This is the simplest method if you already plan to use Dapr.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It's important to note that Dapr can only monitor inter-service communication; it does not capture intra-service traces or metrics. This means activities such as an application within Container Apps accessing its own API endpoint or generating logs are not monitored.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;However, if the API is built as a separate Container App and the communication between them uses Dapr, it will be tracked. To gather intra-service data, you must use Application Insights or OpenTelemetry agents as outlined in &lt;STRONG&gt;method &lt;A href="#community--1-1-using-app-insights" target="_self"&gt;#1&lt;/A&gt; and &lt;A href="#community--1-2-using-otel-col" target="_self"&gt;#2&lt;/A&gt;&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please note that you cannot use &lt;STRONG&gt;&lt;A href="#community--1-2-using-otel-col" target="_self"&gt;method #2&lt;/A&gt;&lt;/STRONG&gt; at the same time (an error will occur when setting the connection string). This limitation applies only to sending data to Application Insights and does not affect the use of Dapr itself.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H4 class="heading-element focus-visible" dir="auto" tabindex="-1" data-focus-visible-added=""&gt;How to enable Dapr based instrumentation&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;A id="user-content-how-to-enable-1" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#how-to-enable-1" target="_blank" rel="noopener" aria-label="Permalink: How to Enable"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;SPAN&gt;The command below sets up a Container Apps Environment. The flag&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;--dapr-instrumentation-key&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;is used to indicate that Dapr should utilize the Instrumentation key from Application Insights.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;Azure CLI&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;PRE&gt;az containerapp env create --name aca-otelsample --resource-group rg-opentelemetry --location japaneast --dapr-connection-string &lt;SPAN class="pl-k"&gt;&amp;lt;&lt;/SPAN&gt;Connection String&lt;SPAN class="pl-k"&gt;&amp;gt;&lt;/SPAN&gt;&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&lt;SPAN&gt;Details of the command can be found at &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/containerapp/env?view=azure-cli-latest#az-containerapp-env-create" target="_blank" rel="nofollow noopener"&gt;az containerapp env create&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;Bicep&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;This Bicep code corresponds to the Azure CLI&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;--dapr-instrumentation-key&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;option.&amp;nbsp;You can include a connection string within the properties section.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight highlight-source-bicep notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-k"&gt;resource&lt;/SPAN&gt; &lt;SPAN class="pl-smi"&gt;env&lt;/SPAN&gt; &lt;SPAN class="pl-s"&gt;'Microsoft.App/managedEnvironments@2024-03-01'&lt;/SPAN&gt; = {
    &lt;SPAN class="pl-smi"&gt;name&lt;/SPAN&gt;: &lt;SPAN class="pl-smi"&gt;aca_env_name&lt;/SPAN&gt;
    &lt;SPAN class="pl-smi"&gt;location&lt;/SPAN&gt;: &lt;SPAN class="pl-smi"&gt;location&lt;/SPAN&gt;
    &lt;SPAN class="pl-smi"&gt;properties&lt;/SPAN&gt;: {
      &lt;SPAN class="pl-smi"&gt;appLogsConfiguration&lt;/SPAN&gt;: {
        &lt;SPAN class="pl-smi"&gt;destination&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;'log-analytics'&lt;/SPAN&gt;
        &lt;SPAN class="pl-smi"&gt;logAnalyticsConfiguration&lt;/SPAN&gt;: {
          &lt;SPAN class="pl-smi"&gt;customerId&lt;/SPAN&gt;: &lt;SPAN class="pl-smi"&gt;la&lt;/SPAN&gt;.&lt;SPAN class="pl-smi"&gt;properties&lt;/SPAN&gt;.&lt;SPAN class="pl-smi"&gt;customerId&lt;/SPAN&gt;
          &lt;SPAN class="pl-smi"&gt;sharedKey&lt;/SPAN&gt;: &lt;SPAN class="pl-smi"&gt;la&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;listKeys&lt;/SPAN&gt;().&lt;SPAN class="pl-smi"&gt;primarySharedKey&lt;/SPAN&gt;
        }
      }
      &lt;SPAN class="pl-smi"&gt;zoneRedundant&lt;/SPAN&gt;: &lt;SPAN class="pl-c1"&gt;false&lt;/SPAN&gt;
      &lt;SPAN class="pl-smi"&gt;daprAIConnectionString&lt;/SPAN&gt;: &amp;lt;&lt;SPAN class="pl-smi"&gt;Connection&lt;/SPAN&gt; &lt;SPAN class="pl-smi"&gt;String&lt;/SPAN&gt;&amp;gt; &lt;SPAN class="pl-c"&gt;// &amp;lt;----- Setting the Application Insights connection string&lt;/SPAN&gt;
      &lt;SPAN class="pl-smi"&gt;workloadProfiles&lt;/SPAN&gt;: [
        {
          &lt;SPAN class="pl-smi"&gt;workloadProfileType&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;'Consumption'&lt;/SPAN&gt;
          &lt;SPAN class="pl-smi"&gt;name&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;'Consumption'&lt;/SPAN&gt;
        }
      ]
      &lt;SPAN class="pl-smi"&gt;peerAuthentication&lt;/SPAN&gt;: {
        &lt;SPAN class="pl-smi"&gt;mtls&lt;/SPAN&gt;: {
          &lt;SPAN class="pl-smi"&gt;enabled&lt;/SPAN&gt;: &lt;SPAN class="pl-c1"&gt;false&lt;/SPAN&gt;
        }
      }
    }
  }&lt;/PRE&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H4 class="heading-element" dir="auto" tabindex="-1"&gt;Transaction view&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In &lt;STRONG&gt;method &lt;A href="#community--1-1-using-app-insights" target="_self"&gt;#1&lt;/A&gt; and &lt;A href="#community--1-2-using-otel-col" target="_self"&gt;#2&lt;/A&gt;&lt;/STRONG&gt;, the front-end application (app1) sequentially invoked REST APIs for each arithmetic operation. In the Dapr-based method, by contrast, each REST API call appears isolated. This is because Dapr handles communication between Container Apps without visibility into the internal processing of the application.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;&lt;U id="4-forwarding-diag"&gt;4. Forwarding System Logs/Console Logs to Log Analytics&lt;/U&gt;&lt;/H3&gt;
&lt;A id="user-content-4-forwarding-system-logsconsole-logs-to-log-analytics" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#4-forwarding-system-logsconsole-logs-to-log-analytics" target="_blank" rel="noopener" aria-label="Permalink: 4. Forwarding System Logs/Console Logs to Log Analytics"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Within the Container Apps Environment, you can forward both system and console logs to Log Analytics. Although only logs are collected (no traces, events, or metrics), this is the simplest way to capture platform logs without altering the application's code or Container Apps settings.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following document details the steps for activating logging in Container Apps to Log Analytics:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/container-apps/log-options" target="_blank" rel="nofollow noopener"&gt;Log storage and monitoring options in Azure Container Apps&lt;/A&gt;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H4 class="heading-element" dir="auto" tabindex="-1"&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4 class="heading-element" dir="auto" tabindex="-1"&gt;How to view logs&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As observed, without a corresponding ID, it becomes challenging to track application transactions based on the logs produced in Log Analytics.&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 id="monitoring-applications-with-appinsights-otel" class="heading-element" dir="auto" tabindex="-1"&gt;Monitoring Applications with Application Insights and OpenTelemetry&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;A id="user-content-monitoring-applications-with-application-insights-and-opentelemetry" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#monitoring-applications-with-application-insights-and-opentelemetry" target="_blank" rel="noopener" aria-label="Permalink: Monitoring Applications with Application Insights and OpenTelemetry"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;For those new to Application Insights and OpenTelemetry, here is a brief introduction. If you are already knowledgeable about these topics, feel free to skip ahead to the &lt;STRONG&gt;&lt;A href="#community--1-conclusion" target="_self"&gt;Conclusion&lt;/A&gt;&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;What Application Insights Can Do&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;A id="user-content-what-application-insights-can-do" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#what-application-insights-can-do" target="_blank" rel="noopener" aria-label="Permalink: What Application Insights Can Do"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;Application Insights is an Azure-native service designed for application monitoring. This tool allows you to collect logs, metrics, and traces from your applications and visualize the data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Refer to the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview" target="_blank" rel="nofollow noopener"&gt;Application Insights overview&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;for a summary of Application Insights.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Application Insights is an Azure service compatible with OpenTelemetry, the open standard for monitoring. It supports various language SDKs and can receive data from OpenTelemetry, enabling application monitoring without code changes. This service conveniently handles traces, logs, and metrics all in one place.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is a brief guide on how to monitor your Java application using Application Insights.&amp;nbsp;There are two ways to utilize Application Insights from Java:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Specifying the Azure Monitor OpenTelemetry Distro as a Java agent&lt;/LI&gt;
&lt;LI&gt;Integrating it directly in the source code&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The agent method replaces the OpenTelemetry agent with the Azure Monitor OpenTelemetry Distro. Before launching your Java applications with the Java agent, you need to set the connection string in an environment variable or in a configuration file, as illustrated below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="pl-k"&gt;export&lt;/SPAN&gt; APPLICATIONINSIGHTS_CONNECTION_STRING=&lt;SPAN class="pl-k"&gt;&amp;lt;&lt;/SPAN&gt;Your Connection String&lt;SPAN class="pl-k"&gt;&amp;gt;&lt;/SPAN&gt;
  java -javaagent:&lt;SPAN class="pl-s"&gt;&lt;SPAN class="pl-pds"&gt;"&lt;/SPAN&gt;path/to/applicationinsights-agent-3.5.3.jar&lt;SPAN class="pl-pds"&gt;"&lt;/SPAN&gt;&lt;/SPAN&gt; -jar myapp.jar&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Refer to the "Enable using code" section of the following document for implementation details.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/java-spring-boot" target="_blank" rel="nofollow noopener"&gt;Using Azure Monitor Application Insights with Spring Boot&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Java agent for Application Insights acts as a wrapper around OpenTelemetry components, offering comparable capabilities. In addition, it includes extended implementations that leverage the features available in Application Insights. For further details, refer to&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/opentelemetry-add-modify?tabs=aspnetcore#why-should-i-use-the-azure-monitor-opentelemetry-distro" target="_blank" rel="nofollow noopener"&gt;Why should I use the "Azure Monitor OpenTelemetry Distro"?&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;What OpenTelemetry Can Do&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;A id="user-content-what-opentelemetry-can-do" class="anchor" href="https://github.com/tsubasaxZZZ/TechcommunityBlog/blob/main/how-to-monitor-aca.md#what-opentelemetry-can-do" target="_blank" rel="noopener" aria-label="Permalink: What OpenTelemetry Can Do"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;SPAN&gt;OpenTelemetry enables the collection of logs, metrics, and traces, which can be integrated with various compliant tools. For an overview, visit the&lt;/SPAN&gt;&lt;A href="https://opentelemetry.io/docs/what-is-opentelemetry/" target="_blank" rel="nofollow noopener"&gt;&amp;nbsp;official site&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Key points are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;You retain ownership of your data without vendor lock-in&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;You need to learn only one set of APIs and conventions.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Similar to the Application Insights integration discussed above, you can add OpenTelemetry to a Java app in two ways:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Using the OpenTelemetry agent as a Java agent via a command-line argument (codeless instrumentation)&lt;/LI&gt;
&lt;LI&gt;Adding it directly into the source code (code-based instrumentation)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;To use the codeless instrumentation approach, simply add the Java agent as a java command-line argument; no changes to your code are required:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;# If Application Insights is used, this OpenTelemetry Java agent can be replaced by an Application Insights agent as discussed earlier.&lt;BR /&gt;java -javaagent:./opentelemetry-javaagent.jar -jar myapp.jar&lt;/PRE&gt;
&lt;P&gt;If you want to send metrics and logs while using the Java agent, you also need to obtain an OpenTelemetry instance, as in the following code:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="pl-k"&gt;    package&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;com&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;example&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;otelsample&lt;/SPAN&gt;;
  &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;io&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;opentelemetry&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;api&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;OpenTelemetry&lt;/SPAN&gt;;
  &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;org&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;springframework&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;context&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;annotation&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;Bean&lt;/SPAN&gt;;
  &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;org&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;springframework&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;context&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;annotation&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;Configuration&lt;/SPAN&gt;;
  &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;io&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;opentelemetry&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;api&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;GlobalOpenTelemetry&lt;/SPAN&gt;;
  
  &lt;SPAN class="pl-c1"&gt;@&lt;/SPAN&gt;&lt;SPAN class="pl-c1"&gt;Configuration&lt;/SPAN&gt;
  &lt;SPAN class="pl-k"&gt;public&lt;/SPAN&gt; &lt;SPAN class="pl-k"&gt;class&lt;/SPAN&gt; &lt;SPAN class="pl-smi"&gt;OpenTelemetryConfig&lt;/SPAN&gt; {
      &lt;SPAN class="pl-c1"&gt;@&lt;/SPAN&gt;&lt;SPAN class="pl-c1"&gt;Bean&lt;/SPAN&gt;
      &lt;SPAN class="pl-k"&gt;public&lt;/SPAN&gt; &lt;SPAN class="pl-smi"&gt;OpenTelemetry&lt;/SPAN&gt; &lt;SPAN class="pl-en"&gt;openTelemetry&lt;/SPAN&gt;() {
          &lt;SPAN class="pl-k"&gt;return&lt;/SPAN&gt; &lt;SPAN class="pl-smi"&gt;GlobalOpenTelemetry&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;();
      }
  }&lt;/PRE&gt;
&lt;P&gt;After that step, you can output logs, generate events, and create metrics with OpenTelemetry using the code below:&lt;/P&gt;
&lt;DIV class="highlight highlight-source-java notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-c"&gt;  // tracer, logger, and meter are assumed to have been created from the OpenTelemetry instance above&lt;/SPAN&gt;
  &lt;SPAN class="pl-c"&gt;// Start an OpenTelemetry span and send an event&lt;/SPAN&gt;
  &lt;SPAN class="pl-smi"&gt;Span&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;span&lt;/SPAN&gt; = &lt;SPAN class="pl-s1"&gt;tracer&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;spanBuilder&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"CalcController"&lt;/SPAN&gt;).&lt;SPAN class="pl-en"&gt;startSpan&lt;/SPAN&gt;();
  &lt;SPAN class="pl-s1"&gt;span&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;addEvent&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Start calc"&lt;/SPAN&gt;);
  
  &lt;SPAN class="pl-c"&gt;// Output a log&lt;/SPAN&gt;
  &lt;SPAN class="pl-s1"&gt;logger&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;info&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Dummy log message"&lt;/SPAN&gt;);
  
  &lt;SPAN class="pl-c"&gt;// Create a counter&lt;/SPAN&gt;
  &lt;SPAN class="pl-s1"&gt;counter&lt;/SPAN&gt; = &lt;SPAN class="pl-s1"&gt;meter&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;counterBuilder&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"calc.counter"&lt;/SPAN&gt;)
          .&lt;SPAN class="pl-en"&gt;setDescription&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"How many times the calculator has been run."&lt;/SPAN&gt;)
          .&lt;SPAN class="pl-en"&gt;setUnit&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"runs"&lt;/SPAN&gt;)
          .&lt;SPAN class="pl-en"&gt;build&lt;/SPAN&gt;();
  &lt;SPAN class="pl-s1"&gt;counter&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;add&lt;/SPAN&gt;(&lt;SPAN class="pl-c1"&gt;1&lt;/SPAN&gt;);&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For code-based instrumentation implementation, refer to the documentation at&amp;nbsp;&lt;A href="https://opentelemetry.io/docs/languages/java/instrumentation/#manual-instrumentation-setup" target="_blank" rel="nofollow noopener"&gt;https://opentelemetry.io/docs/languages/java/instrumentation/#manual-instrumentation-setup&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You also need a way to collect and visualize the data OpenTelemetry produces. Open-source tools like Jaeger and Zipkin handle traces, while Prometheus collects metrics that can be visualized with Grafana.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using the OpenTelemetry Collector lets you gather data without embedding code in your application for each tool. It acts as an interface between your app and various tools. The &lt;A title="OpenTelemetry Collector" href="https://opentelemetry.io/docs/demo/architecture/" target="_blank" rel="noopener"&gt;demo architecture&lt;/A&gt; in the official documentation is easy to understand.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;"Demo Architecture", OpenTelemetry.io,&amp;nbsp;&lt;A href="https://opentelemetry.io/docs/demo/architecture/" target="_blank" rel="nofollow noopener"&gt;https://opentelemetry.io/docs/demo/architecture/&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;Applications send data to the OpenTelemetry Collector's endpoint (http://localhost:4317 for gRPC, or http://localhost:4318 for HTTP), and the backend endpoints are specified in the OpenTelemetry Collector's configuration.&lt;/P&gt;
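&lt;P&gt;For example, when launching the app with the codeless Java agent shown earlier, the collector endpoint can be set through standard agent configuration properties. This is a minimal configuration sketch assuming a collector listening on the default local ports:&lt;/P&gt;

```shell
# Point the Java agent at a local OpenTelemetry Collector.
# otel.exporter.otlp.endpoint and otel.exporter.otlp.protocol are standard
# agent settings; adjust host and port to match your collector deployment.
java -javaagent:./opentelemetry-javaagent.jar \
     -Dotel.exporter.otlp.endpoint=http://localhost:4317 \
     -Dotel.exporter.otlp.protocol=grpc \
     -jar myapp.jar
```

&lt;P&gt;With the HTTP receiver instead, the endpoint would be http://localhost:4318 and the protocol http/protobuf.&lt;/P&gt;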
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, here is a configuration for the OpenTelemetry Collector:&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="pl-ent"&gt;receivers&lt;/SPAN&gt;:
    &lt;SPAN class="pl-ent"&gt;otlp&lt;/SPAN&gt;:
      &lt;SPAN class="pl-ent"&gt;protocols&lt;/SPAN&gt;:
        &lt;SPAN class="pl-ent"&gt;grpc&lt;/SPAN&gt;:
          &lt;SPAN class="pl-ent"&gt;endpoint&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;0.0.0.0:4317&lt;/SPAN&gt;
        &lt;SPAN class="pl-ent"&gt;http&lt;/SPAN&gt;:
          &lt;SPAN class="pl-ent"&gt;endpoint&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;0.0.0.0:4318&lt;/SPAN&gt;
  
  &lt;SPAN class="pl-ent"&gt;exporters&lt;/SPAN&gt;:
    &lt;SPAN class="pl-ent"&gt;debug&lt;/SPAN&gt;:
      &lt;SPAN class="pl-ent"&gt;verbosity&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;detailed&lt;/SPAN&gt;
    &lt;SPAN class="pl-ent"&gt;otlp/jaeger&lt;/SPAN&gt;:
      &lt;SPAN class="pl-ent"&gt;endpoint&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;jaeger-all-in-one:4317&lt;/SPAN&gt;
      &lt;SPAN class="pl-ent"&gt;tls&lt;/SPAN&gt;:
        &lt;SPAN class="pl-ent"&gt;insecure&lt;/SPAN&gt;: &lt;SPAN class="pl-c1"&gt;true&lt;/SPAN&gt;
    &lt;SPAN class="pl-ent"&gt;prometheus&lt;/SPAN&gt;:
      &lt;SPAN class="pl-ent"&gt;endpoint&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;&lt;SPAN class="pl-pds"&gt;"&lt;/SPAN&gt;0.0.0.0:8889&lt;SPAN class="pl-pds"&gt;"&lt;/SPAN&gt;&lt;/SPAN&gt;
      &lt;SPAN class="pl-ent"&gt;const_labels&lt;/SPAN&gt;:
        &lt;SPAN class="pl-ent"&gt;label1&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;value1&lt;/SPAN&gt;
    &lt;SPAN class="pl-ent"&gt;zipkin&lt;/SPAN&gt;:
      &lt;SPAN class="pl-ent"&gt;endpoint&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;&lt;SPAN class="pl-pds"&gt;"&lt;/SPAN&gt;http://zipkin-all-in-one:9411/api/v2/spans&lt;SPAN class="pl-pds"&gt;"&lt;/SPAN&gt;&lt;/SPAN&gt;
      &lt;SPAN class="pl-ent"&gt;format&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;proto&lt;/SPAN&gt;
  &lt;SPAN class="pl-ent"&gt;processors&lt;/SPAN&gt;:
    &lt;SPAN class="pl-ent"&gt;batch&lt;/SPAN&gt;:
  
  &lt;SPAN class="pl-ent"&gt;extensions&lt;/SPAN&gt;:
    &lt;SPAN class="pl-ent"&gt;health_check&lt;/SPAN&gt;:
    &lt;SPAN class="pl-ent"&gt;pprof&lt;/SPAN&gt;:
      &lt;SPAN class="pl-ent"&gt;endpoint&lt;/SPAN&gt;: &lt;SPAN class="pl-c1"&gt;:1888&lt;/SPAN&gt;
    &lt;SPAN class="pl-ent"&gt;zpages&lt;/SPAN&gt;:
      &lt;SPAN class="pl-ent"&gt;endpoint&lt;/SPAN&gt;: &lt;SPAN class="pl-c1"&gt;:55679&lt;/SPAN&gt;
  
  &lt;SPAN class="pl-ent"&gt;service&lt;/SPAN&gt;:
    &lt;SPAN class="pl-ent"&gt;telemetry&lt;/SPAN&gt;:
      &lt;SPAN class="pl-ent"&gt;logs&lt;/SPAN&gt;:
        &lt;SPAN class="pl-ent"&gt;level&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;DEBUG&lt;/SPAN&gt;
        &lt;SPAN class="pl-ent"&gt;sampling&lt;/SPAN&gt;:
          &lt;SPAN class="pl-ent"&gt;enabled&lt;/SPAN&gt;: &lt;SPAN class="pl-c1"&gt;false&lt;/SPAN&gt;
    &lt;SPAN class="pl-ent"&gt;extensions&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;[pprof, zpages, health_check]&lt;/SPAN&gt;
    &lt;SPAN class="pl-ent"&gt;pipelines&lt;/SPAN&gt;:
      &lt;SPAN class="pl-ent"&gt;traces&lt;/SPAN&gt;:
        &lt;SPAN class="pl-ent"&gt;receivers&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;[otlp]&lt;/SPAN&gt;
        &lt;SPAN class="pl-ent"&gt;exporters&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;[debug, zipkin, otlp/jaeger]&lt;/SPAN&gt;
      &lt;SPAN class="pl-ent"&gt;metrics&lt;/SPAN&gt;:
        &lt;SPAN class="pl-ent"&gt;receivers&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;[otlp]&lt;/SPAN&gt;
        &lt;SPAN class="pl-ent"&gt;processors&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;[batch]&lt;/SPAN&gt;
        &lt;SPAN class="pl-ent"&gt;exporters&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;[debug, prometheus]&lt;/SPAN&gt;
      &lt;SPAN class="pl-ent"&gt;logs&lt;/SPAN&gt;:
        &lt;SPAN class="pl-ent"&gt;receivers&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;[otlp]&lt;/SPAN&gt;
        &lt;SPAN class="pl-ent"&gt;exporters&lt;/SPAN&gt;: &lt;SPAN class="pl-s"&gt;[debug]&lt;BR /&gt;&lt;/SPAN&gt;&lt;/PRE&gt;
&lt;P&gt;&lt;EM&gt;This configuration is based on "Configuration", OpenTelemetry.io, &lt;A href="https://opentelemetry.io/docs/collector/configuration/" target="_blank" rel="noopener"&gt;https://opentelemetry.io/docs/collector/configuration/&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;In the receivers, set the receiving endpoints, and in the exporters, configure the export destinations. By wiring the two together in the service pipelines, you can, for example, receive traces at the OTLP endpoint and export them to Zipkin and Jaeger.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;OpenTelemetry avoids vendor lock-in, offering developers choices but requiring various tools. For example, you might use Jaeger for traces and Prometheus for metrics, each needing its own configuration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Application Insights simplifies this by covering everything from data collection to visualization in a single service, removing the need to learn multiple tools.&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 id="conclusion" class="heading-element" dir="auto" tabindex="-1"&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;P&gt;As demonstrated, there are multiple ways to monitor Azure Container Apps. We hope you find the method that fits your application needs and preferred implementation approach.&lt;/P&gt;
&lt;/DIV&gt;</description>
      <pubDate>Tue, 10 Sep 2024 03:49:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-to-monitor-applications-by-using-opentelemetry-on-azure/ba-p/4235035</guid>
      <dc:creator>tsunomur</dc:creator>
      <dc:date>2024-09-10T03:49:44Z</dc:date>
    </item>
    <item>
      <title>Leveraging dynamic few-shot prompt with Azure OpenAI</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/leveraging-dynamic-few-shot-prompt-with-azure-openai/ba-p/4225235</link>
      <description>&lt;P&gt;&lt;FONT size="3"&gt;Few-shot prompt is a technique used in natural language processing (NLP) where a model is given a small number of examples (or “shots”) to learn from before generating a response or completing a task. This approach is particularly beneficial because it allows the model to understand and adapt to new tasks with minimal data, making it highly efficient and versatile. By leveraging few-shot prompts, users can achieve high-quality results without the need for extensive training datasets, thus saving time and computational resources. Additionally, this method enhances the model’s ability to generalize from limited examples, leading to more accurate and contextually relevant outputs.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;One challenge of using few-shot prompts is that as the number of examples increases, the prompt can become excessively large, leading to inefficiencies and potential performance issues. To address this, a dynamic few-shot prompt technique can be employed. In this approach, a comprehensive list of prompts is stored in a vector store, and the user’s input is matched against this vector store to identify the most relevant examples. By utilizing OpenAI embeddings alongside the vector store, this method ensures that only the most pertinent examples are included in the prompt, thereby optimizing its size and relevance. This dynamic technique not only maintains the efficiency and effectiveness of few-shot learning but also enhances the model’s ability to generate accurate and contextually appropriate responses.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Architecture&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The diagram above shows the overall architecture of the solution. Let's break down each component:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Vector Store - This store will hold the few-shot prompt examples. It is indexed by each example's input and the content is the input/output pair&lt;/LI&gt;
&lt;LI&gt;Embedding Model - This model is responsible for transforming the user input into a vector, which can be used to query the vector store&lt;/LI&gt;
&lt;LI&gt;GPT Model - This is the model we will be using for chat completion, it's the one responsible for providing answers to the user&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Use Case&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;To demonstrate how dynamic few-shot prompt can be used, let's consider a scenario where we have a chat completion that can handle 3 different types of tasks:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Display data in a table format&lt;/LI&gt;
&lt;LI&gt;Classify texts&lt;/LI&gt;
&lt;LI&gt;Summarize texts&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;For each of these tasks we want to provide some examples, to better show the model what it should do.&lt;/P&gt;
&lt;P&gt;One simple way of doing this is to provide all examples related to these tasks in the prompt itself. This strategy comes with a few downsides:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Information Overload&lt;/STRONG&gt;: Too many examples can overwhelm the model, making it difficult to discern the main request.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Confusion&lt;/STRONG&gt;: The model might get confused and generate responses that are off-topic or irrelevant.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Accuracy Issues&lt;/STRONG&gt;: The focus on examples can detract from the accuracy of the response.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost&lt;/STRONG&gt;: More examples mean more tokens to be processed by the model, which increases cost.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;It's easy to see that this strategy doesn't scale if we need to support other types of tasks, since each new task would require even more examples.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;On the other hand, if we use a dynamic few-shot prompt, we include only the most relevant examples (for the given user input) in the prompt. For example, we can decide to use only the top 3 most relevant examples.&lt;/P&gt;
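&lt;P&gt;The selection step can be sketched without any external dependencies. This is a toy illustration, not the actual implementation: the hand-made 3-dimensional vectors below stand in for real embeddings produced by a model such as text-embedding-ada-002, and a plain cosine-similarity ranking plays the role of the vector store lookup:&lt;/P&gt;

```python
import math

# Toy "embeddings": in a real setup these vectors come from an embedding model.
EXAMPLES = [
    {"input": "Show populations in a table", "vec": [1.0, 0.1, 0.0]},
    {"input": "List top movies in a table", "vec": [0.9, 0.2, 0.1]},
    {"input": "Classify this sentence", "vec": [0.0, 1.0, 0.1]},
    {"input": "Summarize this paragraph", "vec": [0.1, 0.0, 1.0]},
]

def cosine(a, b):
    # Cosine similarity between two vectors of equal length.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def select_examples(query_vec, examples, k=3):
    # Rank every stored example by similarity to the query and keep the top k.
    ranked = sorted(examples, key=lambda ex: cosine(query_vec, ex["vec"]), reverse=True)
    return ranked[:k]

# A query vector close to the "table" examples retrieves those first.
top = select_examples([1.0, 0.15, 0.05], EXAMPLES, k=2)
print([ex["input"] for ex in top])
```

&lt;P&gt;A production setup replaces the in-memory list with a vector store (such as Azure AI Search) and the hand-made vectors with embeddings from the same model used to index the examples.&lt;/P&gt;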
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To implement this strategy we just need the steps below:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Define examples list
&lt;UL&gt;
&lt;LI&gt;Each example is an object that contains an input (user question example) and an output (assistant response for that question)&lt;/LI&gt;
&lt;LI&gt;
&lt;PRE&gt;{"input": "&amp;lt;example_input&amp;gt;", "output": "&amp;lt;example_output&amp;gt;"&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Index examples list in vector store
&lt;UL&gt;
&lt;LI&gt;Index key is the embedded example's input&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Find most relevant examples
&lt;UL&gt;
&lt;LI&gt;Embed the user input using the same embedding model used in the previous step&lt;/LI&gt;
&lt;LI&gt;Use the embedded user input to find the most relevant examples in the vector store&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Add examples in the prompt
&lt;UL&gt;
&lt;LI&gt;Add each example's input as a user message and its output as an assistant message&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
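&lt;P&gt;The last step can be sketched as a small helper that interleaves the selected examples into the chat history before the real question. This dependency-free sketch mirrors the message layout used later in main.py:&lt;/P&gt;

```python
def build_messages(system_message, examples, user_input):
    # Each selected example contributes a user/assistant pair before the real question.
    messages = [{"role": "system", "content": system_message}]
    for ex in examples:
        messages.append({"role": "user", "content": ex["input"]})
        messages.append({"role": "assistant", "content": ex["output"]})
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages(
    "You are a helpful assistant.",
    [{"input": "Classify: 'hello'", "output": "Greeting"}],
    "Classify: 'E=mc^2'",
)
print([m["role"] for m in msgs])
```

&lt;P&gt;Because the examples appear as prior turns rather than as instructions, the model treats them as demonstrated behavior and tends to reuse the same response format.&lt;/P&gt;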
&lt;P&gt;In this solution we use:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;the&amp;nbsp;&lt;A href="https://langchain-fanyi.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/similarity.html" target="_self"&gt;SemanticSimilarityExampleSelector&lt;/A&gt; class from the langchain_core package, since it already implements most of the steps mentioned above:
&lt;UL&gt;
&lt;LI&gt;Embed examples' input and index them into the vector store&lt;/LI&gt;
&lt;LI&gt;Define how many relevant examples should be returned&lt;/LI&gt;
&lt;LI&gt;Return relevant examples for a given user input&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;InMemoryVectorStore
&lt;UL&gt;
&lt;LI&gt;In-memory implementation of VectorStore using a dictionary&lt;/LI&gt;
&lt;LI&gt;The package langchain-community provides other vector store implementations, like Azure Search&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;AzureOpenAIEmbeddings
&lt;UL&gt;
&lt;LI&gt;Uses an OpenAI embedding model, like&amp;nbsp;text-embedding-ada-002&lt;/LI&gt;
&lt;LI&gt;The package langchain-community provides other embedding implementations, like Text2Vec&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Code Implementation&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;The implementation of this solution is really simple. We only need 2 files: requirements.txt and main.py. Their contents can be found below:&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;requirements.txt&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="text"&gt;langchain-openai==0.1.21
azure-identity==1.17.1
numpy==2.0.1&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;main.py&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# imports
from azure.identity import DefaultAzureCredential
from openai import AzureOpenAI
from langchain_openai import AzureOpenAIEmbeddings
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_core.vectorstores import InMemoryVectorStore

# constants
AZURE_OPENAI_ENDPOINT = "&amp;lt;AZURE_OPENAI_ENDPOINT&amp;gt;"
DEPLOYMENT_EMBEDDING = "&amp;lt;DEPLOYMENT_EMBEDDING&amp;gt;"
DEPLOYMENT_CHAT = "&amp;lt;DEPLOYMENT_CHAT&amp;gt;"
OPENAI_API_VERSION = "2023-05-15"

SYSTEM_MESSAGE = 'You are a helpful assistant that uses previous interactions to determine how a question should be answered. You should try to use those same response formats'
# the examples to be indexed in the vector store. Each example should be in the format `{"input": "&amp;lt;example_input&amp;gt;", "output": "&amp;lt;example_output&amp;gt;"}`
EXAMPLES_LIST = [
    # data examples
    {
        "input": "Can you provide the population of the top 5 most populous countries?",
        "output": '''
| Country       | Population (2023) |
|---------------|-------------------|
| China         | 1,425,000,000     |
| India         | 1,417,000,000     |
| United States | 332,000,000       |
| Indonesia     | 276,000,000       |
| Pakistan      | 231,000,000       |
'''
    },
    {
        "input": "What are the top 5 programming languages in 2023 by popularity?",
        "output": '''
| Rank | Programming Language | Popularity (%) |
|------|----------------------|----------------|
| 1    | Python               | 29.9           |
| 2    | JavaScript           | 19.1           |
| 3    | Java                 | 16.2           |
| 4    | C#                   | 8.3            |
| 5    | PHP                  | 6.1            |
'''
    },
    {
        "input": "Can you list the top 5 highest-grossing movies of all time?",
        "output": '''
| Rank | Movie Title             | Gross Revenue (USD) |
|------|-------------------------|---------------------|
| 1    | Avatar                  | $2.923 billion      |
| 2    | Avengers: Endgame       | $2.798 billion      |
| 3    | Titanic                 | $2.195 billion      |
| 4    | Star Wars: The Force Awakens | $2.068 billion |
| 5    | Avengers: Infinity War  | $2.048 billion      |
'''
    },
    {
        "input": "What are the top 5 universities in the world according to the 2023 QS World University Rankings?",
        "output": '''
| Rank | University                        | Location       |
|------|-----------------------------------|----------------|
| 1    | Massachusetts Institute of Technology (MIT) | USA  |
| 2    | University of Cambridge           | UK             |
| 3    | Stanford University               | USA            |
| 4    | University of Oxford              | UK             |
| 5    | Harvard University                | USA            |
'''
    },
    {
        "input": "Can you provide the GDP of the top 5 largest economies in 2023?",
        "output": '''
| Rank | Country       | GDP (USD Trillions) |
|------|---------------|---------------------|
| 1    | United States | 25.3                |
| 2    | China         | 17.7                |
| 3    | Japan         | 4.9                 |
| 4    | Germany       | 4.2                 |
| 5    | India         | 3.5                 |
'''
    },

    # Classification examples
    {
        "input": "Classify the following text: 'The quick brown fox jumps over the lazy dog.'",
        "output": "Sentence"
    },
    {
        "input": "Classify the following text: 'To be, or not to be, that is the question.'",
        "output": "Quote"
    },
    {
        "input": "Classify the following text: 'Once upon a time, in a land far, far away, there lived a young princess.'",
        "output": "Story"
    },
    {
        "input": "Classify the following text: 'E=mc^2 is a formula expressing the relationship between mass and energy.'",
        "output": "Scientific Statement"
    },
    {
        "input": "Classify the following text: 'I hope you have a great day!'",
        "output": "Greeting"
    },

    # Summarization examples
    {
        "input": "Summarize the following paragraph: 'The rapid advancement of technology has significantly impacted various industries. Automation and artificial intelligence are transforming the workforce, leading to increased efficiency but also raising concerns about job displacement. Companies are investing heavily in tech to stay competitive, while governments are grappling with the need to update regulations.'",
        "output": '''
- Technology is transforming industries through automation and AI.
- Increased efficiency comes with concerns about job displacement.
- Companies invest in tech; governments update regulations.
'''
    },
    {
        "input": "Summarize the following paragraph: 'Climate change is one of the most pressing issues of our time. Rising global temperatures are causing more frequent and severe weather events, such as hurricanes and wildfires. Efforts to mitigate climate change include reducing carbon emissions, transitioning to renewable energy sources, and protecting natural habitats.'",
        "output": '''
- Climate change leads to severe weather events.
- Mitigation efforts focus on reducing carbon emissions and using renewable energy.
- Protecting natural habitats is crucial.
'''
    },
    {
        "input": "Summarize the following paragraph: 'The education system is undergoing significant reforms to better prepare students for the future. Emphasis is being placed on critical thinking, problem-solving, and digital literacy. Schools are incorporating more technology into the classroom and offering courses that align with the demands of the modern workforce.'",
        "output": '''
- Education reforms focus on future readiness.
- Key skills: critical thinking, problem-solving, digital literacy.
- More technology and modern workforce-aligned courses in schools.
'''
    },
    {
        "input": "Summarize the following paragraph: 'The healthcare industry is seeing a shift towards personalized medicine. Advances in genetic research allow for treatments tailored to individual patients, improving outcomes and reducing side effects. This approach is particularly promising in the treatment of cancer and rare genetic disorders.'",
        "output": '''
- Healthcare is moving towards personalized medicine.
- Genetic research enables tailored treatments.
- Promising for cancer and rare genetic disorders.
'''
    },
    {
        "input": "Summarize the following paragraph: 'Remote work has become increasingly common, especially after the COVID-19 pandemic. Many companies have adopted flexible work policies, allowing employees to work from home or other locations. This shift has led to changes in workplace dynamics, with a greater emphasis on digital communication and collaboration tools.'",
        "output": '''
- Remote work is more common post-COVID-19.
- Companies adopt flexible work policies.
- Emphasis on digital communication and collaboration tools.
'''
    }
]

# get default azure credentials. You might need to run `az login` first.
credential = DefaultAzureCredential()

# get an openai token using the default credentials. Used to authenticate with openai
openai_token = credential.get_token("https://cognitiveservices.azure.com/.default").token

# create openai client
client = AzureOpenAI(
    azure_endpoint=AZURE_OPENAI_ENDPOINT,
    azure_deployment=DEPLOYMENT_CHAT,
    azure_ad_token=openai_token,
    api_version=OPENAI_API_VERSION
)

# create openai embeddings client
embedding = AzureOpenAIEmbeddings(
    azure_endpoint=AZURE_OPENAI_ENDPOINT,
    deployment=DEPLOYMENT_EMBEDDING,
    azure_ad_token=openai_token,
    openai_api_version=OPENAI_API_VERSION
)

# creates an example selector that uses semantic similarity to retrieve examples
example_selector = SemanticSimilarityExampleSelector.from_examples(
    EXAMPLES_LIST,          # list of examples to index
    embedding,              # embedding model to use
    InMemoryVectorStore,    # vector store to use
    k=3,                    # number of examples to retrieve
    input_keys=["input"],   # keys in the examples that contain the input text
)

while True:
    # gets user input
    user_input = input("Enter a sentence: ")

    # select the most relevant examples based on the user input
    res = example_selector.select_examples({"input": user_input})

    messages = [
        {"role": "system", "content": SYSTEM_MESSAGE},
    ]

    print('retrieved examples:')
    for ex in res:
        # adds each example input as a user message and the corresponding output as an assistant message
        messages.append({"role": "user", "content": ex['input']})
        messages.append({"role": "assistant", "content": ex['output']})
        print('question:{}\nanswer:{}'.format(ex['input'], ex['output']))

    # adds the user input as a user message
    messages.append({"role": "user", "content": user_input})

    # create a completion using the messages; reuse the chat deployment configured on the client
    response = client.chat.completions.create(
        messages=messages,
        model=DEPLOYMENT_CHAT,
    )

    # print the response from the model
    print('response: ', response.choices[0].message.content)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Code Execution&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;Prerequisites&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Python 3.10&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;An OpenAI GPT deployment (we suggest gpt-4o model)&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;An OpenAI Embedding deployment (we suggest&amp;nbsp;text-embedding-ada-002 model)&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Permission to use the Azure OpenAI service (more details &lt;A href="https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control" target="_self"&gt;here&lt;/A&gt;)&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;Install dependencies&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;PRE&gt;pip install -r requirements.txt&lt;/PRE&gt;
&lt;P&gt;&lt;STRONG&gt;Run&lt;/STRONG&gt;&lt;/P&gt;
&lt;PRE&gt;python main.py&lt;/PRE&gt;
&lt;P&gt;Below you can see the results of running the code with some different types of input. Notice that the retrieved examples section shows only 3 examples, which are the most relevant ones for that input.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="text"&gt;Enter a sentence: list me the biggest airplanes
retrieved examples:
question:Can you list the top 5 highest-grossing movies of all time?
answer:
| Rank | Movie Title             | Gross Revenue (USD) |
|------|-------------------------|---------------------|
| 1    | Avatar                  | $2.923 billion      |
| 2    | Avengers: Endgame       | $2.798 billion      |
| 3    | Titanic                 | $2.195 billion      |
| 4    | Star Wars: The Force Awakens | $2.068 billion |
| 5    | Avengers: Infinity War  | $2.048 billion      |
question:Can you provide the GDP of the top 5 largest economies in 2023?
answer:
| Rank | Country       | GDP (USD Trillions) |
|------|---------------|---------------------|
| 1    | United States | 25.3                |
| 2    | China         | 17.7                |
| 3    | Japan         | 4.9                 |
| 4    | Germany       | 4.2                 |
| 5    | India         | 3.5                 |
question:Can you provide the population of the top 5 most populous countries?
answer:
| Country       | Population (2023) |
|---------------|-------------------|
| China         | 1,425,000,000     |
| India         | 1,417,000,000     |
| United States | 332,000,000       |
| Indonesia     | 276,000,000       |
| Pakistan      | 231,000,000       |

response:  Certainly! Here are some of the biggest airplanes in terms of size and capacity:
| Rank | Aircraft                        | Description                                                                                               |
|------|---------------------------------|-----------------------------------------------------------------------------------------------------------|
| 1    | Antonov An-225 Mriya            | The largest cargo aircraft ever built, featuring 6 engines and a maximum takeoff weight of 640 tons       |
| 2    | Airbus A380                     | The largest passenger airliner, with a double-deck configuration and capacity for up to 853 passengers    |
| 3    | Boeing 747-8                    | One of the largest commercial aircraft, known as the "Queen of the Skies," with a capacity of 410-524 passengers |
| 4    | Lockheed C-5 Galaxy             | A large military transport aircraft used by the U.S. Air Force, with a maximum takeoff weight of 381 tons  |
| 5    | Boeing 777-9                    | The newest variant of the Boeing 777 series, featuring a longer fuselage and higher capacity, accommodating up to 426 passengers |
These aircraft are recognized for their impressive sizes and capabilities, serving various purposes from cargo transport to long-haul passenger flights.

Enter a sentence: classify the following text: F=ma is Newton's second law of motion
retrieved examples:
question:Classify the following text: 'E=mc^2 is a formula expressing the relationship between mass and energy.'
answer:Scientific Statement
question:Classify the following text: 'The quick brown fox jumps over the lazy dog.'
answer:Sentence
question:Classify the following text: 'To be, or not to be, that is the question.'
answer:Quote

response:  Scientific Statement

Enter a sentence: summarize the following text: Exercise offers numerous benefits that significantly enhance one’s quality of life. Regular physical activity helps maintain a healthy weight, reduces the risk of chronic diseases such as heart disease, diabetes, and certain cancers, and improves cardiovascular health. Additionally, exercise boosts mental health by reducing symptoms of depression and anxiety, enhancing mood, and improving sleep quality. It also increases energy levels, strengthens muscles and bones, and promotes better flexibility and balance. Overall, incorporating exercise into your daily routine can lead to a longer, healthier, and more fulfilling life. What kind of exercise do you enjoy the most?
retrieved examples:
question:Summarize the following paragraph: 'The healthcare industry is seeing a shift towards personalized medicine. Advances in genetic research allow for treatments tailored to individual patients, improving outcomes and reducing side effects. This approach is particularly promising in the treatment of cancer and rare genetic disorders.'
answer:
- Healthcare is moving towards personalized medicine.
- Genetic research enables tailored treatments.
- Promising for cancer and rare genetic disorders.
question:Summarize the following paragraph: 'The education system is undergoing significant reforms to better prepare students for the future. Emphasis is being placed on critical thinking, problem-solving, and digital literacy. Schools are incorporating more technology into the classroom and offering courses that align with the demands of the modern workforce.'
answer:
- Education reforms focus on future readiness.
- Key skills: critical thinking, problem-solving, digital literacy.
- More technology and modern workforce-aligned courses in schools.
question:Classify the following text: 'E=mc^2 is a formula expressing the relationship between mass and energy.'
answer:Scientific Statement

response:  - Exercise greatly enhances quality of life.
- Benefits: healthy weight, reduced chronic disease risk, improved cardiovascular and mental health, better sleep, increased energy, stronger muscles and bones, and improved flexibility and balance.
- Leads to a longer, healthier, and more fulfilling life.
- What kind of exercise do you enjoy the most?&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Next Steps&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Now that you are familiar with the dynamic few-shot technique, you can take it to the next level by:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Using a different vector store&lt;/LI&gt;
&lt;LI&gt;Using a different embedding model&lt;/LI&gt;
&lt;LI&gt;Updating the examples/prompt to support a new task&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Conclusion&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The dynamic few-shot prompt technique represents a significant improvement over traditional few-shot learning. By leveraging a vector store and an embedding model, this method ensures that only the most relevant examples are included in the prompt, optimizing its relevance, size and ultimately cost. This approach not only maintains the efficiency and effectiveness of few-shot learning but also enhances the model’s ability to generate accurate and contextually appropriate responses. As a result, users can achieve high-quality outcomes with minimal data, making this technique a powerful tool for a wide range of applications.&lt;/P&gt;
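&lt;P&gt;As a rough sketch of that selection step, the following uses a toy bag-of-words similarity in place of a real embedding model and vector store; the example pool and helper names are illustrative, not the article's actual implementation:&lt;/P&gt;

```python
from collections import Counter
from math import sqrt

# Toy bag-of-words "embedding" standing in for a real embedding model;
# a production setup would call an embedding API and query a vector store.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Small example pool in the spirit of the demo transcript above.
EXAMPLES = [
    {"question": "Classify the following text: 'E=mc^2 is a formula.'",
     "answer": "Scientific Statement"},
    {"question": "Classify the following text: 'To be, or not to be.'",
     "answer": "Quote"},
    {"question": "Summarize the following paragraph: 'The healthcare industry is shifting.'",
     "answer": "- Healthcare is moving towards personalized medicine."},
]

def build_prompt(user_input, k=2):
    # Rank stored examples by similarity to the input and keep only the top k,
    # so the prompt stays small and relevant.
    q = embed(user_input)
    ranked = sorted(EXAMPLES, key=lambda ex: cosine(q, embed(ex["question"])),
                    reverse=True)
    lines = [f"question:{ex['question']}\nanswer:{ex['answer']}" for ex in ranked[:k]]
    return "\n".join(lines) + f"\nquestion:{user_input}\nanswer:"

prompt = build_prompt("classify the following text: F=ma is Newton's second law")
```

&lt;P&gt;With a classification input, only the two classification examples are retrieved; the summarization example is left out of the prompt.&lt;/P&gt;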
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;References&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://python.langchain.com/v0.1/docs/use_cases/sql/agents/#using-a-dynamic-few-shot-prompt" target="_self"&gt;Langchain SQL Agent - Using dynamic few-shot prompt&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
</description>
      <pubDate>Wed, 04 Sep 2024 21:35:38 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/leveraging-dynamic-few-shot-prompt-with-azure-openai/ba-p/4225235</guid>
      <dc:creator>franklinlindemberg</dc:creator>
      <dc:date>2024-09-04T21:35:38Z</dc:date>
    </item>
    <item>
      <title>Microsoft Fabric Metadata Driven Pipelines with Mirrored Databases</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/microsoft-fabric-metadata-driven-pipelines-with-mirrored/ba-p/4222480</link>
      <description>&lt;P&gt;Microsoft Fabric's database mirroring feature is a game changer for organizations using Azure SQL DB, Azure Cosmos DB or Snowflake in their cloud environment! Mirroring offers near real-time replication with just a few clicks AND at no cost!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Features:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;No data movement cost when mirroring&lt;/LI&gt;
&lt;LI&gt;No storage cost for mirrored tables&lt;/LI&gt;
&lt;LI&gt;No consumption of Fabric Capacity Units&lt;/LI&gt;
&lt;LI&gt;Mirror all tables in your source database or just a few, with the capability to add more tables as your Fabric analytics environment grows&lt;/LI&gt;
&lt;LI&gt;Source data continuously replicated with no data pipelines to configure&lt;/LI&gt;
&lt;LI&gt;Data is landed in Delta tables in OneLake, which are optimized by default&lt;/LI&gt;
&lt;LI&gt;SQL endpoint and default semantic model automatically created&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG style="font-family: inherit;"&gt;What can you do?&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Run near real-time queries against the SQL endpoint with no impact to your source system since the data is replicated&lt;/LI&gt;
&lt;LI&gt;Share the mirrored database across your Fabric tenant&lt;/LI&gt;
&lt;LI&gt;Run cross database queries from within the mirrored database SQL endpoint or from One Lake when the mirrored database is shared&lt;/LI&gt;
&lt;LI&gt;Create SQL views over mirrored data, which can include joins or unions with data from other mirrored databases, warehouses or lakehouse SQL endpoints&lt;/LI&gt;
&lt;LI&gt;Configure row-level security and object level security against the mirrored data&lt;/LI&gt;
&lt;LI&gt;Create Direct Lake near real-time Power BI reports against the default semantic model or against a new model&lt;/LI&gt;
&lt;LI&gt;Copy mirrored data into a lakehouse for use in Spark notebooks or Data Science workloads&lt;/LI&gt;
&lt;LI&gt;And… build faster, simpler metadata driven pipelines! Which is the focus of this article!&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Metadata Driven Pipelines with Microsoft Fabric Mirrored Databases&lt;/H3&gt;
&lt;P&gt;Metadata-driven pipelines in Microsoft Fabric enable you to streamline data ingestion and transformations with minimal coding, lower maintenance, and enhanced scalability. And when your source is Azure SQL DB, Azure Cosmos DB, or Snowflake, this becomes even easier!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Architecture Overview&lt;/H4&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;With Mirroring, the data is replicated and readily available in OneLake. This eliminates the pipeline which brings data into Fabric. After mirroring is configured:
&lt;OL class="lia-list-style-type-lower-alpha"&gt;
&lt;LI&gt;A SQL endpoint is automatically created, allowing you and your users to query the data in near real time without impacting source data performance&lt;/LI&gt;
&lt;LI&gt;A default semantic model is also automatically created, allowing you and your users to create near real time Power BI reports.&amp;nbsp;However, best practice is to create a new semantic model, which provides more features and options than the default semantic model&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;A Fabric Data Warehouse is created to contain:
&lt;OL class="lia-list-style-type-lower-alpha"&gt;
&lt;LI&gt;The fact and dimension tables (star schema) which simplifies and optimizes the analytical data model for SQL querying and semantic models&lt;/LI&gt;
&lt;LI&gt;SQL views and stored procedures used in data transformations&lt;/LI&gt;
&lt;LI&gt;The metadata table which holds the information on how to transform and load each fact or dimension table&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;The Fabric Data Pipelines are created and scheduled to perform:
&lt;OL class="lia-list-style-type-lower-alpha"&gt;
&lt;LI&gt;A Lookup activity on the metadata table to get the information on how to load each fact or dimension table in the Fabric Data Warehouse&lt;/LI&gt;
&lt;LI&gt;Copy Data activities for full loads with the source being a SQL view over the mirrored tables and the destination a table in the Fabric Data Warehouse&lt;/LI&gt;
&lt;LI&gt;Stored Procedure activities for incremental loads to merge the latest data from the mirrored data source into the Data Warehouse destination table&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;A default semantic model is automatically created over the Fabric Data Warehouse. But create a new one per best practices&lt;/LI&gt;
&lt;LI&gt;Build Power BI reports for analytical reporting&lt;/LI&gt;
&lt;/OL&gt;
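&lt;P&gt;The Lookup/ForEach/dispatch flow in step 3 can be sketched in plain Python. The metadata column names below are hypothetical stand-ins for the real table schema:&lt;/P&gt;

```python
from datetime import datetime, timezone

# Illustrative metadata rows like those the Lookup activity returns;
# column names here are assumptions, not the article's exact schema.
metadata = [
    {"TableName": "DimCustomer", "LoadType": "full",
     "SourceView": "vw_DimCustomer"},
    {"TableName": "FactSale", "LoadType": "incremental",
     "StoredProc": "usp_MergeFactSale", "LastLoadDate": "2024-08-01T00:00:00"},
]

def run_orchestrator(rows):
    # Set variable: capture the pipeline start time, logged with each table load.
    pipeline_start = datetime.now(timezone.utc).isoformat()
    log = []
    for row in rows:  # ForEach: invoke the load pipeline per metadata row
        if row["LoadType"] == "full":
            # Copy Data: full load from a view over the mirrored tables
            log.append(f"COPY {row['SourceView']} -> {row['TableName']} "
                       f"(started {pipeline_start})")
        else:
            # Stored Procedure: merge changes since the last incremental load
            log.append(f"EXEC {row['StoredProc']} "
                       f"@StartDate='{row['LastLoadDate']}', @EndDate=NULL")
    return log

log = run_orchestrator(metadata)
```

&lt;P&gt;Adding a new table to the load then only requires a new metadata row, not a new pipeline.&lt;/P&gt;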
&lt;H4&gt;Why 2 semantic models? Fabric Data Warehouse vs Mirrored Database SQL Endpoint for semantic model reporting&lt;/H4&gt;
&lt;P&gt;For this architecture, I considered using:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Just the mirrored tables&lt;/LI&gt;
&lt;LI&gt;SQL views over the mirrored tables&lt;/LI&gt;
&lt;LI&gt;A new Fabric Data Warehouse with data loaded from the mirrored tables&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Since SQL views over mirrored tables always resort to Direct Query rather than Direct Lake, I decided against building a semantic model over SQL views. Instead, I created a semantic model over the mirrored tables, plus a separate Fabric Data Warehouse and a semantic model over it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The semantic model over the mirrored tables:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Is used only by users who understand the source data schema&lt;/LI&gt;
&lt;LI&gt;Allows near real-time access to data for ad-hoc questions that are not analytical in nature, such as the shipment status of a particular order number or the current stock availability of a specific product at a certain location&lt;/LI&gt;
&lt;LI&gt;Incurs no data storage cost, though savings may be offset by consumption of Capacity Units if reports/queries are complex&lt;/LI&gt;
&lt;LI&gt;Can leverage Power BI Direct Lake connection for faster report performance if the report is not too complex; if the query is too complex, it will resort to Direct Query&lt;/LI&gt;
&lt;LI&gt;Could include other lakehouses, mirrored databases, or data warehouse tables in the SQL endpoint and thus the semantic model, but Power BI reports using those tables will always resort to Direct Query rather than a Direct Lake connection&lt;/LI&gt;
&lt;LI&gt;Can include complex relationships between tables, and unexpected results may be returned if the semantic model and/or reports are not configured correctly&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The semantic model over the Fabric Data Warehouse:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Requires scheduled data refreshes but will be relatively fast since the source data is already in Fabric&lt;/LI&gt;
&lt;LI&gt;Is best for more analytical questions such as "What was our sales revenue by month by customer location?" or "What is our days on hand for products in a particular shipping warehouse?"&lt;/LI&gt;
&lt;LI&gt;Allows for user friendly data warehouse table and column names rather than using the potentially cryptic mirrored database table and column names&lt;/LI&gt;
&lt;LI&gt;Eliminates snowflake schema, allowing for better performing reports and delivery of consistent results without having to understand complex relationships and filtering rules&lt;/LI&gt;
&lt;LI&gt;Is more likely to leverage Direct Lake connection in Power BI reports since model is simpler&lt;/LI&gt;
&lt;LI&gt;Allows other data sources to be loaded into the same warehouse, eliminating cross database joins in reports that automatically resort to direct query&lt;/LI&gt;
&lt;LI&gt;Leverages a simpler star schema model resulting in faster reports with less consumption of capacity units&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;This solution addresses two key use cases: providing near real-time responses to queries about specific data transactions and statuses, and delivering rapid analytics over large datasets. Continue reading to learn how to implement this architecture in your own environment.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;Solution details&lt;/H3&gt;
&lt;P&gt;Below are detailed steps to build the metadata pipeline. The data source is the Wide World Importers SQL database, which you can download &lt;A href="https://learn.microsoft.com/en-us/sql/samples/wide-world-importers-oltp-install-configure?view=sql-server-ver16" target="_self"&gt;here&lt;/A&gt;. Then follow &lt;A href="https://learn.microsoft.com/en-us/sql/samples/wide-world-importers-oltp-install-configure?view=sql-server-ver16#azure-sql-database" target="_self"&gt;the instructions&lt;/A&gt; to import into an Azure SQL DB.&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Configure database mirroring&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;From the Synapse Data Warehouse experience, choose the Mirrored Azure SQL DB Option:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Then choose the tables to mirror:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;After the mirroring has started, the main canvas will say “Mirrored Azure SQL Database is running’; Click on &lt;STRONG&gt;Monitor&lt;/STRONG&gt; &lt;STRONG&gt;replication &lt;/STRONG&gt;to see the number of rows replicated and the last completion time:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;At this point, both a SQL analytics endpoint and default semantic model are created (1a). But I created a new semantic model(1b) and set up the table relationships:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;2.&amp;nbsp;Create a Fabric Data Warehouse&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Create the fact tables or any other tables that will be incrementally loaded (2a). You can also manually create the dimension tables or tables are full loaded OR you can specify to auto-create the tables in the pipeline Copy Data activity, as I do later.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Create the views over the mirrored database tables and stored procedures (2b):&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Create and load the metadata table (2c) with information on how to load each fact or dimension table:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;3. Create the data pipelines to load the fact and dimension tables&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Below is the orchestrator pipeline:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Set variable&lt;/STRONG&gt; – set pipelinestarttime to the current date/time. This is logged in the metadata driven pipeline table for each table&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Lookup&lt;/STRONG&gt; – get the table load attributes from metadata table for each table to load&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;For each&lt;/STRONG&gt; table to load&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Invoke the pipeline&lt;/STRONG&gt;, passing in the current row object and the date/time the orchestrator pipeline started:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Load warehouse table pipeline&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Set variable&lt;/STRONG&gt; pipeline start time for tracking the time of each table load&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;If activity&lt;/STRONG&gt; – check if full or incremental load&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;If full load&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Use Copy Data Activity to load the Data Warehouse table, set the pipeline end time variable and update the metadata table with load information&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Copy data activity&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Source settings reference the view over the mirrored database:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Destination settings reference data warehouse table:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Note that the data warehouse table will be dropped and re-created each time&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Set the pipeline end time variable&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Run Script&lt;/STRONG&gt; to update the pipeline run details for this table&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;If&amp;nbsp;not a full load&lt;/STRONG&gt;, then run the incremental load activities&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Lookup&lt;/STRONG&gt; activity calls a stored procedure to insert or update new or changed records into the destination table.&amp;nbsp;The value for the StartDate parameter is the latest date of the previous load of this table. The value for the EndDate parameter is usually a null value and only set if there is a need to load or reload a subset of data.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;The stored procedure performs an insert or update, depending upon whether or not the key value exists in the destination. Only the records from the source that have changed since the last table load are selected. &amp;nbsp;This reduces the number of updates performed.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;The stored procedure returns how many rows were inserted or updated, along with the latest transaction data of the data loaded, which is needed for the next incremental load.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Set&lt;/STRONG&gt; the pipeline end time&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Script&lt;/STRONG&gt; activity updates the table load details:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;4.&amp;nbsp;Build a new semantic model in the Fabric/Power BI service&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Create relationships between the tables, DAX calculations, dimension hierarchies, display formats – anything you need for your analytics&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Note that all tables have Direct Lake connectivity as noted by the dashed, blue line. Direct Lake has the performance of Import semantic models without the overhead of refreshing the data.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;5. Create reports&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Create reports from your semantic model&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Continue building out more reports and dashboards, setting up security, scheduling data warehouse refresh (which will now be super fast since the source data is already in Fabric), creating apps, adding more data sources – whatever it takes to get the analytics your organization needs into Fabric!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/database/mirrored-database/overview" target="_blank"&gt;Mirroring - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-warehouse/data-warehousing" target="_blank"&gt;What is data warehousing in Microsoft Fabric? - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-factory/" target="_blank"&gt;Data Factory in Microsoft Fabric documentation - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/training/paths/work-semantic-models-microsoft-fabric/" target="_blank"&gt;Work with semantic models in Microsoft Fabric - Training | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-warehouse/reports-power-bi-service" target="_blank"&gt;Create reports in the Power BI - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-warehouse/dimensional-modeling-overview" target="_blank"&gt;Dimensional modeling in Microsoft Fabric Warehouse - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If your source database is not supported for mirroring (yet!), check out these other articles I wrote:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/t5/fasttrack-for-azure/metadata-driven-pipelines-for-microsoft-fabric/ba-p/3891651" target="_blank"&gt;Metadata Driven Pipelines for Microsoft Fabric - Microsoft Community Hub&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/t5/fasttrack-for-azure/metadata-driven-pipelines-for-microsoft-fabric-part-2-data/ba-p/3906749" target="_blank"&gt;Metadata Driven Pipelines for Microsoft Fabric – Part 2, Data Warehouse Style - Microsoft Community Hub&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
</description>
      <pubDate>Mon, 26 Aug 2024 21:59:25 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/microsoft-fabric-metadata-driven-pipelines-with-mirrored/ba-p/4222480</guid>
      <dc:creator>jehayes</dc:creator>
      <dc:date>2024-08-26T21:59:25Z</dc:date>
    </item>
    <item>
      <title>Migrating from Azure APIM STv1 to STv2: New Options and Considerations</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/migrating-from-azure-apim-stv1-to-stv2-new-options-and/ba-p/4227560</link>
      <description>&lt;P&gt;As support for the Azure API Management (APIM) STv1 platform ends on August 31, 2024, it’s crucial for customers to migrate their instances to the STv2 platform. This blog focuses on the new migration options introduced to facilitate this process.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why Migrate to STv2?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;With the end of support for STv1, instances on this platform will no longer have a Service Level Agreement (SLA). Migrating to STv2 ensures continued support and access to the latest features and improvements in Azure APIM.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;New Migration Options&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Over the past year, several limitations in the migration process have been addressed to make it easier for instances injected into a virtual network. Here are the key improvements:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Portal Experience&lt;/STRONG&gt;: Enhanced user interface for a smoother migration process.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Public IP Optional&lt;/STRONG&gt;: The service can now provision a managed IP, making the public IP optional.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Retain Old Gateway&lt;/STRONG&gt;: Ability to keep the old gateway for a longer duration for validation purposes.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Release Old Subnet&lt;/STRONG&gt;: Option to release the old subnet sooner for customers who need to revert.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;Networking Dependencies&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;One of the biggest challenges in the migration process has been networking dependencies, particularly the need for new subnets and IP changes. The latest migration option addresses this by allowing the retention of original IPs, both public and private.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key Considerations for the New Migration Option&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Subnet Capacity&lt;/STRONG&gt;: The subnet must have enough capacity to accommodate the STv2 instance. This means the subnet should be at least half empty to allow the creation of a new STv2 gateway alongside the STv1 gateway.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Coexistence of Instances&lt;/STRONG&gt;: If the subnet contains other APIM instances, they should be migrated as soon as possible to avoid conflicts during scaling or update operations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Subnet Delegation: &lt;/STRONG&gt;The subnet cannot have any deployed or delegated resources. Ensure that any delegations are removed ahead of the migration.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Disable Scaling Rules&lt;/STRONG&gt;: To prevent issues during the migration, disable all scaling rules. The default coexistence period is 15 minutes for external VNet injected instances and 4 hours for internal VNet injected instances. If multiple STv1 instances exist in the same subnet, disable scaling rules across all instances until migration is complete.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Networking Settings&lt;/STRONG&gt;: STv2 requires additional networking settings compared to STv1. Ensure that traffic to Azure is allowed in the existing Network Security Group (NSG), Network Virtual Appliances (NVAs), and other networking controls. This includes:&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Adding an outbound rule to Azure KeyVault in the NSG.&lt;/LI&gt;
&lt;LI&gt;Adding a service endpoint to Azure KeyVault on the subnet if force tunneling through an NVA.&lt;/LI&gt;
&lt;LI&gt;Allowing traffic to Azure KeyVault from the subnet address space at the NVA.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/OL&gt;
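&lt;P&gt;The subnet capacity rule from point 1 can be expressed as a rough back-of-the-envelope check. This is illustrative only (the &lt;EM&gt;Verify&lt;/EM&gt; button in the migration UI performs the real validation); the function name and inputs are assumptions:&lt;/P&gt;

```python
import ipaddress

# Azure reserves 5 IP addresses in every subnet.
def has_room_for_stv2(subnet_cidr, used_ips, azure_reserved=5):
    """Rough check that the subnet is at least half empty, so the new STv2
    gateway can be created alongside the existing STv1 gateway."""
    usable = ipaddress.ip_network(subnet_cidr).num_addresses - azure_reserved
    # "At least half empty": current usage must fit in half the usable space.
    return used_ips <= usable / 2

ok = has_room_for_stv2("10.0.1.0/24", used_ips=40)     # large subnet, light use
tight = has_room_for_stv2("10.0.1.0/27", used_ips=20)  # small subnet, heavy use
```

&lt;P&gt;A /24 with 40 addresses in use has ample headroom, while a /27 with 20 in use does not and would need to be emptied or resized before migration.&lt;/P&gt;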
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Migration Options Within the Same Subnet&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Preserve Original IP Addresses&lt;/STRONG&gt;: This option retains the original public and private IPs, but involves downtime while the IPs are transferred from the old gateway to the new gateway.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;New IP Addresses&lt;/STRONG&gt;: This option uses new public IPs, which are pre-created to allow for network dependency adjustments and communication with partners. It also allows specifying the retention time of the old gateway for internal VNet injected instances, providing extended time for validation and updating network dependencies.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Migration Process&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The migration process involves creating a new STv2 gateway alongside the existing STv1 gateway in the same subnet. Here are the detailed steps:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Create New Gateway: &lt;/STRONG&gt;The migration process creates a new STv2 gateway in the same subnet as the old gateway. The old gateway continues to handle traffic using custom DNS settings.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Preserve IP Option:&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;UL&gt;
&lt;LI&gt;The IPs are transferred from the old gateway to the new gateway, resulting in a brief downtime as the IPs will not respond to traffic.&lt;/LI&gt;
&lt;LI&gt;The old gateway is deleted after the migration is successful.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;&lt;STRONG&gt;New IP Option:&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Pre-created public IPs are assigned to the new gateway.&lt;/LI&gt;
&lt;LI&gt;The old gateway is retained for 15 minutes for external VNet injected instances and 4 hours for internal VNet injected instances.&lt;/LI&gt;
&lt;LI&gt;Validation activities and DNS updates can be performed during the retention period to achieve a no-downtime migration.&lt;/LI&gt;
&lt;LI&gt;The old gateway is deleted after the retention period elapses.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Additional Considerations&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Infrastructure Configuration Lock&lt;/STRONG&gt;: The infrastructure configuration will be locked for the entire duration of the migration.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Downtime:&lt;/STRONG&gt; The preserve IP option will have a downtime during the IP transfer. The new IP option avoids this downtime by using pre-created public IPs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Networking:&lt;/STRONG&gt; An NSG with the rules required for stv2 must be attached to the subnet. Existing subnets also need an additional outbound rule allowing traffic to Azure Key Vault.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Multi-Region:&lt;/STRONG&gt; There is no option to upgrade locations selectively. A single operation orchestrates upgrading all the regions one at a time.&lt;/LI&gt;
&lt;/UL&gt;
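&lt;P&gt;The networking prerequisites above can be sketched with Azure CLI. This is a minimal illustration, not the complete rule set: the resource names (my-rg, my-vnet, apim-nsg, apim-subnet) are placeholders, and you should confirm the full list of required rules for your VNet mode in the API Management networking documentation.&lt;/P&gt;

```shell
# Sketch only: placeholder names; assumes an existing resource group and VNet.
az network nsg create --resource-group my-rg --name apim-nsg

# Inbound: API Management control plane to the gateway (required for stv2).
az network nsg rule create --resource-group my-rg --nsg-name apim-nsg \
  --name AllowApimManagement --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes ApiManagement --source-port-ranges '*' \
  --destination-address-prefixes VirtualNetwork --destination-port-ranges 3443

# Outbound: the additional Azure Key Vault dependency called out above.
az network nsg rule create --resource-group my-rg --nsg-name apim-nsg \
  --name AllowKeyVaultOutbound --priority 100 --direction Outbound --access Allow \
  --protocol Tcp --source-address-prefixes VirtualNetwork --source-port-ranges '*' \
  --destination-address-prefixes AzureKeyVault --destination-port-ranges 443

# Attach the NSG to the API Management subnet.
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
  --name apim-subnet --network-security-group apim-nsg
```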
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Verification and Monitoring&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Verify Networking&lt;/STRONG&gt;: The new UI includes a &lt;EM&gt;Verify&lt;/EM&gt; button to check if the network meets the requirements. This static check looks for NSGs, service endpoints, and DNS configurations but does not check blocks at the NVA level, which need to be verified manually.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Diagnose and Solve Problems&lt;/STRONG&gt;: Additional detectors are available to monitor the status of the migration.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Migrating from STv1 to STv2 is essential to ensure continued support and access to the latest features in Azure APIM. The new migration options significantly simplify the process by addressing key challenges, particularly networking dependencies. By following the considerations and steps outlined above, customers can achieve a smooth and successful migration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Refer to the Microsoft Learn documentation for the complete list of migration options, including the &lt;A href="https://learn.microsoft.com/en-us/azure/api-management/migrate-stv1-to-stv2-vnet#migration-script" target="_blank" rel="noopener"&gt;same-subnet migration using Azure CLI&lt;/A&gt;.&lt;/P&gt;
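&lt;P&gt;As a rough sketch of what the CLI-driven migration looks like, the same-subnet upgrade is invoked through the migrateToStv2 action on the service resource. The api-version and mode values below come from the preview documentation and may change, so verify them against the linked Learn page before running anything; the subscription, resource group, and service names are placeholders.&lt;/P&gt;

```shell
# Sketch only: substitute your own subscription ID, resource group, and service name.
APIM_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>"

# Preserve-IP mode: original IPs move to the new gateway (brief downtime).
az rest --method post \
  --uri "https://management.azure.com${APIM_ID}/migrateToStv2?api-version=2023-03-01-preview" \
  --body '{"mode": "PreserveIp"}'

# New-IP mode: pre-created IPs; old gateway retained for validation (no downtime).
az rest --method post \
  --uri "https://management.azure.com${APIM_ID}/migrateToStv2?api-version=2023-03-01-preview" \
  --body '{"mode": "NewIP"}'
```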
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5" color="#CF3600"&gt;&lt;STRONG&gt;!!!&lt;/STRONG&gt;&lt;/FONT&gt; &lt;STRONG&gt;Note&lt;/STRONG&gt;: The portal experience is not yet fully available and is expected to be released soon.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I hope this blog helps you understand the new migration options and considerations for moving from Azure APIM STv1 to STv2. Feel free to reach out with any questions or for further assistance with your migration!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 23 Aug 2024 22:06:40 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/migrating-from-azure-apim-stv1-to-stv2-new-options-and/ba-p/4227560</guid>
      <dc:creator>srinipadala</dc:creator>
      <dc:date>2024-08-23T22:06:40Z</dc:date>
    </item>
    <item>
      <title>Explaining Purview concepts: Domains, Business Domains, Collections, Data Products and Data Assets.</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/explaining-purview-concepts-domains-business-domains-collections/ba-p/4217157</link>
      <description>&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Microsoft Purview offers a comprehensive suite of tools for governing your organization's data through the solutions included in Purview Unified Platform.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;To catalog your &lt;STRONG&gt;data assets&lt;/STRONG&gt;, you must first define your &lt;STRONG&gt;data map&lt;/STRONG&gt;, which is composed of &lt;STRONG&gt;collections&lt;/STRONG&gt;. However, you might encounter several related concepts that can be confusing when trying to streamline your organization’s data governance.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;In this article, we will explain the following concepts: Domains, Business Domains, Collections, Data Products, and Data Assets. We will also provide examples of how to use them correctly in your organization.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;Why “domains” and “business domains” in Purview?&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;In the Classic Purview experience, an organization can have multiple accounts in its tenant.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;The Unified Purview Portal is intended to manage a single Microsoft Purview resource with multiple domains for your organization, and multiple accounts don’t exist in a single tenant.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;The goal is to replace &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;multiple tenant accounts&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; with &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;multiple domains &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;within a single 
tenant.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;&lt;SPAN&gt;Domains&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; will continue to address the problems that accounts solve today and will also serve as the container for the collections and assets.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Every Microsoft Purview Data Map starts with a &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;default domain.&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;The next figure shows the structure of a domain in the new Microsoft Purview Portal/Data Map.&lt;/P&gt;&lt;P class="lia-align-left"&gt;The &lt;STRONG&gt;&lt;EM&gt;+New domain&lt;/EM&gt;&lt;/STRONG&gt; option appears disabled for many users, but if you are a &lt;STRONG&gt;Purview Administrator&lt;/STRONG&gt; you can add up to four more custom domains in your Microsoft Purview Data Map.&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/purview/how-to-create-and-manage-domains-collections" target="_blank" rel="noopener"&gt;How to manage domains and collections | Microsoft Learn&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When you add a new domain, you can add new collections as well:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;In the figure above, you can see a &lt;STRONG&gt;Customer Service&lt;/STRONG&gt; domain and an &lt;STRONG&gt;Operations&lt;/STRONG&gt; domain, in addition to the default domain.&lt;/P&gt;&lt;P class="lia-align-left"&gt;New collections can be added under the custom domains, just as they are under the default domain.&lt;/P&gt;&lt;P class="lia-align-left"&gt;The new experience uses a single, primary Microsoft Purview account that represents a tenant-level/organization-wide account. 
In our example, this is “FabricPurviewDemo”, the name of our Purview Resource subscription, for managing our tenant towards unifying organization's governance, policy, compliance, risk, and security.&lt;/P&gt;&lt;P class="lia-align-left"&gt;If you already have multiple accounts in the classic experience, you'll select a primary account when you upgrade to the new experience. &lt;A href="https://learn.microsoft.com/en-us/purview/new-governance-experience" target="_blank" rel="noopener"&gt;Get ready for the next enhancement in Microsoft Purview governance solutions | Microsoft Learn&lt;/A&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;After you upgrade an existing account to the new experience, all other Microsoft Purview accounts in your tenant will continue to be accessed via the classic portal. Fine grained access control via roles and permissions at collection scopes will continue to function as-is after your accounts are upgraded. In addition, there are new tenant-level roles that can be managed in the new portal.&lt;/P&gt;&lt;P class="lia-align-left"&gt;The Data Map view looks like the next figure shows, with three domains:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;This&amp;nbsp;Data Map contains collections under the &lt;STRONG&gt;default domain (FabricPurviewDemo)&lt;/STRONG&gt;.&lt;/P&gt;&lt;P class="lia-align-left"&gt;We have three &lt;STRONG&gt;collections&lt;/STRONG&gt; for &lt;STRONG&gt;scanning data assets&lt;/STRONG&gt; in the desired sources, they are: &lt;STRONG&gt;Diseases, Human&lt;/STRONG&gt; &lt;STRONG&gt;Resources&lt;/STRONG&gt; and &lt;STRONG&gt;Finance&lt;/STRONG&gt;. 
You can only scan one data source into a single collection.&lt;/P&gt;&lt;P class="lia-align-left"&gt;In this figure you can notice &lt;STRONG&gt;Fabric&lt;/STRONG&gt; as a data source for the &lt;STRONG&gt;Diseases&lt;/STRONG&gt; Collection.&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;Human Resources&lt;/STRONG&gt; and &lt;STRONG&gt;Finance&lt;/STRONG&gt; are collections for scanning other data sources.&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;Operations&lt;/STRONG&gt; and &lt;STRONG&gt;Customer Services&lt;/STRONG&gt; are domains at the same level as the default domain and they don’t have collections added yet.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;Business Domains&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;While &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;collections&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; are intended to accept data assets directly from data sources through scanning processes, which is meaningless to business owners, &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;business domains&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; provide boundaries where governance policies reach data products. The goal is to empower an enterprise domain owner to manage their data products and concepts and to establish rules for their access, use, and distribution. 
With this goal in mind, you could establish many types of business domains:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;SPAN&gt;Fundamental business areas - human resources, sales, finance, supply chain, etc.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;UL&gt;&lt;LI&gt;&lt;SPAN&gt;Overarching subject areas - product, parts, etc.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;UL&gt;&lt;LI&gt;&lt;SPAN&gt;Boundaries based on organizational functions - customer experience, cloud supply chain, business intelligence, etc.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Business Domains in Microsoft Purview provide a way to curate the data assets that are scanned in the Data Map solution. &lt;/SPAN&gt;&lt;A href="https://www.youtube.com/watch?v=4EsxbnnEAvU" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Curate your data with Business Concepts (youtube.com)&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Go to Data Catalog. Under Data Management, select Business Domains.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Next figure shows several Business Domains defined in our organization.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Figure above shows the use of &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;Data estate mappings&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; to assign collections to the selected Business Domain. 
You can select more than one collection to manage the scanned data assets of the selected collections as part of this business domain.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;You can further move data assets between business domains as needed.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Some data sources permit scanning with a scoped approach, while others mandate full scans. Scoped scanning simplifies assigning collections to business domains, reducing the need to relocate assets, as full scans import a larger set of data assets.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;The &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;Details&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; tab shows the Name, Type, Owner, Status (published, draft), Data quality score and Health actions for this business domain:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;The &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;Roles&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; tab allows you to define business domain owners, business domain readers, data product owners, data stewards, data catalog readers, data quality stewards, data quality readers and other profiles.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;To create a Business Domain, press the +New Business domain button. 
Then you will see a window like this:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;A business domain must have a name, a description, and a type (Functional Unit, Line of business, Data Domain, Regulatory or Project).&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;“Finance” and “Human Resources” are of type “Functional Unit” and are intended to group data assets discovered from Azure Databases containing several master tables about our clients, contracts, workers, incomes and expenses.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Our business domain "Diseases" is of type "Data Domain" and is intended to group data assets discovered in our Fabric tenant, which contains some reports, semantic models, data pipelines, two data warehouses, and a Lakehouse. All these data assets will be maintained with policies and roles to control access within our organization and allow stakeholders to manage them using medical science terms.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;On your business domain's detail page, you can see the&amp;nbsp;&lt;STRONG&gt;Business concepts&lt;/STRONG&gt;&amp;nbsp;section. 
Here you can see your &lt;EM&gt;data products&lt;/EM&gt;, &lt;EM&gt;glossary terms&lt;/EM&gt;, &lt;EM&gt;objective and key results&lt;/EM&gt; (OKRs),&amp;nbsp;&lt;EM&gt;critical data elements &lt;/EM&gt;and &lt;EM&gt;custom attributes&lt;/EM&gt;.&amp;nbsp;You can go through the cards to create all of them about a specific business domain.&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;Let's continue with these concepts and some examples about them.&lt;/P&gt;&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;Data Products&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;A &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;data product&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; is a set of data assets discovered in one or more data sources that serve a meaningful business purpose and support end users' specific needs.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Managing &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN&gt;data as a product&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN&gt; offers numerous benefits for both businesses and users, among these are: &lt;/SPAN&gt;&lt;SPAN&gt;facilitate cross-functional collaboration and reduce the time and effort to find, process and analyze data by owners.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;The data product provides context for these assets, grouping them under a use case for data consumers. A business domain can house many data products, but a data product is managed by a single business domain. 
&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Next figure shows the details about the Diseases Business Domain at the right.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Selecting &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;Go to&lt;/SPAN&gt;&lt;/STRONG&gt; &lt;STRONG&gt;&lt;SPAN&gt;data products&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; and + &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;New data product&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; you can define as many Data Products as you need for that business domain.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/purview/how-to-create-manage-data-products" target="_blank" rel="noopener"&gt;How to create and manage data products (Preview) | Microsoft Learn&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Next figure shows adding a new Data Product in the Diseases Business Domain.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;In this tab you must define a name, a description, the type and the owner. 
The selected &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN&gt;type&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; of our data product can be “Dataset”, according to the type of data assets this data product will contain.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;In the second tab you must define Business Details.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;At this point you can follow a wizard to define the purpose and other details about this data product. The last wizard screen allows you to add data assets from the assigned collections or define the access policy.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Next figure shows adding data assets:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;You can filter data by type on the left and add the desired data assets by checking them.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Depending on the type, Microsoft Purview finds all data assets of the type defined for the Data Product. 
You can apply filters and move through pages to select the appropriate data assets.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;The next figure shows the Data Product created in the Diseases Business Domain:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;On the same screen, you can observe three data assets and five glossary terms added to this data product. &lt;STRONG&gt;Glossary terms&lt;/STRONG&gt; are defined for the entire business domain. You must select the appropriate terms from this set to be used by users when exploring the organization’s data estate.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;OKRs (Objective-Key-Results)&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/SPAN&gt; are defined for the business domain and are linked to Data Products to provide context.&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;One OKR was defined as “Decrease the number of sick people in most common diseases”. 
It was linked to "Sick people by diseases and regions".&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;One &lt;STRONG&gt;contact&lt;/STRONG&gt; was defined to be&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;available to users for &lt;/SPAN&gt;&lt;SPAN class=""&gt;ask&lt;/SPAN&gt;&lt;SPAN class=""&gt;ing&lt;/SPAN&gt;&lt;SPAN class=""&gt; about this data product&lt;/SPAN&gt;&lt;SPAN class=""&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;Critical Data Elements &lt;/STRONG&gt;are&amp;nbsp;columns from data assets that are critical pieces of information that are necessary for decision making, and so need to be governed with the highest care. They are defined at business domain level in the Business Domain's Details tab, with data quality purpose.&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/purview/how-to-create-manage-critical-data" target="_blank" rel="noopener"&gt;How to create and manage critical data (Preview) | Microsoft Learn&lt;/A&gt;. 
In our Diseases business domain, one example can be the Patient ID.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;W&lt;/SPAN&gt;&lt;SPAN class=""&gt;e’ve&lt;/SPAN&gt;&lt;SPAN class=""&gt; associated five data assets in &lt;STRONG&gt;Finance Reports&lt;/STRONG&gt; D&lt;/SPAN&gt;&lt;SPAN class=""&gt;ata P&lt;/SPAN&gt;&lt;SPAN class=""&gt;roduct and one &lt;STRONG&gt;Term&lt;/STRONG&gt;, selected from the glossary terms defined in Finance Business Domain:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;See that the above data product is of type “Dashboards/Reports”.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;You can add as many data assets, terms, OKRs and contacts as needed, depending on your organization needs and size, while the data product is unpublished. 
Owners can make published data products unpublished to add more data assets, terms, OKRs and critical data elements.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;After creation, you can go to &lt;/SPAN&gt;&lt;STRONG&gt;Manage Policies&lt;/STRONG&gt;&lt;SPAN&gt; and then define several policies for accessing a Data Product.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Come back to “Sick people by diseases and regions” Data Product and press “Manage Policies”.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;Among other features, the Preview Access Form in the Policies allows you to define how other users can request access to this Data Product&lt;/SPAN&gt;&lt;SPAN class=""&gt;, for example:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;You can observe in next figure the list of Data Products we’ve created:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;&lt;UL class="lia-align-left"&gt;&lt;LI&gt;&lt;SPAN&gt;One data product in Finance Business Domain (“Finance Reports”)&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;Two data products in Diseases Business Domain (“Medicines” and “Sick people by diseases and regions")&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;“Team and Members” in the Company Projects Business Domain&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;“Workers” in Human Resources Business 
Domain&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;img /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;Summarizing concepts&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;&lt;SPAN&gt;Data Map&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt; is the solution where you plan your &lt;EM&gt;domains &lt;/EM&gt;and&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;collections&lt;/EM&gt;&lt;SPAN&gt; for accepting &lt;/SPAN&gt;&lt;EM&gt;data assets&lt;/EM&gt;&lt;SPAN&gt; directly from data sources. Scanning process can be scoped or not depending on the data source.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;EM&gt;&lt;STRONG&gt;Domains&lt;/STRONG&gt; &lt;/EM&gt;are managed in the Data Map to define containers for collections and assets.&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;EM&gt;&lt;STRONG&gt;Business domains&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN&gt; are defined and mapped with the desired &lt;/SPAN&gt;&lt;EM&gt;&lt;STRONG&gt;Collections&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN&gt; in &lt;STRONG&gt;Data Catalog&lt;/STRONG&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;Business Domains&lt;SPAN&gt; are composed by a set of business concepts, they are:&lt;/SPAN&gt;&lt;/P&gt;&lt;UL class="lia-align-left"&gt;&lt;LI&gt;&lt;SPAN&gt;Data Products,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;UL class="lia-align-left"&gt;&lt;LI&gt;&lt;SPAN&gt;Glossary terms,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;UL class="lia-align-left"&gt;&lt;LI&gt;&lt;SPAN&gt;OKRs and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;UL class="lia-align-left"&gt;&lt;LI&gt;&lt;SPAN&gt;Critical 
Data.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P class="lia-align-left"&gt;&lt;EM&gt;&lt;STRONG&gt;Data products&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN&gt; are composed of related &lt;/SPAN&gt;&lt;EM&gt;&lt;STRONG&gt;data assets&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN&gt; discovered by scanning sources in the data map (you may need to select data assets from more than one source) and curated in the Data Catalog, &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN&gt;to satisfy end user's needs.&lt;/SPAN&gt;&lt;/I&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;EM&gt;&lt;STRONG&gt;Glossary Terms&lt;/STRONG&gt;&lt;/EM&gt; and &lt;EM&gt;&lt;STRONG&gt;OKRs&lt;/STRONG&gt;&lt;/EM&gt; are defined at business domain level but are linked to Data Products to provide context.&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;EM&gt;&lt;STRONG&gt;Critical Data&lt;/STRONG&gt;&lt;/EM&gt; are defined for managing data quality as key purpose.&amp;nbsp;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;Data Products defined in Diseases Business Domain include several reports, semantic models, data pipelines, and a Lakehouse. 
These assets reside in a Fabric tenant (the data source) and were selected from the scanned data assets among different Fabric’s workspaces.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-left"&gt;&lt;SPAN&gt;We have curated the data products with policies and roles to control their access by users within our organization, as well as with glossary terms to make these assets more readable and easier to find for &lt;/SPAN&gt;&lt;SPAN&gt;physicians&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;and other stakeholders.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;SPAN&gt;Learn more:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/concept-domains" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Understand domains in the Microsoft Purview Data Map | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/how-to-create-and-manage-domains-collections#custom-domains" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;How to manage domains and collections | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/concept-best-practices-collections" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Microsoft Purview collections architecture and best practices | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/data-catalog-permissions" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Permissions in new Microsoft Purview Data Catalog preview | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/new-governance-experience" 
target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Get ready for the next enhancement in Microsoft Purview governance solutions | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/concept-business-domain" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Business domains in Microsoft Purview (Preview) | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/concept-data-products" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Data products in Microsoft Purview (Preview) | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/purview/how-to-manage-data-catalog-access-policies" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;How to configure and manage data catalog access policies (Preview) | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 21 Aug 2024 14:36:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/explaining-purview-concepts-domains-business-domains-collections/ba-p/4217157</guid>
      <dc:creator>Eduardo_Noriega</dc:creator>
      <dc:date>2024-08-21T14:36:20Z</dc:date>
    </item>
    <item>
      <title>Adopting Public IPv6 for Three-Tier Web Applications</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/adopting-public-ipv6-for-three-tier-web-applications/ba-p/4211584</link>
      <description>&lt;P&gt;With public IPv4 addresses nearing full allocation, the costs and effort of maintaining IPv4 public IPs for your workloads will only increase. Using IPv6 public addresses can resolve this; there are more of them, and they are more affordable to acquire. Doing so also improves compatibility any IPv6-primary clients, such as IoT devices.&lt;/P&gt;
&lt;P&gt;With Application Gateways now supporting dual-stack configuration, you can use an IPv6 address as your front-end for web applications in Azure. Making this change only impacts the front-end; you do not need to assign any internal IPv6 address space to use this, and you can continue to use an IPv4 front-end where needed.&lt;/P&gt;
&lt;P&gt;This document streamlines the process of exposing your current web applications to the internet via IPv6 while continuing to run IPv4 on your Azure Virtual Machines. This scenario is ideal for those who need IPv6 exposure but do not require full adoption of IPv6 within Azure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P aria-level="2"&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Existing Solution&amp;nbsp;&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The entire environment operates on IPv4 and consists of a single Virtual Network with four subnets:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;AppGwSubnet&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Contains an Application Gateway that acts as the frontend, load balancing traffic to the web servers in the WebSubnet.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;WebSubnet&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Contains two IIS web servers that direct traffic to the AppServer Internal Load Balancer VIP in the AppSubnet, which distributes the load among the AppServers.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;AppSubnet&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Contains two AppServer that direct traffic to the Database Internal Load Balancer VIP in the DataSubnet, which distributes the load among the database servers.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;DataSubnet&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Contains two database servers using Master/Slave replication that respond to queries from the AppServers.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Step-by-Step Adoption Process&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;&lt;SPAN class="TrackedChange SCXW126438804 BCX8"&gt;&lt;SPAN class="TextRun SCXW126438804 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW126438804 BCX8"&gt;1. &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW126438804 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW126438804 BCX8"&gt;Develop an IPv6 address plan and update your virtual network with an IPv6 address space.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp; &amp;nbsp; &lt;STRONG&gt;1a.&lt;/STRONG&gt; Refer to the &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN data-contrast="auto"&gt;Conceptual planning for IPv6 networking&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN data-contrast="auto"&gt; for guidance on planning your IPv6 networking strategy. For IPv6, it is best practice to deploy a /56 prefix for your Virtual Network and /64 prefixes for your subnets.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/architecture/networking/guide/ipv6-ip-planning" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Conceptual planning for IPv6 networking - Azure Architecture Center | Microsoft Learn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp; &amp;nbsp; &lt;STRONG&gt;1b.&lt;/STRONG&gt;&amp;nbsp; Add an IPv6 address space to your virtual network and to the subnet associated with your Application Gateway to support a dual-stack (IPv4 and IPv6) configuration.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/create-vm-dual-stack-ipv6-portal?tabs=azureportal#create-a-resource-group-and-virtual-network" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Add IPv6 to Virtual Network and Subnet&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;STRONG&gt;&amp;nbsp; &lt;EM&gt;&amp;nbsp; Note:&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt; If your subnet currently hosts an Application Gateway SKU V1, you will need to create a new subnet to deploy a Dual-Stack Application Gateway. However, if you are using Application Gateway SKU V2, you can deploy the Dual-Stack Application Gateway within the same subnet.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
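Step 1b can also be sketched with the Azure CLI. The resource group, VNet and subnet names, and all address prefixes below are illustrative placeholders, not values from this environment.

```shell
# Sketch: add IPv6 prefixes to an existing VNet and the Application Gateway
# subnet for dual-stack operation. All names and prefixes are placeholders.
RG="my-rg"
VNET="my-vnet"
SUBNET="AppGwSubnet"
V4_VNET="10.0.0.0/16";  V6_VNET="fd00:db8:deca::/56"   # /56 for the VNet
V4_SUB="10.0.0.0/24";   V6_SUB="fd00:db8:deca::/64"    # /64 for the subnet

# Guarded so the sketch is a no-op where the Azure CLI is not installed.
if command -v az >/dev/null 2>&1; then
  az network vnet update -g "$RG" -n "$VNET" \
    --address-prefixes "$V4_VNET" "$V6_VNET"
  az network vnet subnet update -g "$RG" --vnet-name "$VNET" -n "$SUBNET" \
    --address-prefixes "$V4_SUB" "$V6_SUB"
fi
```

Note that `--address-prefixes` replaces the existing prefix list, so the current IPv4 prefixes must be passed alongside the new IPv6 ones.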
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class="TextRun SCXW90904020 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW90904020 BCX8"&gt;2&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW90904020 BCX8"&gt;. &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW90904020 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW90904020 BCX8"&gt;Deploy a New&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW90904020 BCX8"&gt; Dual-Stack&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW90904020 BCX8"&gt; Application Gateway&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW90904020 BCX8"&gt; and Configure new IPv4 and IPv6 Frontend IPs&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW90904020 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp; &amp;nbsp; &lt;STRONG&gt;2a.&lt;/STRONG&gt; Set up a new Application Gateway with dual-stack support to handle both IPv4 and IPv6 traffic. During its creation, assign new public frontend IP addresses for both IPv4 and IPv6.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/application-gateway/ipv6-application-gateway-portal#create-an-application-gateway" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Create a Dual-Stack Application Gateway&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/application-gateway/ipv6-application-gateway-portal#frontends-tab" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Create IPv4 and IPv6 Frontend IP's&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
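The two public frontend addresses from step 2a can be created up front with the Azure CLI. The resource names and resource group below are hypothetical placeholders.

```shell
# Sketch: create Standard-SKU public IPs for the dual-stack frontends.
# Resource names are placeholders; Application Gateway v2 requires Standard SKU.
RG="my-rg"
# Guarded so the sketch is a no-op where the Azure CLI is not installed.
if command -v az >/dev/null 2>&1; then
  az network public-ip create -g "$RG" -n appgw-pip-v4 --sku Standard --version IPv4
  az network public-ip create -g "$RG" -n appgw-pip-v6 --sku Standard --version IPv6
fi
```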
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp; &amp;nbsp; &lt;STRONG&gt;2b.&lt;/STRONG&gt; Ensure the new Dual-Stack Application Gateway is configured with the same settings as the original. This includes Listeners with TLS Certificates (for HTTPS/TLS offload), Routing Rules with Backend HTTP Settings (including certificates for End-to-End TLS), Backend Pools, and Health Probes.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp; &amp;nbsp; &lt;STRONG&gt;2c.&lt;/STRONG&gt; Both your IPv4 and IPv6 frontend IPs will use the same Web Application backend pool. Ensure the backend pool is healthy before proceeding to the next step.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class="TextRun SCXW53440177 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW53440177 BCX8"&gt;3&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW53440177 BCX8"&gt;. &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW53440177 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW53440177 BCX8"&gt;Update Public DNS Records&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW53440177 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp; &amp;nbsp; &lt;STRONG&gt;3a&lt;/STRONG&gt;. Update the DNS ‘A’ record to point to the new dual-stack public IPv4 frontend IP address. Similarly, update the DNS ‘AAAA’ record to point to the new dual-stack public IPv6 frontend IP address.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/dns/dns-operations-recordsets-portal#add-a-new-record-to-a-record-set" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Create an Azure Public DNS Record&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;I&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; If you are using Azure Public DNS, follow the link above. If you are using a different public DNS service, ensure the records are updated accordingly.&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
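For zones hosted in Azure Public DNS, step 3a can be sketched with the Azure CLI. The zone, record name, and both frontend addresses below are illustrative placeholders.

```shell
# Sketch: point the A and AAAA records at the new dual-stack frontends.
# Zone, record name, and addresses are placeholders, not real values.
RG="my-rg"; ZONE="contoso.com"
V4_FRONTEND="203.0.113.10"   # new dual-stack IPv4 frontend (example)
V6_FRONTEND="2001:db8::10"   # new dual-stack IPv6 frontend (example)
# Guarded so the sketch is a no-op where the Azure CLI is not installed.
if command -v az >/dev/null 2>&1; then
  az network dns record-set a add-record -g "$RG" -z "$ZONE" -n www \
    --ipv4-address "$V4_FRONTEND"
  az network dns record-set aaaa add-record -g "$RG" -z "$ZONE" -n www \
    --ipv6-address "$V6_FRONTEND"
fi
```

Lowering the records' TTL ahead of the cutover shortens the window in which clients still resolve the old address.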
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class="TextRun BCX8 SCXW21614736" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun BCX8 SCXW21614736"&gt;4&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun BCX8 SCXW21614736"&gt;. &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun BCX8 SCXW21614736" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun BCX8 SCXW21614736"&gt;Decommission the Original Application Gateway&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="TextRun BCX8 SCXW21614736" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun BCX8 SCXW21614736"&gt;&lt;SPAN class="NormalTextRun SCXW120129011 BCX8"&gt;&amp;nbsp; &amp;nbsp; &lt;STRONG&gt;4a.&lt;/STRONG&gt; Once &lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW120129011 BCX8"&gt;you’ve&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW120129011 BCX8"&gt; updated the DNS records and confirmed that your new IPv4 and IPv6 frontend IP addresses are operational on your dual-stack Application Gateway, you can safely &lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW120129011 BCX8"&gt;delete&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW120129011 BCX8"&gt; the original IPv4-only Application Gateway.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class="TextRun BCX8 SCXW21614736" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun BCX8 SCXW21614736"&gt;&lt;SPAN class="NormalTextRun SCXW120129011 BCX8"&gt;Learn More:&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="TextRun BCX8 SCXW21614736" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun BCX8 SCXW21614736"&gt;&lt;SPAN class="NormalTextRun SCXW120129011 BCX8"&gt;Stay Updated on Azure Products Supporting IPv6&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="TextRun BCX8 SCXW21614736" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun BCX8 SCXW21614736"&gt;&lt;SPAN class="NormalTextRun SCXW120129011 BCX8"&gt;&lt;A href="https://learn.microsoft.com/azure/architecture/networking/guide/ipv6-ip-planning#configure-azure-services-to-use-ipv6" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/azure/architecture/networking/guide/ipv6-ip-planning#configure-azure-services-to-use-ipv6&lt;/A&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="TextRun BCX8 SCXW21614736" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun BCX8 SCXW21614736"&gt;&lt;SPAN class="NormalTextRun SCXW120129011 BCX8"&gt;What is IPv6 for Azure Virtual Network?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="TextRun BCX8 SCXW21614736" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun BCX8 SCXW21614736"&gt;&lt;SPAN class="NormalTextRun SCXW120129011 BCX8"&gt;&lt;A href="https://learn.microsoft.com/azure/virtual-network/ip-services/ipv6-overview" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/azure/virtual-network/ip-services/ipv6-overview&lt;/A&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
      <pubDate>Wed, 14 Aug 2024 16:39:51 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/adopting-public-ipv6-for-three-tier-web-applications/ba-p/4211584</guid>
      <dc:creator>jasonmedina</dc:creator>
      <dc:date>2024-08-14T16:39:51Z</dc:date>
    </item>
    <item>
      <title>Creating a Local Network Virtual Appliance in Azure for Oracle Database@Azure</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/creating-a-local-network-virtual-appliance-in-azure-for-oracle/ba-p/4218101</link>
      <description>&lt;P&gt;Oracle Database@Azure is an Oracle database service running on Oracle Cloud Infrastructure (OCI), collocated in Microsoft data centers. This ensures that the Oracle Database@Azure service has the fastest possible access to Azure resources and applications. The solution is intended to support the migration of Oracle database workloads to Azure, where customers can integrate and innovate with the breadth of Microsoft Cloud services.&amp;nbsp;For more information and to gain a better understanding of Oracle Database@Azure please visit &lt;A href="https://learn.microsoft.com/en-us/azure/oracle/oracle-db/database-overview" target="_blank" rel="noopener"&gt;Overview - Oracle Database@Azure | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The current Oracle Database@Azure service has a network limitation: it cannot respond to network connections from outside its Azure virtual network (VNet) when traffic is expected to route through a firewall. This limitation constrains integration with Azure services not located in the same VNet. It also impacts network communication from on-premises networks that need to connect to the Oracle Database@Azure service.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;
&lt;P&gt;To address this network limitation, the recommended solution is to deploy a Network Virtual Appliance (NVA) within the Oracle Database@Azure VNet. While Microsoft and Oracle are working together to develop an update to the Azure platform that will eliminate this limitation, customers will need to follow this design pattern until the official rollout of the update.&lt;/P&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Deploying an NVA&lt;/H2&gt;
&lt;P&gt;The NVA consists of a Linux virtual machine (VM); any distribution supported on Azure can be used. The NVA referenced in this article is not a traditional firewall, but a VM acting as a router with IP forwarding enabled; it is not intended to be an enterprise-scale firewall NVA. This solution is only expected to help customers bridge the gap until the jointly engineered design pattern is available in all Azure regions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The deployment of the NVA helps solve the specific scenarios outlined below:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Where traffic inspection is required between Oracle Database@Azure and other resources&lt;/LI&gt;
&lt;LI&gt;Where native network support is not available&lt;/LI&gt;
&lt;LI&gt;With resources that have private endpoints&lt;/LI&gt;
&lt;LI&gt;With resources in another Azure virtual network (VNet)&lt;/LI&gt;
&lt;LI&gt;With services that use delegated subnets&lt;/LI&gt;
&lt;LI&gt;For connectivity with on-premises networks&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Additional details on supported network topologies can be found within the following article &lt;A href="https://learn.microsoft.com/en-us/azure/oracle/oracle-db/oracle-database-network-plan" target="_blank" rel="noopener"&gt;Network planning for Oracle Database@Azure | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Scope&lt;/H2&gt;
&lt;P&gt;This article reviews a network scenario within an Azure Landing Zone that requires an NVA. It includes the deployment steps for the NVA and the other ancillary steps required to complete the end-to-end implementation. It does not cover hybrid connectivity from on-premises to Azure; that scenario will be covered in a later article, although both share the same method of using User Defined Routes (UDRs).&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H2&gt;Scenario Review&lt;/H2&gt;
&lt;P&gt;The Azure Landing Zone consists of a hub-and-spoke architecture in which the application layer is hosted in a VNet dedicated to the application front-end services, such as web servers. Oracle Database@Azure is deployed in a separate VNet dedicated to data. The goal is to provide bidirectional network connectivity between the application layer and the data layer.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Deployment&lt;/H2&gt;
&lt;P&gt;The steps provided in this article should be followed in the designated order to ensure the expected results. Please consult with either your Microsoft or Oracle representative if you have specific questions related to your environment.&lt;/P&gt;
&lt;H3&gt;Environment Overview&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Hub VNet (10.0.0.0/16)&lt;/STRONG&gt;&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Hub NVA: 10.0.0.4&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Spoke 1 VNet - Application Tier (10.1.0.0/16)&lt;/STRONG&gt;&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Application Server: 10.1.0.4&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Spoke 2 VNet - Oracle Database (10.2.0.0/16)&lt;/STRONG&gt;&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Oracle DB Subnet: 10.2.0.0/24&lt;/LI&gt;
&lt;LI&gt;Oracle Database: 10.2.0.4&lt;/LI&gt;
&lt;LI&gt;Local NVA Subnet: 10.2.1.0/24&lt;/LI&gt;
&lt;LI&gt;Local NVA: 10.2.1.4&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
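The UDR plumbing for this topology can be sketched with the Azure CLI using the addresses above. The route-table, route, and VNet/subnet names, the location, and the resource group are hypothetical placeholders.

```shell
# Sketch: route application-tier traffic to and from the Oracle DB subnet
# through the local NVA. Only the IPs come from the environment overview.
RG="my-rg"; LOC="eastus"
NVA_IP="10.2.1.4"          # Local NVA in the Oracle Database@Azure VNet
APP_PREFIX="10.1.0.0/16"   # Spoke 1 VNet - application tier
# Guarded so the sketch is a no-op where the Azure CLI is not installed.
if command -v az >/dev/null 2>&1; then
  az network route-table create -g "$RG" -n oradb-udr -l "$LOC"
  az network route-table route create -g "$RG" --route-table-name oradb-udr \
    -n to-app-tier --address-prefix "$APP_PREFIX" \
    --next-hop-type VirtualAppliance --next-hop-ip-address "$NVA_IP"
  # Associate the table with the Oracle DB subnet (placeholder names):
  az network vnet subnet update -g "$RG" --vnet-name spoke2-vnet \
    -n OracleDbSubnet --route-table oradb-udr
fi
```

A matching route on the application-tier side, pointing at the same NVA, is needed for the return path.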
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; At the time this article was published, Azure Firewall is not supported in this scenario. Native support for third-party NVAs is scheduled for 2024, but this is subject to change. Until those features are fully implemented on Azure, third-party NVAs require the workaround described here to support network communication in the scenarios above.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Create a Linux VM in Azure as an NVA&lt;/STRONG&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;STRONG&gt;&lt;BR /&gt;Set up a Linux VM&lt;/STRONG&gt; (using any distribution supported on Azure) in your desired resource group and in the same region as the Oracle Database@Azure deployment, using your deployment method of choice (for example, the Azure portal, Azure PowerShell, or Azure CLI). As a security recommendation, be sure to use Secure Shell (SSH) public/private keys to ensure secure communication.&lt;BR /&gt;&lt;BR /&gt;Ensure the VM is in the same VNet, &lt;STRONG&gt;but on a separate subnet&lt;/STRONG&gt;, from the Oracle Database@Azure delegated subnet, as well as from the dedicated Oracle backup subnet if one has been deployed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note: &lt;/STRONG&gt;Sizing is driven by the actual traffic pattern. Consider the traffic volume, in packets per second, that the implementation must support. A 2-core general-purpose VM with 8 GiB (gibibytes) of memory and accelerated networking (for example, D2s_v5 with 2 vCPUs) is a reasonable starting point for gauging initial performance. High storage/IOPS performance SKUs are not necessary for this use case.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As part of the deployment and monitoring strategy please consult &lt;A href="https://azure.github.io/azure-monitor-baseline-alerts/welcome/" target="_blank" rel="noopener"&gt;Welcome | Azure Monitor Baseline Alerts&lt;/A&gt; for the proper Azure Monitor counters that should be enabled against the NVA to ensure performance and availability.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Enable IP Forwarding on the VM's NIC (Network Interface Card)&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Go to the &lt;STRONG&gt;Networking&lt;/STRONG&gt; section of the NVA VM in the Azure portal&lt;/LI&gt;
&lt;LI&gt;Select the &lt;STRONG&gt;Network Interface&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Under &lt;STRONG&gt;Settings&lt;/STRONG&gt;, choose &lt;STRONG&gt;IP configurations&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Enable &lt;STRONG&gt;IP forwarding&lt;BR /&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
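The portal steps above can be collapsed into a single Azure CLI call. The resource group and NIC name below are hypothetical placeholders.

```shell
# Sketch: enable Azure-level IP forwarding on the NVA's network interface.
# Resource group and NIC name are placeholders.
RG="my-rg"; NIC="nva-nic"
# Guarded so the sketch is a no-op where the Azure CLI is not installed.
if command -v az >/dev/null 2>&1; then
  az network nic update -g "$RG" -n "$NIC" --ip-forwarding true
fi
```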
&lt;DIV id="tinyMceEditoradelagarde_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV id="tinyMceEditoradelagarde_1" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;STRONG&gt;Enable IP Forwarding at the Operating System level&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;SSH into the VM.&lt;/LI&gt;
&lt;LI&gt;Edit the sysctl configuration file to enable IP forwarding: &lt;STRONG&gt;sudo nano /etc/sysctl.conf&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Uncomment the following line:&lt;STRONG&gt; net.ipv4.ip_forward = 1&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Save and exit out of nano to apply the changes&lt;/LI&gt;
&lt;LI&gt;Run the following command to apply the change without rebooting the VM: &lt;STRONG&gt;sudo sysctl -p&lt;/STRONG&gt; and hit enter. The line &lt;STRONG&gt;&lt;EM&gt;net.ipv4.ip_forward = 1&lt;/EM&gt;&lt;/STRONG&gt; will appear on the screen, indicating the change was made successfully.&lt;/LI&gt;
&lt;/UL&gt;
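The nano steps above can also be done non-interactively, as sketched below. The `APPLY` switch is an assumption of this sketch; by default the script is a dry run so nothing is changed without root.

```shell
# Sketch: uncomment net.ipv4.ip_forward = 1 in /etc/sysctl.conf and reload.
# Set APPLY=1 to modify the system (requires root); the default is a dry run.
APPLY="${APPLY:-0}"
CONF="/etc/sysctl.conf"
if [ "$APPLY" = "1" ]; then
  sudo sed -i 's/^#\s*net\.ipv4\.ip_forward\s*=\s*1/net.ipv4.ip_forward = 1/' "$CONF"
  sudo sysctl -p   # reloads the file; echoes net.ipv4.ip_forward = 1 on success
else
  echo "dry run: would uncomment net.ipv4.ip_forward = 1 in $CONF"
fi
```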
&lt;P&gt;We now need to implement iptables rules so that traffic routes properly through the NVA. By default, Linux systems lose their iptables rules after a reboot; to avoid that, we will install some packages and make some configuration changes. The first example uses an Ubuntu or Debian Linux distribution. Only IPv4 is used in the changes to the Linux system described in this article.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Ubuntu / Debian Linux system&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Ensure that the local firewall on the NVA is enabled and not blocking traffic. Start by enabling iptables: run &lt;STRONG&gt;sudo systemctl enable iptables&lt;/STRONG&gt; and hit &lt;STRONG&gt;enter&lt;/STRONG&gt;. Then run &lt;STRONG&gt;sudo systemctl start iptables&lt;/STRONG&gt; and hit enter.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;List the current iptables rules by running &lt;STRONG&gt;sudo iptables -L&lt;/STRONG&gt; and hitting &lt;STRONG&gt;enter&lt;/STRONG&gt;. This will list any existing firewall rules.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;: If there are existing rules, flush them with the command &lt;STRONG&gt;sudo iptables -F&lt;/STRONG&gt; and hit enter.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We need to install a package called iptables-persistent by typing the following command:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;On an Ubuntu system type&amp;nbsp;&lt;STRONG&gt;sudo apt install iptables-persistent &lt;/STRONG&gt;and hit enter.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;On a Debian system type&amp;nbsp;&lt;STRONG&gt;sudo apt-get install iptables-persistent &lt;/STRONG&gt;and hit enter.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Make sure services are enabled on Debian or Ubuntu using the systemctl command:&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo systemctl is-enabled netfilter-persistent.service &lt;/STRONG&gt;and hit enter.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If not enabled type the following command:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo systemctl enable netfilter-persistent.service &lt;/STRONG&gt;and hit enter.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Get the status of the service by running the following command:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;sudo systemctl status netfilter-persistent.service&lt;/STRONG&gt; and hit enter.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Enter the following commands line by line and hit enter for each:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo iptables -t nat -A POSTROUTING -j MASQUERADE&amp;nbsp;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo iptables -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo iptables -A FORWARD -j ACCEPT&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Validate the iptables rules are in place by typing &lt;STRONG&gt;sudo iptables -L&lt;/STRONG&gt; and hitting enter.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The iptables rules applied will be saved and loaded if the system reboots.&amp;nbsp;&lt;/P&gt;
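The Ubuntu/Debian steps above can be collected into one script, sketched below. The `APPLY` switch and the explicit `netfilter-persistent save` step are assumptions of this sketch (iptables-persistent can also save rules during package installation); by default the script only prints each command.

```shell
# Sketch: the Ubuntu/Debian sequence above in one place.
# Set APPLY=1 to run for real (requires root); the default prints each
# command instead of executing it.
APPLY="${APPLY:-0}"
run() { if [ "$APPLY" = "1" ]; then sudo "$@"; else echo "would run: $*"; fi; }

run apt install -y iptables-persistent
run systemctl enable netfilter-persistent.service
run iptables -t nat -A POSTROUTING -j MASQUERADE
run iptables -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
run iptables -A FORWARD -j ACCEPT
run netfilter-persistent save   # persist the running rules across reboots
run iptables -L                 # verify the rules are in place
```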
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The second example applies to RedHat Enterprise Linux (RHEL), Fedora, and AlmaLinux. The commands are similar across these distributions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;RHEL/Fedora/AlmaLinux&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Type the following commands line by line and hit enter for each to disable firewalld:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo systemctl stop firewalld.service&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;sudo systemctl disable firewalld.service&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;sudo systemctl mask firewalld.service&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Next, we must install the iptables-services package using either the native &lt;STRONG&gt;yum&lt;/STRONG&gt; or &lt;STRONG&gt;dnf&lt;/STRONG&gt; package manager.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following example uses yum. Run each command one at a time:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo yum install iptables-services&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;sudo systemctl enable iptables&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;sudo systemctl enable ip6tables&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;sudo systemctl status iptables&amp;nbsp;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you use &lt;STRONG&gt;dnf&lt;/STRONG&gt;, run each command one at a time:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo dnf install iptables-services&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;sudo systemctl enable iptables&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;sudo systemctl enable ip6tables&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;sudo systemctl status iptables&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Once the package is installed, IPv4 rules are persisted in the /etc/sysconfig/iptables file. Run the following commands one at a time, then write them to the file with &lt;STRONG&gt;sudo service iptables save&lt;/STRONG&gt; (or edit the file directly with your favorite editor: vi, vim, or nano).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo iptables -t nat -A POSTROUTING -j MASQUERADE&amp;nbsp;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo iptables -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo iptables -A FORWARD -j ACCEPT&lt;/STRONG&gt;&lt;/P&gt;
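&lt;P&gt;Note that /etc/sysconfig/iptables uses iptables-save syntax rather than shell command syntax. After saving, a minimal file equivalent to the three rules above looks roughly like this (eth0 is the assumed interface name):&lt;/P&gt;

```text
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -j MASQUERADE
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -j ACCEPT
COMMIT
```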
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Next, we need to load the changes that were just made by typing the following command:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;sudo systemctl restart iptables&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Ensure that the Network Security Group (NSG) on the NVA is &lt;STRONG&gt;allowing all traffic from the application VNet and the Oracle Database@Azure delegated subnet&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Configure Route Tables&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Oracle Database@Azure Vnet (Spoke)&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Create a route table&lt;/STRONG&gt; in the Azure portal in the same region and proper resource group (RG) where the Oracle Database@Azure is located. Give it a meaningful name.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Add routes&lt;/STRONG&gt; to the route table:&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Oracle Database Subnet&lt;/STRONG&gt;: Associate the route table with this subnet.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;From Oracle Database Subnet&lt;/STRONG&gt;: Set the next hop for &lt;STRONG&gt;0.0.0.0/0 to the local NVA VM&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Important:&lt;/STRONG&gt; Ensure in the configuration of the route table that all route propagation is &lt;STRONG&gt;disabled&lt;/STRONG&gt;.&amp;nbsp;This setup ensures that all traffic to and from the Oracle Database is forced through your local NVA.&lt;/P&gt;
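&lt;P&gt;The portal steps above can also be sketched with the Azure CLI. The resource group, VNet, subnet, and route-table names below (and the NVA IP) are placeholder assumptions to adapt to your environment:&lt;/P&gt;

```shell
#!/usr/bin/env bash
# Sketch of the route-table steps using Azure CLI.
# All names (rg-oracle, vnet-oracle, rt-oracle-db, etc.) are assumed placeholders.
create_oracle_route_table() {
  # Route table with BGP route propagation disabled, as required above
  az network route-table create \
    --name rt-oracle-db --resource-group rg-oracle --location eastus \
    --disable-bgp-route-propagation true

  # Default route: send all traffic (0.0.0.0/0) to the local NVA
  az network route-table route create \
    --name default-to-nva --resource-group rg-oracle \
    --route-table-name rt-oracle-db \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.2.1.4

  # Associate the route table with the Oracle Database delegated subnet
  az network vnet subnet update \
    --name oracle-db-subnet --vnet-name vnet-oracle \
    --resource-group rg-oracle --route-table rt-oracle-db
}

# Run only when an authenticated Azure CLI session is available
command -v az >/dev/null 2>&1 && create_oracle_route_table || echo "az CLI not available; commands shown for reference"
```

&lt;P&gt;The same pattern (create table, add route, associate subnet) applies to each of the route tables in the following sections, changing only the prefix and next-hop IP.&lt;/P&gt;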
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Configure Route Tables for the NVA in the Oracle Database@Azure VNet&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Create another route table&lt;/STRONG&gt; in the Azure portal in the same region and proper resource group (RG) where the Oracle Database@Azure is located. Give it a meaningful name.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Add routes&lt;/STRONG&gt; to the route table:&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;NVA Subnet&lt;/STRONG&gt;: Associate the route table with this subnet.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;From NVA Subnet&lt;/STRONG&gt;: Set the next hop for &lt;STRONG&gt;0.0.0.0/0&lt;/STRONG&gt; to the &lt;STRONG&gt;hub NVA&lt;/STRONG&gt; (10.0.0.4).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Important:&lt;/STRONG&gt; Ensure in the configuration of the route table that all route propagation is &lt;STRONG&gt;disabled&lt;/STRONG&gt;.&amp;nbsp;This setup ensures that all traffic to and from the Oracle Database is forced through your local NVA.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Route Configuration Application Tier&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Route to Hub NVA&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Create another route table&lt;/STRONG&gt; in the Azure portal in the same region and proper resource group (RG) where the Oracle Database@Azure is located. Give it a meaningful name.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Application Subnet:&lt;/STRONG&gt; Attach the route table to the Application Subnet in the application Vnet.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Route from Application VNet&lt;/STRONG&gt;: Destination: 10.2.0.0/24 (Oracle Database Subnet); next hop: 10.0.0.4 (hub NVA).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Important:&lt;/STRONG&gt; Ensure in the configuration of the route table that all route propagation is &lt;STRONG&gt;disabled&lt;/STRONG&gt;. This setup ensures that all traffic to and from the Oracle Database is forced through your local NVA.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Route Configuration Hub VNet&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Route to Local NVA:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Create another route table&lt;/STRONG&gt; in the Azure portal in the same region and proper resource group (RG) where the Oracle Database@Azure is located. Give it a meaningful name.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Firewall Subnet: &lt;/STRONG&gt;Attach the route table to the Firewall Subnet in the Hub Vnet.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;From Firewall Subnet&lt;/STRONG&gt;: Set the next hop for 10.2.0.0/24 (Oracle Subnet) to 10.2.1.4 (local NVA).&lt;/LI&gt;
&lt;LI&gt;If you use a Cisco, Palo Alto, or other third-party NVA, ensure there are no internal static routes that conflict with the custom route table from Azure.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Important:&lt;/STRONG&gt; Ensure in the configuration of the route table that all route propagation is &lt;STRONG&gt;disabled&lt;/STRONG&gt;. This setup ensures that all traffic to and from the Oracle Database is forced through your local NVA.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When finished, the network flow and environment should match the following diagram:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Testing&lt;/H3&gt;
&lt;P&gt;The next step is to start testing by initiating a connection from the application servers. Before validating connectivity, make sure the proper client components have been installed on the application servers. Then validate that the application servers can connect to Oracle Database@Azure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you need to troubleshoot, deploy a test Linux VM in the application subnet and install the mtr package on it. Do not rely on ping (ICMP) for troubleshooting, as ICMP will not properly test connectivity within Azure. An example mtr command is: &lt;STRONG&gt;sudo mtr -T -n -P 1521 10.2.0.4&lt;/STRONG&gt;. This starts a TCP trace to the database on port 1521 (the port the database listener uses for connections) instead of ICMP. If a problem is identified, review that the route tables and IP addresses were entered correctly. If the initial tests succeed, you have implemented this solution correctly.&lt;/P&gt;
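&lt;P&gt;If mtr is not available on the test VM, a quick TCP reachability check can be sketched with bash's built-in /dev/tcp. The host and port below are the example database listener from this article (assumed values):&lt;/P&gt;

```shell
#!/usr/bin/env bash
# Minimal TCP reachability check using bash's /dev/tcp pseudo-device,
# handy when mtr or telnet is not installed on the test VM.
check_tcp() {
  local host="$1" port="$2"
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# Example: test the database listener from this article's topology
check_tcp 10.2.0.4 1521
```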
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Next Steps&lt;/H3&gt;
&lt;P&gt;Please visit the Microsoft Cloud Adoption Framework (CAF): &lt;A href="https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/oracle-iaas/" target="_blank" rel="noopener"&gt;Introduction to Oracle on Azure adoption scenarios - Cloud Adoption Framework | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Authors&lt;/STRONG&gt;&lt;BR /&gt;Moises Gomez Cortez&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Technical Editor and Content Contributor&lt;/STRONG&gt;&lt;BR /&gt;Anthony de Lagarde, Erik Munson&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
</description>
      <pubDate>Thu, 26 Sep 2024 13:03:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/creating-a-local-network-virtual-appliance-in-azure-for-oracle/ba-p/4218101</guid>
      <dc:creator>adelagarde</dc:creator>
      <dc:date>2024-09-26T13:03:44Z</dc:date>
    </item>
    <item>
      <title>Securing Microsoft Fabric: User Authentication &amp; Authorization Guidelines</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/securing-microsoft-fabric-user-authentication-amp-authorization/ba-p/4210273</link>
      <description>&lt;P&gt;Did you wonder what are the options to &lt;FONT size="3"&gt;define u&lt;/FONT&gt;sers and permissions to access and operate in Microsoft Fabric?&lt;/P&gt;
&lt;P&gt;Considering Conditional Access for Fabric users?&lt;/P&gt;
&lt;P&gt;Looking to understand the best practices to define user roles in workspace level?&lt;/P&gt;
&lt;P&gt;In this blog, we will talk about authentication and authorization options in Fabric, including a use-case example.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/get-started/microsoft-fabric-overview" target="_blank" rel="noopener"&gt;Microsoft Fabric&lt;/A&gt;&amp;nbsp;is a software as a service (SaaS) platform that lets users get, create, share, and visualize data.&lt;/P&gt;
&lt;P&gt;Security is a top priority for any organization that wants to succeed in the digital age. You need to safeguard your assets from threats and follow your organization's security policies.&lt;/P&gt;
&lt;P&gt;One of the key security design principles is to implement strong access controls that authenticate and authorize access to the system.&lt;/P&gt;
&lt;P&gt;This blog describes recommendations for authenticating and authorizing users that are attempting to access Microsoft Fabric.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;First, let's understand Microsoft Fabric's main infrastructure components with this diagram:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this diagram, we can see the different components in Fabric and how they relate:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Tenant -&amp;nbsp;A tenant is a single instance of Fabric for an organization and is aligned with a Microsoft Entra ID tenant.&lt;/P&gt;
&lt;P&gt;OneLake -&amp;nbsp;A single, unified, logical data lake for your whole organization. A data lake stores and processes large volumes of data from various sources. OneLake is the foundation of Microsoft Fabric for your tenant; there is one OneLake per tenant.&lt;/P&gt;
&lt;P&gt;Capacity - A dedicated set of resources that is available for use at a given time. Capacity defines the ability of a resource to perform an activity or to produce output. Different items consume different amounts of capacity at a given time. Fabric offers capacity through the &lt;A href="https://learn.microsoft.com/en-us/fabric/enterprise/fabric-features#features-list" target="_blank" rel="noopener"&gt;Fabric SKUs and trials&lt;/A&gt;. You can have multiple capacities for one OneLake.&lt;/P&gt;
&lt;P&gt;Workspace - A workspace is a collection of items that bring together different functionality in a single environment designed for collaboration. It acts as a container that uses capacity for the work that is executed, and provides controls for who can access the items in it. For example, in a workspace, users create reports, notebooks, semantic models, etc. You can have multiple workspaces at one capacity.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now, let's examine Entra ID authentication and authorization on top of the layers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Authentication&lt;/H3&gt;
&lt;P&gt;Microsoft Entra tenant provides identity and access management (IAM) capabilities to applications and resources used by your organization. Since Fabric is deployed to a &lt;A href="https://learn.microsoft.com/en-us/microsoft-365/education/deploy/intro-azure-active-directory#what-is-an-azure-ad-tenant" target="_blank" rel="noopener"&gt;Microsoft Entra tenant&lt;/A&gt;, authentication and authorization are handled by Microsoft Entra.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;Access token authentication:&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Fabric relies on Microsoft Entra ID to &lt;A href="https://learn.microsoft.com/en-us/fabric/security/security-fundamentals#authentication" target="_blank" rel="noopener"&gt;authenticate&lt;/A&gt; users or service principals. Once authenticated, users or service principals receive&amp;nbsp;access tokens&amp;nbsp;from Microsoft Entra ID, and Fabric uses these tokens to perform operations in the context of the user or application.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt; When a user logs into Fabric, they are authenticated by Microsoft Entra ID and receive an access token. This token is then used to access various resources within Fabric.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;I&gt;Note:&lt;/I&gt;&amp;nbsp;An identity is a directory object authenticated and authorized for access to a resource. There are identity objects for human (users) and nonhuman identities. Human identities are referred to as identities, and nonhuman identities are workload identities. Nonhuman entities include application objects, service principals, managed identities, and devices. Generally, a workload identity is used by a software entity to authenticate with a system. This blog describes authentication options for users (humans); service principals (nonhuman identities) are out of scope.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;Non-Access Token authentication:&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Non-access token activity in Fabric enables you to utilize external data sharing. Fabric &lt;A href="https://learn.microsoft.com/en-us/fabric/governance/external-data-sharing-overview" target="_blank" rel="noopener"&gt;external data sharing&lt;/A&gt; is a feature that allows Fabric users to share data from their tenant with users in another Fabric tenant.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt; If you enable external data sharing, you are explicitly trusting other tenants, allowing them to access the shared data without complying with your Entra Conditional Access Policy. To enforce CA Policy for all cases, it is recommended to &lt;A href="https://learn.microsoft.com/en-us/fabric/governance/external-data-sharing-enable#enable-external-data-sharing-in-the-consuming-tenant" target="_blank" rel="noopener"&gt;turn off external data sharing&lt;/A&gt; at the tenant level unless there is a specific need to use such external data.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Secured authentication via Conditional Access (CA)&lt;/H3&gt;
&lt;P&gt;A key feature of Microsoft Entra ID is&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/fabric/security/security-conditional-access" target="_blank" rel="noopener"&gt;conditional access&lt;/A&gt;. Conditional access ensures that customers can secure apps in their tenants, including:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Multifactor authentication&lt;/LI&gt;
&lt;LI&gt;Allowing only Intune enrolled devices to access specific services&lt;/LI&gt;
&lt;LI&gt;Restricting user locations and IP ranges&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;To &lt;A href="https://learn.microsoft.com/en-us/fabric/security/security-conditional-access#configure-conditional-access-for-fabric" target="_blank" rel="noopener"&gt;configure conditional access for users in Fabric&lt;/A&gt;, you need to select several Azure services: Power BI, Azure Data Explorer, Azure SQL Database, and Azure Storage.&lt;/P&gt;
&lt;P&gt;Configuring &lt;STRONG&gt;Power BI, Azure Data Explorer, Azure SQL Database, and Azure Storage&lt;/STRONG&gt; in the policy blocks any &lt;STRONG&gt;access token activity&lt;/STRONG&gt; in Fabric in case the CA policy check fails.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Authorization&lt;/H3&gt;
&lt;P&gt;All Fabric permissions are stored centrally by the metadata platform. Fabric services query the metadata platform on demand in order to retrieve authorization information and to authorize and validate user requests.&lt;/P&gt;
&lt;P&gt;In this blog, we will focus on workspace-level permissions; you can read more about item permissions &lt;A href="https://learn.microsoft.com/en-us/fabric/security/permission-model#item-permissions" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Authorization in Workspace level&lt;/H3&gt;
&lt;P&gt;Organizational teams can have individual workspaces where different personas collaborate and work on generating content. Access to the items in the workspace is regulated via workspace roles assigned to users by the workspace admin.&lt;/P&gt;
&lt;P&gt;You can either assign roles to individuals or to groups.&lt;/P&gt;
&lt;P&gt;Guidance: Data owners should recommend users who could be workspace administrators. These could be team leaders in your organization, for example. These workspace administrators should then govern access to the items in their workspace by assigning appropriate workspace roles to users and consumers of the items.&lt;/P&gt;
&lt;P&gt;There are four workspace roles, and they apply to all items within the workspace. Users that don't have any of these roles can't access the workspace. The roles are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Viewer&lt;/STRONG&gt;&amp;nbsp;- Can view all content in the workspace but can't modify it.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Contributor&lt;/STRONG&gt;&amp;nbsp;- Can view and modify all content in the workspace.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Member&lt;/STRONG&gt;&amp;nbsp;- Can view, modify, and share all content in the workspace.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Admin&lt;/STRONG&gt;&amp;nbsp;- Can view, modify, share, and manage all content in the workspace, including managing permissions.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;You can define workspace level access in Fabric via:&lt;/P&gt;
&lt;P&gt;UI as explained in&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/fabric/get-started/give-access-workspaces" target="_blank" rel="noopener"&gt;Give users access to workspaces via UI&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;API as explained in&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/rest/api/fabric/core/workspaces/add-workspace-role-assignment?tabs=HTTP" target="_blank" rel="noopener"&gt;Add Workspace Role Assignment via Fabric API&lt;/A&gt;&lt;/P&gt;
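&lt;P&gt;As a sketch of the API route, the request below adds an Admin role assignment for a group to a workspace. The GUIDs are placeholder assumptions, and a valid Microsoft Entra ID access token is required to actually send the request:&lt;/P&gt;

```shell
# Hypothetical workspace and Entra group GUIDs; replace with real values.
WORKSPACE_ID="00000000-0000-0000-0000-000000000001"
GROUP_ID="00000000-0000-0000-0000-000000000002"

# JSON body for POST /v1/workspaces/{workspaceId}/roleAssignments
PAYLOAD=$(printf '{"principal":{"id":"%s","type":"Group"},"role":"Admin"}' "$GROUP_ID")
echo "$PAYLOAD"

# With a valid Entra ID access token in $FABRIC_TOKEN, send the request:
# curl -s -X POST "https://api.fabric.microsoft.com/v1/workspaces/${WORKSPACE_ID}/roleAssignments" \
#   -H "Authorization: Bearer ${FABRIC_TOKEN}" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```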
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This diagram demonstrates the Authentication and Authorization options described above:&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN&gt;What can you do to manage access easily and efficiently?&lt;/SPAN&gt;&lt;/H3&gt;
&lt;DIV style="direction: ltr; border-width: 100%;"&gt;
&lt;DIV style="direction: ltr; margin-top: 0in; margin-left: 0in; width: 16.5104in;"&gt;
&lt;DIV style="direction: ltr; margin-top: 0in; margin-left: 0in; width: 16.5104in;"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/fundamentals/concept-learn-about-groups" target="_blank" rel="noopener"&gt;Microsoft Entra groups&lt;/A&gt; are used to manage users that all need the same access and permissions to resources, such as potentially restricted apps and services. Instead of adding special permissions to individual users, you create a group that applies the special permissions to every member of that group.&lt;/P&gt;
&lt;P&gt;When a user leaves the organization, you can simply remove the user from the group, which removes the user's access to the different workspaces.&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri;"&gt;&lt;FONT size="3"&gt;&lt;SPAN&gt;Let look on one example: &lt;/SPAN&gt;&lt;STRONG&gt;&lt;EM&gt;Organization with multiple users granting access to Fabric dev, test and prod workspaces via Entra groups.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;The first step&lt;/U&gt; will be to define groups for Fabric users in &lt;A href="https://learn.microsoft.com/en-us/microsoft-365/admin/admin-overview/admin-center-overview?view=o365-worldwide" target="_blank" rel="noopener"&gt;Microsoft 365 admin center&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Potential groups implementation:&lt;/P&gt;
&lt;P&gt;1. Fabric all - includes all Fabric users that will get access to Fabric&lt;/P&gt;
&lt;P&gt;2. Development + test users:&lt;/P&gt;
&lt;UL type="disc"&gt;
&lt;LI&gt;Fabric dev+test workspace viewers - assign users per need&lt;/LI&gt;
&lt;LI&gt;Fabric dev+test workspace members - assign users per need&lt;/LI&gt;
&lt;LI&gt;Fabric dev+test workspace contributors - assign users per need&lt;/LI&gt;
&lt;LI&gt;Fabric dev+test workspace admins - potentially assign to workspace owners + team leads&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;3. Production users:&lt;/P&gt;
&lt;UL type="disc"&gt;
&lt;LI&gt;Fabric prod workspace viewers - assign users per need&lt;/LI&gt;
&lt;LI&gt;Fabric prod workspace members - assign users per need&lt;/LI&gt;
&lt;LI&gt;Fabric prod workspace contributors - assign users per need&lt;/LI&gt;
&lt;LI&gt;Fabric prod workspace admins - potentially assign to workspace owners + team leads&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Here you can see sample groups from Microsoft 365 admin center:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;U&gt;The next steps will be:&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Define Conditional Access policy for Fabric users, via the group we created before - Fabric all, as explained in &lt;A href="https://learn.microsoft.com/en-us/fabric/security/security-conditional-access#configure-conditional-access-for-fabric" target="_blank" rel="noopener"&gt;Conditional access - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Define workspace level access in Fabric UI as explained in&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/fabric/get-started/give-access-workspaces" target="_blank" rel="noopener"&gt;Give users access to workspaces&lt;/A&gt; or via API as explained in&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/rest/api/fabric/core/workspaces/add-workspace-role-assignment?tabs=HTTP" target="_blank" rel="noopener"&gt;Workspaces - Add Workspace Role Assignment&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Here we can see sample UI of assigning Admin rights on the workspace via the group we created before:&lt;/P&gt;
&lt;P&gt;1. Go to Manage Access&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;2. Add the relevant group:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;3. Assign permission to the group:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In &lt;STRONG&gt;conclusion&lt;/STRONG&gt;, implementing strong access controls to authenticate and authorize users is a crucial security design principle for any organization using Microsoft Fabric. Understanding the different components and layers of the platform can aid in the configuration of authentication and authorization options, such as Entra ID and Conditional Access policies.&lt;/P&gt;
&lt;UL type="disc"&gt;
&lt;LI&gt;Conditional Access provides an additional layer of security and requires Microsoft Entra ID P1 licenses.&lt;/LI&gt;
&lt;LI&gt;Authorization at the workspace level is regulated via workspace roles, which can be assigned to users or groups.&lt;/LI&gt;
&lt;LI&gt;It is recommended that data owners assign workspace administrators, who then govern access to the workspace by assigning appropriate roles to users and consumers of the items.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Additional reference: an end-to-end security overview for Fabric can be found &lt;A href="https://learn.microsoft.com/en-us/fabric/security/white-paper-landing-page" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Assaf &amp;amp; Inbal.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Mon, 19 Aug 2024 11:50:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/securing-microsoft-fabric-user-authentication-amp-authorization/ba-p/4210273</guid>
      <dc:creator>inbalsilis</dc:creator>
      <dc:date>2024-08-19T11:50:18Z</dc:date>
    </item>
    <item>
      <title>Recover Multiple VMs from Azure Backup in Less Time</title>
      <link>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/recover-multiple-vms-from-azure-backup-in-less-time/ba-p/4208530</link>
      <description>&lt;P class="lia-align-left"&gt;In the dynamic world of cloud computing, time is often a critical factor, especially when it comes to recovering from disasters like ransomware attacks or rolling back after a problematic security update. Imagine waking up to find your entire set of Azure VMs compromised by ransomware or discovering that a recent security update has left your systems inoperable. The clock starts ticking, and the longer it takes to restore your VMs, the greater the impact on your business.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;The Problem&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/azure/backup/backup-overview" target="_self"&gt;Azure Backup&lt;/A&gt; is a robust solution for protecting your Azure VMs, providing peace of mind with its ability to create and manage backup policies, configure backup schedules, and perform reliable restores. However, the features available for Azure Backup in the Portal UI today only allow you to start the&amp;nbsp;&lt;A href="https://learn.microsoft.com/azure/backup/backup-support-matrix-iaas#supported-restore-methods" target="_self"&gt;restoration of individual VMs one at a time&lt;/A&gt;&amp;nbsp;in a sequence of repeated steps. Restoring many VMs manually through the Azure Portal can be extremely time-consuming and inefficient, especially when you need to restore an entire set of VMs quickly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Native Capabilities of Azure Backup&lt;/H2&gt;
&lt;P&gt;Azure Backup offers extensive features for protecting your VMs:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/backup/backup-architecture#backup-policy-essentials" target="_blank" rel="noopener"&gt;Automated backup schedules and retention policies&lt;/A&gt; - A backup policy can protect multiple Azure VMs consisting of a schedule (daily/weekly) and retention (daily, weekly, monthly, yearly).&lt;/LI&gt;
&lt;LI&gt;&lt;A style="font-family: inherit; background-color: #ffffff;" href="https://learn.microsoft.com/azure/backup/backup-create-recovery-services-vault#set-cross-region-restore" target="_blank" rel="noopener"&gt;Cross-region restore capabilities&lt;/A&gt;&lt;SPAN&gt; – Allows you to restore data in a secondary, Azure paired region. This can be useful to conduct BCDR drills or if there’s a disaster in the primary region.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A style="font-family: inherit; background-color: #ffffff;" href="https://learn.microsoft.com/azure/backup/security-overview" target="_blank" rel="noopener"&gt;Ransomware protection&lt;/A&gt;&lt;SPAN&gt; – Features such as irreversible soft-delete, immutable storage and multi-user authorization can be set at the vault level to safeguard backup data.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Despite these powerful features, there is currently no feature in the Azure Portal to start a batch restore of multiple VMs simultaneously. This limitation becomes a bottleneck in scenarios where speed and efficiency are paramount.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Az PowerShell Module – A Good Solution&lt;/H2&gt;
&lt;P&gt;To address this gap, we turn to PowerShell scripting, a versatile and powerful tool for managing Azure resources. Microsoft's &lt;A href="https://learn.microsoft.com/powershell/azure/what-is-azure-powershell" target="_blank" rel="noopener"&gt;Az PowerShell module&lt;/A&gt; provides a comprehensive suite of cmdlets to automate and manage Azure tasks, including VM backups and restores.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here's a practical approach: a PowerShell script that enables the parallel restoration of multiple VMs from Azure Backup. This script leverages &lt;A href="https://learn.microsoft.com/en-us/powershell/module/az.recoveryservices/restore-azrecoveryservicesbackupitem?view=azps-12.1.0" target="_blank" rel="noopener"&gt;Az.RecoveryServices&lt;/A&gt; to streamline the recovery process, significantly reducing the time required to get your systems back online.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Sample PowerShell Script&lt;/H3&gt;
&lt;P&gt;Below is a summary of how one example script works. You can find the full example &lt;A href="https://gist.github.com/sdolgin/9d873162f727939ba6e1bf7c939dfc44" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Define Variables:&lt;BR /&gt;The script starts by defining the necessary variables, including the resource group, recovery services vault, and cache storage account.&lt;BR /&gt;&lt;LI-CODE lang="powershell"&gt;$resourceGroup = "rg-webservers"
$recoveryServicesVault = "rsv-vmbackup"
$cacheStorageAccount = "unique626872"
$cacheStorageAccountResourceGroup = "rg-webservers"&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Get the Vault and Backup Container:&lt;BR /&gt;It then retrieves the &lt;A href="https://learn.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesvault" target="_blank" rel="noopener"&gt;Recovery Services Vault&lt;/A&gt; and &lt;A href="https://learn.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupcontainer" target="_blank" rel="noopener"&gt;Backup Container&lt;/A&gt; containing multiple VMs that need to be restored in parallel.&lt;BR /&gt;&lt;LI-CODE lang="powershell"&gt;$vault = Get-AzRecoveryServicesVault -ResourceGroupName $resourceGroup -Name $recoveryServicesVault
$container = Get-AzRecoveryServicesBackupContainer -ContainerType "AzureVM" -VaultId $vault.ID&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Loop Through the Backup Items:&lt;BR /&gt;The script iterates over each backup item in the container, shutting each VM down and kicking off an in-place restore from its latest recovery point. The restores run in parallel as PowerShell jobs, collected in $jobs.&lt;BR /&gt;&lt;LI-CODE lang="powershell"&gt;# Initialize the collection that tracks the parallel restore jobs
$jobs = @()

foreach ($item in $container)
{
    # Write out the Backup Item Name
    Write-Host "Backup Item Name: $($item.FriendlyName)"

    # Get the Backup Item from the Vault
    $backupItem = Get-AzRecoveryServicesBackupItem -BackupManagementType "AzureVM" -WorkloadType "AzureVM" -VaultId $vault.ID -Name $item.Name

    # Get the resource group &amp;amp; VM name for this backupItem
    $vmResourceGroup = $backupItem.VirtualMachineId.Split('/')[4]
    $vmName = $backupItem.VirtualMachineId.Split('/')[8]

    # Power off the protected VM before restoring over it
    # (-SkipShutdown forces an immediate power-off without a graceful guest OS shutdown; -NoWait returns without waiting)
    Write-Host "Stopping VM: $vmName in Resource Group: $vmResourceGroup before restoring"
    Stop-AzVM -ResourceGroupName $vmResourceGroup -Name $vmName -Force -SkipShutdown -NoWait
   
    # Get the recovery points for the backup item from the last 7 days
    $recoveryPoints = Get-AzRecoveryServicesBackupRecoveryPoint -Item $backupItem -VaultId $vault.ID -StartDate (Get-Date).AddDays(-7).ToUniversalTime() -EndDate (Get-Date).ToUniversalTime()

    # Write details about the latest recovery point
    Write-Host "Found $($recoveryPoints.Count) Recovery Points for the Backup Item"
    Write-Host "Latest Recovery Point Time: $($recoveryPoints[0].RecoveryPointTime)"

    # Recovery points are returned newest first, so index 0 is the latest
    $recoveryPointId = $recoveryPoints[0].RecoveryPointId

    # Restore the Azure VM from the latest recovery point to the original location (replace the source VM)
    # To speed up the process we run the VM restores in parallel using PowerShell jobs
    Write-Host "Restoring $($item.FriendlyName) to the original location"
    $job = Start-Job -ScriptBlock {
        param ($recoveryPointId, $backupItemName, $sourceVault, $vaultLocation, $cacheStorageAccount, $cacheStorageAccountResourceGroup)

        # Re-resolve the backup item and recovery point within the job,
        # since objects passed to background jobs arrive as deserialized copies
        $vault = Get-AzRecoveryServicesVault -ResourceGroupName $sourceVault.ResourceGroupName -Name $sourceVault.Name
        $backupItem = Get-AzRecoveryServicesBackupItem -BackupManagementType "AzureVM" -WorkloadType "AzureVM" -VaultId $vault.ID -Name $backupItemName
        $recoveryPoint = Get-AzRecoveryServicesBackupRecoveryPoint -Item $backupItem -VaultId $vault.ID | Where-Object { $_.RecoveryPointId -eq $recoveryPointId }

        Restore-AzRecoveryServicesBackupItem -RecoveryPoint $recoveryPoint -StorageAccountName $cacheStorageAccount -StorageAccountResourceGroupName $cacheStorageAccountResourceGroup -VaultId $vault.ID -VaultLocation $vaultLocation
    } -ArgumentList $recoveryPointId, $item.Name, $vault, $vault.Location, $cacheStorageAccount, $cacheStorageAccountResourceGroup

    # Store the job information
    $jobs += $job

    # Write out the Job ID
    Write-Host "Started Restore Job ID: $($job.Id)"
}&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/OL&gt;
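&lt;P&gt;Once the loop has started all the jobs, you will typically want to wait for the submissions to finish and then track the restore operations themselves in the vault. A follow-up sketch, assuming $jobs and $vault are still in scope from the loop above:&lt;/P&gt;

```powershell
# Wait for the local PowerShell jobs (which only submit the restores) to finish,
# and surface any output or errors they produced
$jobs | Wait-Job | Receive-Job

# The restore operations continue server-side in Azure;
# monitor their progress against the Recovery Services vault
Get-AzRecoveryServicesBackupJob -VaultId $vault.ID -Operation Restore -Status InProgress
```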
&lt;P&gt;By using this script, you can dramatically reduce the time required to restore multiple VMs, enhancing your ability to recover from critical incidents swiftly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;In this post, we've discussed the limitations in the Azure Portal for handling multiple VM restores and introduced a practical workaround using PowerShell scripting. This solution enables you to restore your VMs in parallel, significantly cutting down recovery time and minimizing business disruption.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We encourage you to modify the &lt;A href="https://gist.github.com/sdolgin/9d873162f727939ba6e1bf7c939dfc44" target="_self"&gt;sample script&lt;/A&gt;, try it out in your test environment and see the benefits for yourself. Your feedback is invaluable, so please share your experiences and let us know your thoughts on this approach. Together, we can continue to improve and innovate in the realm of Azure Business Continuity.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Call to Action&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Modify the sample script and try it out in your test environment for parallel VM restoration.&lt;/LI&gt;
&lt;LI&gt;Share your feedback and experiences with us.&lt;/LI&gt;
&lt;LI&gt;Stay tuned for more tips and tricks on maximizing your Azure capabilities.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Happy scripting, and may your recoveries be swift and seamless!&lt;/P&gt;</description>
      <pubDate>Sat, 03 Aug 2024 11:25:10 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/fasttrack-for-azure/recover-multiple-vms-from-azure-backup-in-less-time/ba-p/4208530</guid>
      <dc:creator>sdolgin</dc:creator>
      <dc:date>2024-08-03T11:25:10Z</dc:date>
    </item>
  </channel>
</rss>

