<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Azure Partners topics</title>
    <link>https://techcommunity.microsoft.com/t5/azure-partners/bd-p/AzurePartners</link>
    <description>Azure Partners topics</description>
    <pubDate>Thu, 30 Apr 2026 17:24:02 GMT</pubDate>
    <dc:creator>AzurePartners</dc:creator>
    <dc:date>2026-04-30T17:24:02Z</dc:date>
    <item>
      <title>MCA-E to CSP Direct move - as AE MSP how to do that?</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/mca-e-to-csp-direct-move-as-ae-msp-how-to-do-that/m-p/4495921#M240</link>
      <description />
      <pubDate>Thu, 19 Feb 2026 16:13:29 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/mca-e-to-csp-direct-move-as-ae-msp-how-to-do-that/m-p/4495921#M240</guid>
      <dc:creator>KartikShah</dc:creator>
      <dc:date>2026-02-19T16:13:29Z</dc:date>
    </item>
    <item>
      <title>ICYMI: 🚀 February Azure Update: What’s New for Partners</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/icymi-february-azure-update-what-s-new-for-partners/m-p/4494240#M238</link>
      <description>&lt;P&gt;From expanded&amp;nbsp;&lt;STRONG&gt;Microsoft Fabric&lt;/STRONG&gt; capabilities and new &lt;STRONG&gt;secure migration incentives&lt;/STRONG&gt;, to major &lt;STRONG&gt;skilling opportunities&lt;/STRONG&gt; and can’t‑miss &lt;STRONG&gt;Azure events&lt;/STRONG&gt;, this month’s Azure update is packed with ways to help you migrate, modernize, and lead with AI.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Highlights include:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;New &lt;STRONG&gt;Defender for Cloud&lt;/STRONG&gt; migration investments and incentives&lt;/LI&gt;
&lt;LI&gt;Expanded &lt;STRONG&gt;Microsoft Fabric&lt;/STRONG&gt; features, including Fabric IQ and OneLake integrations&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;FY26 migration incentives&lt;/STRONG&gt; for software development companies&lt;/LI&gt;
&lt;LI&gt;Upcoming events like &lt;STRONG&gt;Azure Summit&lt;/STRONG&gt;, &lt;STRONG&gt;FABCON/SQLCON&lt;/STRONG&gt;, and &lt;STRONG&gt;Cosmos DB Conference&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;New &lt;STRONG&gt;go‑to‑market resources&lt;/STRONG&gt; to drive SMB and enterprise growth&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;👉 Dive in to explore the latest updates designed to help you build smarter solutions, secure workloads, and unlock new partner opportunities.&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/partnernews/february-update-what%E2%80%99s-new-in-azure-for-partners/4494212" target="_blank" rel="noopener" data-lia-auto-title="Read the full February Azure Newsletter here" data-lia-auto-title-active="0"&gt;Read the full February Azure Newsletter here&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 11 Feb 2026 00:32:01 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/icymi-february-azure-update-what-s-new-for-partners/m-p/4494240#M238</guid>
      <dc:creator>JillArmourMicrosoft</dc:creator>
      <dc:date>2026-02-11T00:32:01Z</dc:date>
    </item>
    <item>
      <title>Azure monthly newsletters are now on the Partner News blog!</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/azure-monthly-newsletters-are-now-on-the-partner-news-blog/m-p/4478565#M237</link>
      <description>&lt;P&gt;Go to our &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/category/partnercommunity/blog/partnernews" target="_blank" rel="noopener" data-lia-auto-title="Partner news blog" data-lia-auto-title-active="0"&gt;Partner news blog&lt;/A&gt; and click the tag "&lt;A class="lia-internal-link" href="https://techcommunity.microsoft.com/tag/azure%20news?nodeId=board%3APartnerNews" target="_blank" rel="noopener" data-lia-auto-title="Azure News" data-lia-auto-title-active="0"&gt;Azure News&lt;/A&gt;" to catch up on all our past monthly Azure newsletters. You can click the &lt;STRONG&gt;follow&lt;/STRONG&gt; button in the top right corner to be notified when the next newsletter is released, so you never miss an update!&lt;/P&gt;
&lt;P&gt;Come on in and join the conversation!&lt;/P&gt;
&lt;P&gt;-jill&lt;/P&gt;</description>
      <pubDate>Wed, 17 Dec 2025 00:52:21 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/azure-monthly-newsletters-are-now-on-the-partner-news-blog/m-p/4478565#M237</guid>
      <dc:creator>JillArmourMicrosoft</dc:creator>
      <dc:date>2025-12-17T00:52:21Z</dc:date>
    </item>
    <item>
      <title>Building Resilient Data Systems with Microsoft Fabric</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/building-resilient-data-systems-with-microsoft-fabric/m-p/4410736#M234</link>
      <description>&lt;H2&gt;Introduction&lt;/H2&gt;
&lt;P&gt;Ensuring continuous availability and data integrity is paramount for organizations. This article focuses exclusively on resiliency within Microsoft Fabric, covering high availability (HA), disaster recovery (DR), and data protection strategies. We will explore Microsoft Fabric's resiliency features, including Recovery Point Objective (RPO) and Recovery Time Objective (RTO), and outline mechanisms for recovering from failures in both pipeline and streaming scenarios.&lt;/P&gt;
&lt;P&gt;As of April 25, 2025, this information reflects the current capabilities of Microsoft Fabric. Because features evolve rapidly, consult the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/fabric/release-plan/" target="_blank" rel="noopener"&gt;Microsoft Fabric roadmap&lt;/A&gt; for the latest updates.&lt;/P&gt;
&lt;H2&gt;Service Resiliency in Microsoft Fabric&lt;/H2&gt;
&lt;P&gt;Microsoft Fabric leverages Azure’s infrastructure to ensure continuous service availability during hardware or software failures.&lt;/P&gt;
&lt;H3&gt;Availability Zones&lt;/H3&gt;
&lt;P&gt;Fabric uses Azure Availability Zones—physically separate datacenters within an Azure region—to automatically replicate resources across zones. This enables seamless failover during a zone outage, without manual intervention. As of Q1 2025, Fabric provides partial support for zone redundancy in &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/reliability/reliability-fabric#supported-regions" target="_blank" rel="noopener"&gt;selected regions&lt;/A&gt; and services. Customers should refer to service-specific documentation for detailed HA guarantees.&lt;/P&gt;
&lt;H3&gt;Cross‑Region Disaster Recovery&lt;/H3&gt;
&lt;P&gt;For protection against regional failures, Microsoft Fabric offers partial support for cross-region disaster recovery. The level of support varies by service:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;OneLake Data&lt;/STRONG&gt;: OneLake supports cross-region data replication in selected regions. Organizations can enable or disable this feature based on their business needs. For more information, see&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/fabric/onelake/onelake-disaster-recovery" target="_blank" rel="noopener"&gt;Disaster recovery and data protection for OneLake&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Power BI&lt;/STRONG&gt;: Power BI includes built-in DR capabilities, with automatic data replication across regions to ensure high availability. For frequently asked questions, review the &lt;A class="lia-external-url" href="https://learn.microsoft.com/power-bi/admin/service-security-high-availability-failover-disaster-recovery-faq" target="_blank" rel="noopener"&gt;Power BI high availability, failover, and disaster recovery FAQ&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Data Resiliency: RPO and RTO Considerations&lt;/H2&gt;
&lt;P&gt;Fabric offers configurable storage redundancy options—Locally Redundant Storage (LRS), Zone-Redundant Storage (ZRS), and Geo-Redundant Storage (GRS)—each with different RPO/RTO targets. Detailed definitions and SLAs are available in the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-gb/azure/storage/common/storage-redundancy" target="_blank" rel="noopener"&gt;Azure Storage redundancy documentation&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;Recovering from Failed Processes&lt;/H2&gt;
&lt;P&gt;Failures can occur in both pipeline and streaming workloads. Microsoft Fabric provides tools and strategies for minimizing disruption.&lt;/P&gt;
&lt;H3&gt;Data Pipelines&lt;/H3&gt;
&lt;P&gt;In Data Factory within Fabric, pipelines are made up of activities that may fail due to source issues or transient network errors. Zone failures are typically handled like standard pipeline errors, while regional failures require manual intervention. See &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/fabric/security/experience-specific-guidance#data-factory" target="_blank" rel="noopener"&gt;Microsoft Fabric disaster recovery experience specific guidance&lt;/A&gt; for a brief discussion.&lt;/P&gt;
&lt;P&gt;Pipeline resiliency can be improved by implementing retry policies, configuring error-handling blocks, and monitoring execution status using Fabric’s built-in logging features.&lt;/P&gt;
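As a concrete illustration of an activity-level retry policy, the JSON fragment below follows the activity policy schema that Fabric pipelines share with Azure Data Factory; the activity name and values are placeholders, not taken from this article:

```json
{
  "name": "CopyFromSource",
  "type": "Copy",
  "policy": {
    "timeout": "0.02:00:00",
    "retry": 3,
    "retryIntervalInSeconds": 60
  }
}
```

With a policy like this, a transient source or network error triggers up to three automatic re-runs of the activity before the pipeline marks it as failed.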
&lt;H3&gt;Streaming Scenarios&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Spark Structured Streaming&lt;/STRONG&gt;: Fabric leverages Apache Spark for real-time processing. Spark Structured Streaming includes built-in checkpointing, but seamless failover depends on cluster configuration. Manual intervention can be required to resume tasks after node or regional failures.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Eventstream&lt;/STRONG&gt;: Eventstream simplifies streaming data ingestion, but users should currently assume manual steps may be needed for fault recovery.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Monitoring and Alerting&lt;/H2&gt;
&lt;P&gt;Microsoft Fabric integrates with tools such as Azure Monitor and Microsoft Defender for Cloud, allowing administrators to track availability metrics and configure alerts. Regular monitoring helps detect anomalies early and ensures that resiliency strategies remain effective.&lt;/P&gt;
&lt;H2&gt;Data Loss Prevention (DLP)&lt;/H2&gt;
&lt;P&gt;As of March 2025, Microsoft Purview extends DLP policy enforcement to Fabric and Power BI workspaces. Organizations can define policies to automatically identify, monitor, and protect sensitive data across the Microsoft ecosystem. For more information, review&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/fabric/data-governance/data-loss-prevention-overview" target="_blank" rel="noopener"&gt;Purview Data Loss Prevention&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;Cost Considerations&lt;/H2&gt;
&lt;P&gt;Enhancing resiliency can increase costs. Key considerations include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Geo-Redundancy&lt;/STRONG&gt;: While cross-region replication improves resiliency, it also increases storage and transfer costs. Assess which workloads require GRS based on criticality.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Egress Charges&lt;/STRONG&gt;: Transferring data across regions can generate egress fees. Co-locating compute and storage within the same region helps minimize these charges.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Pipeline CU Consumption&lt;/STRONG&gt;: Data movement and orchestration in Fabric consume Capacity Units (CUs). Cross-region data movement may take longer and result in higher CU usage. &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/fabric/data-factory/pricing-pipelines#pricing-model" target="_blank" rel="noopener"&gt;Understanding these costs&lt;/A&gt; helps optimize both performance and budget.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Enabling Disaster Recovery for Fabric Capacities&lt;/H2&gt;
&lt;P&gt;Disaster recovery must be enabled per Fabric capacity through the Admin Portal, so make sure to enable it for each capacity that requires protection. For setup details, see&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-gb/fabric/admin/capacity-settings?tabs=power-bi-premium#capacity-settings" target="_blank" rel="noopener"&gt;Manage your Fabric capacity&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Microsoft Fabric offers a robust set of features for building resilient data systems. By leveraging its high availability, disaster recovery, and monitoring capabilities—and aligning them with cost-aware planning—organizations can ensure operational continuity and safeguard critical data.&lt;/P&gt;
&lt;P&gt;For ongoing updates, monitor the &lt;A class="lia-external-url" href="https://learn.microsoft.com/fabric" target="_blank" rel="noopener"&gt;Microsoft Fabric documentation&lt;/A&gt; and consider subscribing to the Fabric blog for the latest announcements.&lt;/P&gt;</description>
      <pubDate>Wed, 07 May 2025 09:24:24 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/building-resilient-data-systems-with-microsoft-fabric/m-p/4410736#M234</guid>
      <dc:creator>AssafFraenkel</dc:creator>
      <dc:date>2025-05-07T09:24:24Z</dc:date>
    </item>
    <item>
      <title>Four Methods to Access Azure Key Vault from Azure Kubernetes Service (AKS)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/four-methods-to-access-azure-key-vault-from-azure-kubernetes/m-p/4376662#M228</link>
      <description>&lt;P&gt;In this article, we will explore various methods that an application hosted on Azure Kubernetes Service (AKS) can use to retrieve secrets from an&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/key-vault/general/basic-concepts" target="_blank" rel="noopener"&gt;Azure Key Vault&lt;/A&gt; resource. You can find all the scripts on &lt;A class="lia-external-url" href="https://github.com/paolosalvatori/aks-key-vault" target="_blank"&gt;GitHub&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;Microsoft Entra Workload ID with Azure Kubernetes Service (AKS)&lt;/H2&gt;
&lt;P&gt;In order for workloads deployed on an&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/what-is-aks" target="_blank" rel="noopener"&gt;Azure Kubernetes Services (AKS)&lt;/A&gt;&amp;nbsp;cluster to access protected resources like&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/key-vault/general/basic-concepts" target="_blank" rel="noopener"&gt;Azure Key Vault&lt;/A&gt;&amp;nbsp;and Microsoft Graph, they need to have Microsoft Entra application credentials or managed identities.&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview" target="_blank" rel="noopener"&gt;Microsoft Entra Workload ID&lt;/A&gt;&amp;nbsp;integrates with Kubernetes to federate with external identity providers.&lt;/P&gt;
&lt;P&gt;To enable pods to have a Kubernetes identity,&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview" target="_blank" rel="noopener"&gt;Microsoft Entra Workload ID&lt;/A&gt;&amp;nbsp;utilizes Service Account Token Volume Projection: the cluster issues a service account token to the pod, and, through OIDC federation, the application exchanges that token for a Microsoft Entra token to securely access Azure resources, with the mapping driven by service account annotations.&lt;/P&gt;
&lt;P&gt;In this model, the Kubernetes cluster becomes a security token issuer, issuing tokens to Kubernetes Service Accounts. These tokens can be configured to be trusted by Microsoft Entra applications and user-defined managed identities. They can then be exchanged for a Microsoft Entra access token using the&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/dotnet/api/overview/azure/identity-readme" target="_blank" rel="noopener"&gt;Azure Identity SDKs&lt;/A&gt;&amp;nbsp;or the&amp;nbsp;&lt;A href="https://github.com/AzureAD/microsoft-authentication-library-for-dotnet" target="_blank" rel="noopener"&gt;Microsoft Authentication Library (MSAL)&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;In the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/entra/fundamentals/whatis" target="_blank" rel="noopener"&gt;Microsoft Entra ID&lt;/A&gt;&amp;nbsp;platform, there are two kinds of workload identities:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/identity-platform/application-model" target="_blank" rel="noopener"&gt;Registered applications&lt;/A&gt;&amp;nbsp;have several powerful features, such as multi-tenancy and user sign-in. These capabilities cause application identities to be closely guarded by administrators. For more information on how to implement workload identity federation with registered applications, see&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/t5/fasttrack-for-azure/use-azure-ad-workload-identity-for-kubernetes-in-a-net-standard/ba-p/3576218" target="_blank" rel="noopener"&gt;Use Microsoft Entra Workload Identity for Kubernetes with a User-Assigned Managed Identity&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview" target="_blank" rel="noopener"&gt;Managed identities&lt;/A&gt;&amp;nbsp;provide an automatically managed identity in Microsoft Entra ID for applications to use when connecting to resources that support Microsoft Entra ID authentication. Applications can use managed identities to obtain Microsoft Entra tokens without having to manage any credentials. Managed identities were built with developer scenarios in mind. They support only the Client Credentials flow meant for software workloads to identify themselves when accessing other resources. For more information on how to implement workload identity federation with registered applications, see&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/blog/fasttrackforazureblog/use-azure-ad-workload-identity-for-kubernetes-with-a-user-assigned-managed-ident/3654928#M270" target="_blank" rel="noopener"&gt;Use Azure AD Workload Identity for Kubernetes with a User-Assigned Managed Identity&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
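To make the token-exchange flow concrete, here is a minimal sketch of what a workload-identity-enabled pod sees at runtime. The AZURE_FEDERATED_TOKEN_FILE environment variable is one of those injected by the workload identity mutating webhook (alongside AZURE_CLIENT_ID and AZURE_TENANT_ID); the token content below is a placeholder, whereas the real projected token is a signed JWT issued by the cluster:

```shell
# Simulate the projected service-account token that the workload identity
# webhook mounts into the pod; mktemp stands in for the real projected
# volume path (typically under /var/run/secrets/azure/tokens/).
AZURE_FEDERATED_TOKEN_FILE="$(mktemp)"
printf 'header.payload.signature' > "${AZURE_FEDERATED_TOKEN_FILE}"

# An Azure Identity credential (e.g. WorkloadIdentityCredential) reads this
# file and exchanges the JWT with Microsoft Entra ID for an access token.
TOKEN="$(cat "${AZURE_FEDERATED_TOKEN_FILE}")"
echo "token segments: $(echo "${TOKEN}" | awk -F. '{print NF}')"
```

The application never handles a client secret; the only credential material in the pod is the short-lived projected token.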
&lt;H3&gt;Advantages&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Transparently assigns a user-defined managed identity to a pod or deployment.&lt;/LI&gt;
&lt;LI&gt;Allows using Microsoft Entra integrated security and Azure RBAC for authorization.&lt;/LI&gt;
&lt;LI&gt;Provides secure access to Azure Key Vault and other managed services.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Disadvantages&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Requires using Azure libraries for acquiring Azure credentials and using them to access managed services.&lt;/LI&gt;
&lt;LI&gt;Requires code changes.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Resources&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview" target="_blank" rel="noopener"&gt;Use Microsoft Entra Workload ID with Azure Kubernetes Service (AKS)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-deploy-cluster" target="_blank" rel="noopener"&gt;Deploy and Configure an AKS Cluster with Workload Identity&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-cross-tenant" target="_blank" rel="noopener"&gt;Configure Cross-Tenant Workload Identity on AKS&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/paolosalvatori/azure-ad-workload-identity-mi" target="_blank" rel="noopener"&gt;Use Microsoft Entra Workload ID with a User-Assigned Managed Identity in an AKS-hosted .NET Application&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Azure Key Vault Provider for Secrets Store CSI Driver in AKS&lt;/H2&gt;
&lt;P&gt;The&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver" target="_blank" rel="noopener"&gt;Azure Key Vault provider for Secrets Store CSI Driver&lt;/A&gt;&amp;nbsp;enables retrieving secrets, keys, and certificates stored in Azure Key Vault and accessing them as files from mounted volumes in an AKS cluster. This method eliminates the need for Azure-specific libraries to access the secrets.&lt;/P&gt;
&lt;P&gt;This&amp;nbsp;&lt;A href="https://github.com/Azure/secrets-store-csi-driver-provider-azure" target="_blank" rel="noopener"&gt;Secret Store CSI Driver for Key Vault&lt;/A&gt;&amp;nbsp;offers the following features:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Mounts secrets, keys, and certificates to a pod using a CSI volume.&lt;/LI&gt;
&lt;LI&gt;Supports CSI inline volumes.&lt;/LI&gt;
&lt;LI&gt;Allows the mounting of multiple secrets store objects as a single volume.&lt;/LI&gt;
&lt;LI&gt;Offers pod portability with the SecretProviderClass CRD.&lt;/LI&gt;
&lt;LI&gt;Compatible with Windows containers.&lt;/LI&gt;
&lt;LI&gt;Keeps in sync with Kubernetes secrets.&lt;/LI&gt;
&lt;LI&gt;Supports auto-rotation of mounted contents and synced Kubernetes secrets.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;When auto-rotation is enabled for the Azure Key Vault Secrets Provider, it automatically updates both the pod mount and the corresponding Kubernetes secret defined in the&amp;nbsp;&lt;STRONG&gt;secretObjects&lt;/STRONG&gt;&amp;nbsp;field of SecretProviderClass. It continuously polls for changes based on the rotation poll interval (default is two minutes).&lt;/P&gt;
&lt;P&gt;If a secret in an external secrets store is updated after the initial deployment of the pod, both the Kubernetes Secret and the pod mount will periodically update, depending on how the application consumes the secret data. Here are the recommended approaches for different scenarios:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Mount the Kubernetes Secret as a volume: Utilize the auto-rotation and sync K8s secrets features of Secrets Store CSI Driver. The application should monitor changes from the mounted Kubernetes Secret volume. When the CSI Driver updates the Kubernetes Secret, the volume contents will be automatically updated.&lt;/LI&gt;
&lt;LI&gt;Application reads data from the container filesystem: Take advantage of the rotation feature of Secrets Store CSI Driver. The application should monitor file changes from the volume mounted by the CSI driver.&lt;/LI&gt;
&lt;LI&gt;Use the Kubernetes Secret for an environment variable: Restart the pod to acquire the latest secret as an environment variable. You can use tools like Reloader to watch for changes on the synced Kubernetes Secret and perform rolling upgrades on pods.&lt;/LI&gt;
&lt;/OL&gt;
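To illustrate the file-based consumption model described above, the sketch below simulates the CSI driver's mounted volume with a temporary directory; in a real pod the path would be the volumeMount path declared in the pod spec (often /mnt/secrets-store), and the file name would match an object defined in the SecretProviderClass:

```shell
# Stand-in for the CSI driver's mounted secrets volume (assumption: the
# real mount path comes from the pod spec's volumeMount).
MOUNT_DIR="$(mktemp -d)"
printf 'trustno1!' > "${MOUNT_DIR}/password"

# The application consumes the secret as an ordinary file -- no Azure SDK
# or Key Vault client library is required in the application code.
PASSWORD="$(cat "${MOUNT_DIR}/password")"
echo "read secret of length ${#PASSWORD}"
```

With auto-rotation enabled, re-reading the same file after the rotation poll interval picks up the updated value.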
&lt;H3&gt;Advantages&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Secrets, keys, and certificates can be accessed as files from mounted volumes.&lt;/LI&gt;
&lt;LI&gt;Optionally, Kubernetes secrets can be created to store keys, secrets, and certificates from Key Vault.&lt;/LI&gt;
&lt;LI&gt;No need for Azure-specific libraries to access secrets.&lt;/LI&gt;
&lt;LI&gt;Simplifies secret management with transparent integration.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Disadvantages&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Still requires accessing managed services such as Azure Service Bus or Azure Storage using their own connection strings from Azure Key Vault.&lt;/LI&gt;
&lt;LI&gt;Cannot utilize Microsoft Entra ID integrated security and managed identities for accessing managed services.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Resources&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver" target="_blank" rel="noopener"&gt;Using the Azure Key Vault Provider for Secrets Store CSI Driver in AKS&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-identity-access?tabs=azure-portal&amp;amp;pivots=access-with-service-connector" target="_blank" rel="noopener"&gt;Access Azure Key Vault with the CSI Driver Identity Provider&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-configuration-options" target="_blank" rel="noopener"&gt;Configuration and Troubleshooting Options for Azure Key Vault Provider in AKS&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/Azure/secrets-store-csi-driver-provider-azure" target="_blank" rel="noopener"&gt;Azure Key Vault Provider for Secrets Store CSI Driver&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Dapr Secret Store for Key Vault&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://docs.dapr.io/concepts/overview/" target="_blank" rel="noopener"&gt;Dapr (Distributed Application Runtime)&lt;/A&gt;&amp;nbsp;is a versatile and event-driven runtime that simplifies the development of resilient, stateless, and stateful applications for both cloud and edge environments. It embraces the diversity of programming languages and developer frameworks, providing a seamless experience regardless of your preferences. Dapr encapsulates the best practices for building microservices into a set of open and independent APIs known as building blocks. These building blocks offer the following capabilities:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Enable developers to build portable applications using their preferred language and framework.&lt;/LI&gt;
&lt;LI&gt;Are completely independent from each other, allowing flexibility and freedom of choice.&lt;/LI&gt;
&lt;LI&gt;Have no limits on how many building blocks can be used within an application.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Dapr offers a built-in&amp;nbsp;&lt;A href="https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/" target="_blank" rel="noopener"&gt;secrets building block&lt;/A&gt;&amp;nbsp;that makes it easier for developers to consume application secrets from a secret store such as Azure Key Vault, AWS Secrets Manager, Google Key Management, and HashiCorp Vault.&lt;/P&gt;
&lt;P&gt;You can follow these steps to use Dapr's secret store building block:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Deploy the Dapr extension to your AKS cluster.&lt;/LI&gt;
&lt;LI&gt;Set up a component for a specific secret store solution.&lt;/LI&gt;
&lt;LI&gt;Retrieve secrets using the Dapr secrets API in your application code.&lt;/LI&gt;
&lt;LI&gt;Optionally, reference secrets in Dapr component files.&lt;/LI&gt;
&lt;/OL&gt;
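The retrieval in step 3 is a plain HTTP call against the Dapr sidecar. The sketch below only builds the request URL; the component name azurekeyvault is an assumption about how the secret store component is named, and the curl call in the comment would only succeed inside a pod with the sidecar injected:

```shell
# The sidecar listens on localhost; DAPR_HTTP_PORT is injected by Dapr
# (3500 is the conventional default).
DAPR_HTTP_PORT="${DAPR_HTTP_PORT:-3500}"
STORE="azurekeyvault"   # secret store component name (assumption)
SECRET="password"       # name of the secret in Azure Key Vault

URL="http://localhost:${DAPR_HTTP_PORT}/v1.0/secrets/${STORE}/${SECRET}"
echo "${URL}"
# Inside a Dapr-enabled pod: curl -s "${URL}" returns a JSON object
# keyed by the secret name.
```

Because the call goes to the local sidecar, the application code needs no Azure credentials or SDK; the component configuration decides how Dapr authenticates to Key Vault.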
&lt;P&gt;You can watch&amp;nbsp;&lt;A href="https://www.youtube.com/live/0y7ne6teHT4?si=3bmNSSyIEIVSF-Ej&amp;amp;t=9931" target="_blank" rel="noopener"&gt;this overview video and demo&lt;/A&gt;&amp;nbsp;to see how Dapr secrets management works.&lt;/P&gt;
&lt;P&gt;The secrets management API building block offers several features for your application.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Configure secrets without changing application code&lt;/STRONG&gt;: You can call the secrets API in your application code to retrieve and use secrets from Dapr-supported secret stores. Watch&amp;nbsp;&lt;A href="https://www.youtube.com/watch?v=OtbYCBt9C34&amp;amp;t=1818" target="_blank" rel="noopener"&gt;this video&lt;/A&gt;&amp;nbsp;for an example of how the secrets management API can be used in your application.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Reference secret stores in Dapr components&lt;/STRONG&gt;: When configuring Dapr components like state stores, you often need to include credentials in component files. Alternatively, you can place the credentials within a Dapr-supported secret store and reference the secret within the Dapr component. This approach is recommended, especially in production environments. Read more about&amp;nbsp;&lt;A href="https://docs.dapr.io/operations/components/component-secrets/" target="_blank" rel="noopener"&gt;referencing secret stores in components&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Limit access to secrets&lt;/STRONG&gt;: Dapr provides the ability to define scopes and restrict access permissions to provide more granular control over access to secrets. Learn more about&amp;nbsp;&lt;A href="https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-scopes/" target="_blank" rel="noopener"&gt;using secret scoping&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Advantages&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Allows applications to retrieve secrets from various secret stores, including Azure Key Vault.&lt;/LI&gt;
&lt;LI&gt;Simplifies secret management with Dapr's consistent API.&lt;/LI&gt;
&lt;LI&gt;Supports Azure Key Vault integration with managed identities.&lt;/LI&gt;
&lt;LI&gt;Supports third-party secret stores, such as Azure Key Vault, AWS Secrets Manager, Google Key Management, and HashiCorp Vault.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Disadvantages&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Requires injecting a sidecar container for Dapr into the pod, which may not be suitable for all scenarios.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Resources&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/" target="_blank" rel="noopener"&gt;Dapr Secrets Overview&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.dapr.io/reference/components-reference/supported-secret-stores/azure-keyvault/" target="_blank" rel="noopener"&gt;Azure Key Vault Secret Store in Dapr&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.dapr.io/getting-started/quickstarts/secrets-quickstart/" target="_blank" rel="noopener"&gt;Secrets management quickstart&lt;/A&gt;: Retrieve secrets in the application code from a configured secret store using the secrets management API.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/dapr/quickstarts/tree/master/tutorials/secretstore" target="_blank" rel="noopener"&gt;Secret Store tutorial&lt;/A&gt;: Learn how to use the Dapr Secrets API to access secret stores.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.dapr.io/developing-applications/integrations/azure/azure-authentication/authenticating-azure/" target="_blank" rel="noopener"&gt;Authenticating to Azure for Dapr&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.dapr.io/developing-applications/integrations/azure/azure-authentication/howto-mi/" target="_blank" rel="noopener"&gt;How-to Guide for Managed Identities with Dapr&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;External Secrets Operator with Azure Key Vault&lt;/H2&gt;
&lt;P&gt;The&amp;nbsp;&lt;A href="https://external-secrets.io/latest/" target="_blank" rel="noopener"&gt;External Secrets Operator&lt;/A&gt;&amp;nbsp;is a Kubernetes operator that enables managing secrets stored in external secret stores, such as Azure Key Vault, AWS Secret Manager, and Google Key Management, and Hashicorp Vault.. It leverages the Azure Key Vault provider to synchronize secrets into Kubernetes secrets for easy consumption by applications. External Secrets Operator integrates with&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/key-vault/" target="_blank" rel="noopener"&gt;Azure Key vault&lt;/A&gt;&amp;nbsp;for secrets, certificates and Keys management.&lt;/P&gt;
&lt;P&gt;You can configure the&amp;nbsp;&lt;A href="https://external-secrets.io/latest/" target="_blank" rel="noopener"&gt;External Secrets Operator&lt;/A&gt;&amp;nbsp;to use&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview" target="_blank" rel="noopener"&gt;Microsoft Entra Workload ID&lt;/A&gt;&amp;nbsp;to access an&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/key-vault/general/basic-concepts" target="_blank" rel="noopener"&gt;Azure Key Vault&lt;/A&gt;&amp;nbsp;resource.&lt;/P&gt;
&lt;H3&gt;Advantages&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Manages secrets stored in external secret stores such as Azure Key Vault, AWS Secrets Manager, Google Secret Manager, HashiCorp Vault, and more.&lt;/LI&gt;
&lt;LI&gt;Provides synchronization of Key Vault secrets into Kubernetes secrets.&lt;/LI&gt;
&lt;LI&gt;Simplifies secret management with Kubernetes-native integration.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Disadvantages&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Requires setting up and managing the External Secrets Operator.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Resources&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://external-secrets.io/latest/" target="_blank" rel="noopener"&gt;External Secrets Operator&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://external-secrets.io/latest/provider/azure-key-vault/" target="_blank" rel="noopener"&gt;Azure Key Vault Provider for External Secrets Operator&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
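As a minimal sketch of how the operator is configured (all names and the vault URL below are illustrative, not from this article), a `SecretStore` pointing at a Key Vault via workload identity and an `ExternalSecret` that syncs one secret might look like this. The script only prints the manifests so you can review them before piping the output to `kubectl apply -f -`.

```shell
#!/bin/bash
# Illustrative placeholders; replace with your own resource names.
KEY_VAULT_NAME="my-key-vault"
SERVICE_ACCOUNT_NAME="workload-identity-sa"
NAMESPACE="external-secrets-test"

# Generate the manifests; pipe the output to "kubectl apply -f -" to create them.
MANIFESTS=$(cat <<EOF
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: azure-store
  namespace: $NAMESPACE
spec:
  provider:
    azurekv:
      authType: WorkloadIdentity
      vaultUrl: https://$KEY_VAULT_NAME.vault.azure.net
      serviceAccountRef:
        name: $SERVICE_ACCOUNT_NAME
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: username-external-secret
  namespace: $NAMESPACE
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: azure-store
  target:
    name: username-secret   # Kubernetes secret created by the operator
  data:
  - secretKey: username     # key inside the Kubernetes secret
    remoteRef:
      key: username         # secret name in Azure Key Vault
EOF
)
echo "$MANIFESTS"
```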
&lt;H2&gt;Hands-On Labs&lt;/H2&gt;
&lt;P&gt;You are now ready to see each technique in action.&lt;/P&gt;
&lt;H3&gt;Configure Variables&lt;/H3&gt;
&lt;P&gt;The first step is setting the names of a new or existing AKS cluster and Azure Key Vault resource in the&amp;nbsp;scripts/00-variables.sh&amp;nbsp;file, which is included and used by all the scripts in this sample.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Azure Kubernetes Service (AKS)
AKS_NAME="&amp;lt;AKS-Cluster-Name&amp;gt;"
AKS_RESOURCE_GROUP_NAME="&amp;lt;AKS-Resource-Group-Name&amp;gt;"

# Azure Key Vault
KEY_VAULT_NAME="&amp;lt;Key-Vault-name&amp;gt;"
KEY_VAULT_RESOURCE_GROUP_NAME="&amp;lt;Key-Vault-Resource-Group-Name&amp;gt;"
KEY_VAULT_SKU="Standard"
LOCATION="EastUS" # Choose a location

# Secrets and Values 
SECRETS=("username" "password")
VALUES=("admin" "trustno1!")

# Azure Subscription and Tenant
TENANT_ID=$(az account show --query tenantId --output tsv)
SUBSCRIPTION_NAME=$(az account show --query name --output tsv)
SUBSCRIPTION_ID=$(az account show --query id --output tsv)&lt;/LI-CODE&gt;
&lt;P&gt;The&amp;nbsp;SECRETS&amp;nbsp;array variable contains a list of secrets to create in the Azure Key Vault resource, while the&amp;nbsp;VALUES&amp;nbsp;array contains their values.&lt;/P&gt;
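The two arrays are paired positionally, so the scripts iterate over the indices rather than the values. A minimal sketch of that pattern:

```shell
#!/bin/bash
SECRETS=("username" "password")
VALUES=("admin" "trustno1!")

# ${!SECRETS[@]} expands to the indices (0 1), pairing each secret name
# with the value at the same position in VALUES.
for INDEX in "${!SECRETS[@]}"; do
  echo "secret=${SECRETS[$INDEX]} value=${VALUES[$INDEX]}"
done
```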
&lt;H3&gt;Create or Update AKS Cluster&lt;/H3&gt;
&lt;P&gt;You can use the following Bash script to create a new AKS cluster with the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create" target="_blank" rel="noopener"&gt;az aks create&lt;/A&gt;&amp;nbsp;command. This script includes the&amp;nbsp;--enable-oidc-issuer&amp;nbsp;parameter to enable the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/use-oidc-issuer" target="_blank" rel="noopener"&gt;OpenID Connect (OIDC) issuer&lt;/A&gt;&amp;nbsp;and the&amp;nbsp;--enable-workload-identity&amp;nbsp;parameter to enable&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview" target="_blank" rel="noopener"&gt;Microsoft Entra Workload ID&lt;/A&gt;. If the AKS cluster already exists, the script updates it to use the OIDC issuer and enable workload identity by calling the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-update" target="_blank" rel="noopener"&gt;az aks update&lt;/A&gt;&amp;nbsp;command with the same parameters.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#!/bin/Bash

# Variables
source ../00-variables.sh

# Check if the resource group already exists
echo "Checking if [$AKS_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_NAME] subscription..."

az group show --name $AKS_RESOURCE_GROUP_NAME &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "No [$AKS_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_NAME] subscription"
  echo "Creating [$AKS_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_NAME] subscription..."

  # create the resource group
  az group create --name $AKS_RESOURCE_GROUP_NAME --location $LOCATION 1&amp;gt;/dev/null

  if [[ $? == 0 ]]; then
    echo "[$AKS_RESOURCE_GROUP_NAME] resource group successfully created in the [$SUBSCRIPTION_NAME] subscription"
  else
    echo "Failed to create [$AKS_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_NAME] subscription"
    exit
  fi
else
  echo "[$AKS_RESOURCE_GROUP_NAME] resource group already exists in the [$SUBSCRIPTION_NAME] subscription"
fi

# Check if the AKS cluster already exists
echo "Checking if [$AKS_NAME] AKS cluster actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group..."
az aks show \
  --name $AKS_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --only-show-errors &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "No [$AKS_NAME] AKS cluster actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"
  echo "Creating [$AKS_NAME] AKS cluster in the [$AKS_RESOURCE_GROUP_NAME] resource group..."

  # create the AKS cluster
  az aks create \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --location $LOCATION \
    --enable-oidc-issuer \
    --enable-workload-identity \
    --generate-ssh-keys \
    --only-show-errors &amp;amp;&amp;gt;/dev/null

  if [[ $? == 0 ]]; then
    echo "[$AKS_NAME] AKS cluster successfully created in the [$AKS_RESOURCE_GROUP_NAME] resource group"
  else
    echo "Failed to create [$AKS_NAME] AKS cluster in the [$AKS_RESOURCE_GROUP_NAME] resource group"
    exit
  fi
else
  echo "[$AKS_NAME] AKS cluster already exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"
  
  # Check if the OIDC issuer is enabled in the AKS cluster
  echo "Checking if the OIDC issuer is enabled in the [$AKS_NAME] AKS cluster..."
  oidcEnabled=$(az aks show \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --only-show-errors \
    --query oidcIssuerProfile.enabled \
    --output tsv)

  if [[ $oidcEnabled == "true" ]]; then
    echo "The OIDC issuer is already enabled in the [$AKS_NAME] AKS cluster"
  else
    echo "The OIDC issuer is not enabled in the [$AKS_NAME] AKS cluster"
  fi

  # Check if Workload Identity is enabled in the AKS cluster
  echo "Checking if Workload Identity is enabled in the [$AKS_NAME] AKS cluster..."
  workloadIdentityEnabled=$(az aks show \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --only-show-errors \
    --query securityProfile.workloadIdentity.enabled \
    --output tsv)

  if [[ $workloadIdentityEnabled == "true" ]]; then
    echo "Workload Identity is already enabled in the [$AKS_NAME] AKS cluster"
  else
    echo "Workload Identity is not enabled in the [$AKS_NAME] AKS cluster"
  fi

  # Enable OIDC issuer and Workload Identity
  if [[ $oidcEnabled == "true" &amp;amp;&amp;amp; $workloadIdentityEnabled == "true" ]]; then
    echo "OIDC issuer and Workload Identity are already enabled in the [$AKS_NAME] AKS cluster"
    exit
  fi

  echo "Enabling OIDC issuer and Workload Identity in the [$AKS_NAME] AKS cluster..."
  az aks update \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --enable-oidc-issuer \
    --enable-workload-identity \
    --only-show-errors

  if [[ $? == 0 ]]; then
    echo "OIDC issuer and Workload Identity successfully enabled in the [$AKS_NAME] AKS cluster"
  else
    echo "Failed to enable OIDC issuer and Workload Identity in the [$AKS_NAME] AKS cluster"
    exit
  fi
fi&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Create or Update Key Vault&lt;/H3&gt;
&lt;P&gt;You can use the following Bash script to create a new&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/key-vault/general/basic-concepts" target="_blank" rel="noopener"&gt;Azure Key Vault&lt;/A&gt;&amp;nbsp;if it doesn't already exist, and create a couple of secrets for demonstration purposes.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#!/bin/Bash

# Variables
source ../00-variables.sh

# Check if the resource group already exists
echo "Checking if [$KEY_VAULT_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_NAME] subscription..."

az group show --name $KEY_VAULT_RESOURCE_GROUP_NAME &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "No [$KEY_VAULT_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_NAME] subscription"
  echo "Creating [$KEY_VAULT_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_NAME] subscription..."

  # create the resource group
  az group create --name $KEY_VAULT_RESOURCE_GROUP_NAME --location $LOCATION 1&amp;gt;/dev/null

  if [[ $? == 0 ]]; then
    echo "[$KEY_VAULT_RESOURCE_GROUP_NAME] resource group successfully created in the [$SUBSCRIPTION_NAME] subscription"
  else
    echo "Failed to create [$KEY_VAULT_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_NAME] subscription"
    exit
  fi
else
  echo "[$KEY_VAULT_RESOURCE_GROUP_NAME] resource group already exists in the [$SUBSCRIPTION_NAME] subscription"
fi

# Check if the key vault already exists
echo "Checking if [$KEY_VAULT_NAME] key vault actually exists in the [$SUBSCRIPTION_NAME] subscription..."

az keyvault show --name $KEY_VAULT_NAME --resource-group $KEY_VAULT_RESOURCE_GROUP_NAME &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "No [$KEY_VAULT_NAME] key vault actually exists in the [$SUBSCRIPTION_NAME] subscription"
  echo "Creating [$KEY_VAULT_NAME] key vault in the [$SUBSCRIPTION_NAME] subscription..."

  # create the key vault
  az keyvault create \
    --name $KEY_VAULT_NAME \
    --resource-group $KEY_VAULT_RESOURCE_GROUP_NAME \
    --location $LOCATION \
    --enabled-for-deployment \
    --enabled-for-disk-encryption \
    --enabled-for-template-deployment \
    --sku $KEY_VAULT_SKU 1&amp;gt;/dev/null

  if [[ $? == 0 ]]; then
    echo "[$KEY_VAULT_NAME] key vault successfully created in the [$SUBSCRIPTION_NAME] subscription"
  else
    echo "Failed to create [$KEY_VAULT_NAME] key vault in the [$SUBSCRIPTION_NAME] subscription"
    exit
  fi
else
  echo "[$KEY_VAULT_NAME] key vault already exists in the [$SUBSCRIPTION_NAME] subscription"
fi

# Create secrets
for INDEX in ${!SECRETS[@]}; do
  # Check if the secret already exists
  echo "Checking if [${SECRETS[$INDEX]}] secret actually exists in the [$KEY_VAULT_NAME] key vault..."

  az keyvault secret show --name ${SECRETS[$INDEX]} --vault-name $KEY_VAULT_NAME &amp;amp;&amp;gt;/dev/null

  if [[ $? != 0 ]]; then
    echo "No [${SECRETS[$INDEX]}] secret actually exists in the [$KEY_VAULT_NAME] key vault"
    echo "Creating [${SECRETS[$INDEX]}] secret in the [$KEY_VAULT_NAME] key vault..."

    # create the secret
    az keyvault secret set \
      --name ${SECRETS[$INDEX]} \
      --vault-name $KEY_VAULT_NAME \
      --value ${VALUES[$INDEX]} 1&amp;gt;/dev/null

    if [[ $? == 0 ]]; then
      echo "[${SECRETS[$INDEX]}] secret successfully created in the [$KEY_VAULT_NAME] key vault"
    else
      echo "Failed to create [${SECRETS[$INDEX]}] secret in the [$KEY_VAULT_NAME] key vault"
      exit
    fi
  else
    echo "[${SECRETS[$INDEX]}] secret already exists in the [$KEY_VAULT_NAME] key vault"
  fi
done&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Create Managed Identity and Federated Identity Credential&lt;/H3&gt;
&lt;P&gt;All the techniques use&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview" target="_blank" rel="noopener"&gt;Microsoft Entra Workload ID&lt;/A&gt;. The repository contains a folder for each technique. Each folder includes the following&amp;nbsp;create-managed-identity.sh&amp;nbsp;Bash script:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Check if the resource group already exists
echo "Checking if [$AKS_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_ID] subscription..."

az group show --name $AKS_RESOURCE_GROUP_NAME &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "No [$AKS_RESOURCE_GROUP_NAME] resource group actually exists in the [$SUBSCRIPTION_ID] subscription"
  echo "Creating [$AKS_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_ID] subscription..."

  # create the resource group
  az group create \
    --name $AKS_RESOURCE_GROUP_NAME \
    --location $LOCATION 1&amp;gt;/dev/null

  if [[ $? == 0 ]]; then
    echo "[$AKS_RESOURCE_GROUP_NAME] resource group successfully created in the [$SUBSCRIPTION_ID] subscription"
  else
    echo "Failed to create [$AKS_RESOURCE_GROUP_NAME] resource group in the [$SUBSCRIPTION_ID] subscription"
    exit
  fi
else
  echo "[$AKS_RESOURCE_GROUP_NAME] resource group already exists in the [$SUBSCRIPTION_ID] subscription"
fi

# check if the managed identity already exists
echo "Checking if [$MANAGED_IDENTITY_NAME] managed identity actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group..."

az identity show \
  --name $MANAGED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "No [$MANAGED_IDENTITY_NAME] managed identity actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"
  echo "Creating [$MANAGED_IDENTITY_NAME] managed identity in the [$AKS_RESOURCE_GROUP_NAME] resource group..."

  # create the managed identity
  az identity create \
    --name $MANAGED_IDENTITY_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME &amp;amp;&amp;gt;/dev/null

  if [[ $? == 0 ]]; then
    echo "[$MANAGED_IDENTITY_NAME] managed identity successfully created in the [$AKS_RESOURCE_GROUP_NAME] resource group"
  else
    echo "Failed to create [$MANAGED_IDENTITY_NAME] managed identity in the [$AKS_RESOURCE_GROUP_NAME] resource group"
    exit
  fi
else
  echo "[$MANAGED_IDENTITY_NAME] managed identity already exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"
fi

# Get the managed identity principal id
echo "Retrieving principalId for [$MANAGED_IDENTITY_NAME] managed identity..."
PRINCIPAL_ID=$(az identity show \
  --name $MANAGED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --query principalId \
  --output tsv)

if [[ -n $PRINCIPAL_ID ]]; then
  echo "[$PRINCIPAL_ID] principalId for the [$MANAGED_IDENTITY_NAME] managed identity successfully retrieved"
else
  echo "Failed to retrieve principalId for the [$MANAGED_IDENTITY_NAME] managed identity"
  exit
fi

# Get the managed identity client id
echo "Retrieving clientId for [$MANAGED_IDENTITY_NAME] managed identity..."
CLIENT_ID=$(az identity show \
  --name $MANAGED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --query clientId \
  --output tsv)

if [[ -n $CLIENT_ID ]]; then
  echo "[$CLIENT_ID] clientId for the [$MANAGED_IDENTITY_NAME] managed identity successfully retrieved"
else
  echo "Failed to retrieve clientId for the [$MANAGED_IDENTITY_NAME] managed identity"
  exit
fi

# Retrieve the resource id of the Key Vault resource
echo "Retrieving the resource id for the [$KEY_VAULT_NAME] key vault..."
KEY_VAULT_ID=$(az keyvault show \
  --name $KEY_VAULT_NAME \
  --resource-group $KEY_VAULT_RESOURCE_GROUP_NAME \
  --query id \
  --output tsv)

if [[ -n $KEY_VAULT_ID ]]; then
  echo "[$KEY_VAULT_ID] resource id for the [$KEY_VAULT_NAME] key vault successfully retrieved"
else
  echo "Failed to retrieve the resource id for the [$KEY_VAULT_NAME] key vault"
  exit
fi

# Assign the Key Vault Secrets User role to the managed identity with Key Vault as a scope
ROLE="Key Vault Secrets User"
echo "Checking if [$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope is already assigned to the [$MANAGED_IDENTITY_NAME] managed identity..."
CURRENT_ROLE=$(az role assignment list \
  --assignee $PRINCIPAL_ID \
  --scope $KEY_VAULT_ID \
  --query "[?roleDefinitionName=='$ROLE'].roleDefinitionName" \
  --output tsv 2&amp;gt;/dev/null)

if [[ $CURRENT_ROLE == $ROLE ]]; then
  echo "[$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope is already assigned to the [$MANAGED_IDENTITY_NAME] managed identity"
else
  echo "[$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope is not assigned to the [$MANAGED_IDENTITY_NAME] managed identity"
  echo "Assigning the [$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope to the [$MANAGED_IDENTITY_NAME] managed identity..."

  for i in {1..10}; do
    az role assignment create \
      --assignee $PRINCIPAL_ID \
      --role "$ROLE" \
      --scope $KEY_VAULT_ID 1&amp;gt;/dev/null

    if [[ $? == 0 ]]; then
      echo "Successfully assigned the [$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope to the [$MANAGED_IDENTITY_NAME] managed identity"
      break
    else
      echo "Failed to assign the [$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope to the [$MANAGED_IDENTITY_NAME] managed identity, retrying in 5 seconds..."
      sleep 5
    fi

    if [[ $i == 10 ]]; then
      echo "Failed to assign the [$ROLE] role with [$KEY_VAULT_NAME] key vault as a scope to the [$MANAGED_IDENTITY_NAME] managed identity after 10 attempts"
      exit
    fi
  done
fi

# Check if the namespace exists in the cluster
RESULT=$(kubectl get namespace -o 'jsonpath={.items[?(@.metadata.name=="'$NAMESPACE'")].metadata.name}')

if [[ -n $RESULT ]]; then
  echo "[$NAMESPACE] namespace already exists in the cluster"
else
  echo "[$NAMESPACE] namespace does not exist in the cluster"
  echo "Creating [$NAMESPACE] namespace in the cluster..."
  kubectl create namespace $NAMESPACE
fi

# Check if the service account already exists
RESULT=$(kubectl get sa -n $NAMESPACE -o 'jsonpath={.items[?(@.metadata.name=="'$SERVICE_ACCOUNT_NAME'")].metadata.name}')

if [[ -n $RESULT ]]; then
  echo "[$SERVICE_ACCOUNT_NAME] service account already exists"
else
  # Create the service account
  echo "[$SERVICE_ACCOUNT_NAME] service account does not exist"
  echo "Creating [$SERVICE_ACCOUNT_NAME] service account..."
  cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: $CLIENT_ID
    azure.workload.identity/tenant-id: $TENANT_ID
  labels:
    azure.workload.identity/use: "true"
  name: $SERVICE_ACCOUNT_NAME
  namespace: $NAMESPACE
EOF
fi

# Show service account YAML manifest
echo "Service Account YAML manifest"
echo "-----------------------------"
kubectl get sa $SERVICE_ACCOUNT_NAME -n $NAMESPACE -o yaml

# Check if the federated identity credential already exists
echo "Checking if [$FEDERATED_IDENTITY_NAME] federated identity credential actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group..."

az identity federated-credential show \
  --name $FEDERATED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --identity-name $MANAGED_IDENTITY_NAME &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "No [$FEDERATED_IDENTITY_NAME] federated identity credential actually exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"

  # Get the OIDC Issuer URL
  AKS_OIDC_ISSUER_URL="$(az aks show \
    --only-show-errors \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --query oidcIssuerProfile.issuerUrl \
    --output tsv)"

  # Show OIDC Issuer URL
  if [[ -n $AKS_OIDC_ISSUER_URL ]]; then
    echo "The OIDC Issuer URL of the [$AKS_NAME] cluster is [$AKS_OIDC_ISSUER_URL]"
  fi

  echo "Creating [$FEDERATED_IDENTITY_NAME] federated identity credential in the [$AKS_RESOURCE_GROUP_NAME] resource group..."

  # Establish the federated identity credential between the managed identity, the service account issuer, and the subject.
  az identity federated-credential create \
    --name $FEDERATED_IDENTITY_NAME \
    --identity-name $MANAGED_IDENTITY_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --issuer $AKS_OIDC_ISSUER_URL \
    --subject system:serviceaccount:$NAMESPACE:$SERVICE_ACCOUNT_NAME

  if [[ $? == 0 ]]; then
    echo "[$FEDERATED_IDENTITY_NAME] federated identity credential successfully created in the [$AKS_RESOURCE_GROUP_NAME] resource group"
  else
    echo "Failed to create [$FEDERATED_IDENTITY_NAME] federated identity credential in the [$AKS_RESOURCE_GROUP_NAME] resource group"
    exit
  fi
else
  echo "[$FEDERATED_IDENTITY_NAME] federated identity credential already exists in the [$AKS_RESOURCE_GROUP_NAME] resource group"
fi&lt;/LI-CODE&gt;
&lt;P&gt;The Bash script performs the following steps:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;It sources variables from two files:&amp;nbsp;../00-variables.sh&amp;nbsp;and&amp;nbsp;./00-variables.sh.&lt;/LI&gt;
&lt;LI&gt;It checks if the specified resource group exists. If not, it creates the resource group.&lt;/LI&gt;
&lt;LI&gt;It checks if the specified managed identity exists within the resource group. If not, it creates a user-assigned managed identity.&lt;/LI&gt;
&lt;LI&gt;It retrieves the&amp;nbsp;principalId&amp;nbsp;and&amp;nbsp;clientId&amp;nbsp;of the managed identity.&lt;/LI&gt;
&lt;LI&gt;It retrieves the&amp;nbsp;id&amp;nbsp;of the Azure Key Vault resource.&lt;/LI&gt;
&lt;LI&gt;It assigns the&amp;nbsp;Key Vault Secrets User&amp;nbsp;role to the managed identity with the Azure Key Vault as the scope.&lt;/LI&gt;
&lt;LI&gt;It checks if the specified Kubernetes namespace exists. If not, it creates the namespace.&lt;/LI&gt;
&lt;LI&gt;It checks if a specified Kubernetes service account exists within the namespace. If not, it creates the service account with the annotations and labels required by&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview" target="_blank" rel="noopener"&gt;Microsoft Entra Workload ID&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;It checks if a specified federated identity credential exists within the resource group. If not, it retrieves the OIDC Issuer URL of the specified AKS cluster and creates the federated identity credential.&lt;/LI&gt;
&lt;/UL&gt;
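The federated identity credential's subject must exactly match the Kubernetes service account that the workload uses, in the fixed format `system:serviceaccount:<namespace>:<service-account-name>`; token exchange fails if the strings differ. A quick sketch with illustrative names (in the lab these come from `./00-variables.sh`):

```shell
#!/bin/bash
# Illustrative names; in the lab these are defined in ./00-variables.sh.
NAMESPACE="workload-id-test"
SERVICE_ACCOUNT_NAME="workload-id-sa"

# This string is passed to "az identity federated-credential create --subject".
SUBJECT="system:serviceaccount:$NAMESPACE:$SERVICE_ACCOUNT_NAME"
echo "$SUBJECT"
```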
&lt;P&gt;You are now ready to explore each technique in detail.&lt;/P&gt;
&lt;H2&gt;Hands-On Lab: Use Microsoft Entra Workload ID with Azure Kubernetes Service (AKS)&lt;/H2&gt;
&lt;P&gt;Workloads deployed on an Azure Kubernetes Service (AKS) cluster require Microsoft Entra application credentials or managed identities to access Microsoft Entra protected resources, such as Azure Key Vault and Microsoft Graph.&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/active-directory/develop/workload-identities-overview" target="_blank" rel="noopener"&gt;Microsoft Entra Workload ID&lt;/A&gt;&amp;nbsp;integrates with Kubernetes capabilities to federate with external identity providers.&lt;/P&gt;
&lt;P&gt;To enable pods to use a Kubernetes identity, Microsoft Entra Workload ID utilizes&amp;nbsp;&lt;A href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection" target="_blank" rel="noopener"&gt;Service Account Token Volume Projection&lt;/A&gt;&amp;nbsp;to mount a projected service account token into the pod. This allows for the issuance of a Kubernetes token, and&amp;nbsp;&lt;A href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens" target="_blank" rel="noopener"&gt;OIDC federation&lt;/A&gt;&amp;nbsp;enables secure access to Azure resources with Microsoft Entra ID, based on annotated service accounts.&lt;/P&gt;
&lt;P&gt;Utilizing the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet#azure-identity-client-libraries" target="_blank" rel="noopener"&gt;Azure Identity client libraries&lt;/A&gt;&amp;nbsp;or the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/active-directory/develop/msal-overview" target="_blank" rel="noopener"&gt;Microsoft Authentication Library&lt;/A&gt;&amp;nbsp;(MSAL) collection, alongside&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/active-directory/develop/application-model#register-an-application" target="_blank" rel="noopener"&gt;application registration&lt;/A&gt;, Microsoft Entra Workload ID seamlessly authenticates and provides access to Azure cloud resources for your workload.&lt;/P&gt;
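Under the hood, the workload identity mutating webhook injects a projected service account token volume into labeled pods. The snippet below only prints an approximation of that volume (the `api://AzureADTokenExchange` audience and the field values are assumptions based on how token volume projection is typically configured, not taken from this article):

```shell
#!/bin/bash
# Prints a rough sketch of the projected token volume the webhook adds to a pod.
VOLUME=$(cat <<EOF
volumes:
- name: azure-identity-token
  projected:
    sources:
    - serviceAccountToken:
        audience: api://AzureADTokenExchange
        expirationSeconds: 3600
        path: azure-identity-token
EOF
)
echo "$VOLUME"
```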
&lt;P&gt;You can create a user-assigned managed identity for the workload, create federated credentials, and assign it the permissions required to read secrets from the source Key Vault using the&amp;nbsp;&lt;A href="https://github.com/paolosalvatori/aks-key-vault#create-managed-identity-and-federated-identity-credential" target="_blank" rel="noopener"&gt;create-managed-identity.sh&lt;/A&gt;&amp;nbsp;Bash script. Then, you can run the following Bash script to retrieve the URL of the Azure Key Vault endpoint and start a demo pod in the&amp;nbsp;workload-id-test&amp;nbsp;namespace. The pod receives two parameters via environment variables:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;KEYVAULT_URL: The Azure Key Vault endpoint URL.&lt;/LI&gt;
&lt;LI&gt;SECRET_NAME: The name of a secret stored in Azure Key Vault.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="bash"&gt;#/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Retrieve the Azure Key Vault URL
echo "Retrieving the [$KEY_VAULT_NAME] key vault URL..."
KEYVAULT_URL=$(az keyvault show \
  --name $KEY_VAULT_NAME \
  --query properties.vaultUri \
  --output tsv)

if [[ -n $KEYVAULT_URL ]]; then
  echo "[$KEYVAULT_URL] key vault URL successfully retrieved"
else
  echo "Failed to retrieve the [$KEY_VAULT_NAME] key vault URL"
  exit
fi

# Create the pod
echo "Creating the [$POD_NAME] pod in the [$NAMESPACE] namespace..."
cat &amp;lt;&amp;lt;EOF | kubectl apply -n $NAMESPACE -f -
apiVersion: v1
kind: Pod
metadata:
  name: $POD_NAME
  labels:
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: $SERVICE_ACCOUNT_NAME
  containers:
    - image: ghcr.io/azure/azure-workload-identity/msal-net:latest
      name: oidc
      env:
      - name: KEYVAULT_URL
        value: $KEYVAULT_URL
      - name: SECRET_NAME
        value: ${SECRETS[0]}
  nodeSelector:
    kubernetes.io/os: linux
EOF
exit&lt;/LI-CODE&gt;
&lt;P&gt;Below you can read the C# code of the sample application that uses the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/entra/identity-platform/msal-overview" target="_blank" rel="noopener"&gt;Microsoft Authentication Library (MSAL)&lt;/A&gt;&amp;nbsp;to acquire a security token to access Key Vault and read the value of a secret.&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;// &amp;lt;directives&amp;gt;
using System;
using System.Threading;
using Azure.Security.KeyVault.Secrets;
// &amp;lt;directives&amp;gt;

namespace akvdotnet
{
    public class Program
    {
        static void Main(string[] args)
        {
            Program P = new Program();
            string keyvaultURL = Environment.GetEnvironmentVariable("KEYVAULT_URL");
            if (string.IsNullOrEmpty(keyvaultURL)) {
                Console.WriteLine("KEYVAULT_URL environment variable not set");
                return;
            }

            string secretName = Environment.GetEnvironmentVariable("SECRET_NAME");
            if (string.IsNullOrEmpty(secretName)) {
                Console.WriteLine("SECRET_NAME environment variable not set");
                return;
            }

            SecretClient client = new SecretClient(
                new Uri(keyvaultURL),
                new MyClientAssertionCredential());

            while (true)
            {
                Console.WriteLine($"{Environment.NewLine}START {DateTime.UtcNow} ({Environment.MachineName})");

                // &amp;lt;getsecret&amp;gt;
                var keyvaultSecret = client.GetSecret(secretName).Value;
                Console.WriteLine("Your secret is " + keyvaultSecret.Value);
                // &amp;lt;/getsecret&amp;gt;

                // sleep and retry periodically
                Thread.Sleep(600000);
            }
        }
    }
}

public class MyClientAssertionCredential : TokenCredential
{
    private readonly IConfidentialClientApplication _confidentialClientApp;
    private DateTimeOffset _lastRead;
    private string _lastJWT = null;

    public MyClientAssertionCredential()
    {
        // &amp;lt;authentication&amp;gt;
        // Microsoft Entra ID Workload Identity webhook will inject the following env vars
        // 	AZURE_CLIENT_ID with the clientID set in the service account annotation
        // 	AZURE_TENANT_ID with the tenantID set in the service account annotation. If not defined, then
        //  the tenantID provided via azure-wi-webhook-config for the webhook will be used.
        //  AZURE_AUTHORITY_HOST is the Microsoft Entra authority host. It is https://login.microsoftonline.com for the public cloud.
        // 	AZURE_FEDERATED_TOKEN_FILE is the service account token path
        var clientID = Environment.GetEnvironmentVariable("AZURE_CLIENT_ID");
        var tokenPath = Environment.GetEnvironmentVariable("AZURE_FEDERATED_TOKEN_FILE");
        var tenantID = Environment.GetEnvironmentVariable("AZURE_TENANT_ID");
        var host = Environment.GetEnvironmentVariable("AZURE_AUTHORITY_HOST");

        _confidentialClientApp = ConfidentialClientApplicationBuilder
                .Create(clientID)
                .WithAuthority(host, tenantID) 
                .WithClientAssertion(() =&amp;gt; ReadJWTFromFSOrCache(tokenPath))   // ReadJWTFromFSOrCache should always return a non-expired JWT
                .WithCacheOptions(CacheOptions.EnableSharedCacheOptions)      // cache the Microsoft Entra ID tokens in memory
                .Build();
    }

    public override AccessToken GetToken(TokenRequestContext requestContext, CancellationToken cancellationToken)
    {
        return GetTokenAsync(requestContext, cancellationToken).GetAwaiter().GetResult();
    }

    public override async ValueTask&amp;lt;AccessToken&amp;gt; GetTokenAsync(TokenRequestContext requestContext, CancellationToken cancellationToken)
    {
        AuthenticationResult result = null;
        try
        {
            result = await _confidentialClientApp
                        .AcquireTokenForClient(requestContext.Scopes)
                        .ExecuteAsync();
        }
        catch (MsalUiRequiredException)
        {
            // The application doesn't have sufficient permissions.
            // - Did you declare enough app permissions during app creation?
            // - Did the tenant admin grant permissions to the application?
            throw;
        }
        catch (MsalServiceException ex) when (ex.Message.Contains("AADSTS70011"))
        {
            // Invalid scope. The scope has to be in the form "https://resourceurl/.default"
            // Mitigation: Change the scope to be as expected.
            throw;
        }
        return new AccessToken(result.AccessToken, result.ExpiresOn);
    }

    /// &amp;lt;summary&amp;gt;
    /// Read the JWT from the file system, but only do this every few minutes to avoid heavy I/O.
    /// The JWT lifetime is anywhere from 1 to 24 hours, so we can safely cache the value for a few minutes.
    /// &amp;lt;/summary&amp;gt;
    private string ReadJWTFromFSOrCache(string tokenPath)
    {
        // read only once every 5 minutes
        if (_lastJWT == null ||
            DateTimeOffset.UtcNow.Subtract(_lastRead) &amp;gt; TimeSpan.FromMinutes(5))
        {            
            _lastRead = DateTimeOffset.UtcNow;
            _lastJWT = System.IO.File.ReadAllText(tokenPath);
        }

        return _lastJWT;
    }
}&lt;/LI-CODE&gt;
&lt;P&gt;The&amp;nbsp;Program&amp;nbsp;class contains the&amp;nbsp;Main&amp;nbsp;method, which initializes a&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/dotnet/api/azure.security.keyvault.secrets.secretclient?view=azure-dotnet" target="_blank" rel="noopener"&gt;SecretClient&lt;/A&gt;&amp;nbsp;object using a custom credential class&amp;nbsp;MyClientAssertionCredential. The&amp;nbsp;Main&amp;nbsp;method code retrieves the Key Vault URL and secret name from environment variables, checks if they are set, and then enters an infinite loop where it fetches the secret from Key Vault and prints it to the console every 10 minutes.&lt;/P&gt;
&lt;P&gt;The&amp;nbsp;MyClientAssertionCredential&amp;nbsp;class extends&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/dotnet/api/azure.core.tokencredential?view=azure-dotnet" target="_blank" rel="noopener"&gt;TokenCredential&lt;/A&gt;&amp;nbsp;and is responsible for authenticating with Microsoft Entra ID using a client assertion. It reads the client ID, tenant ID, authority host, and federated token file path from the environment variables that&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview" target="_blank" rel="noopener"&gt;Microsoft Entra Workload ID&lt;/A&gt;&amp;nbsp;injects into the pod.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;Environment variable&lt;/TH&gt;
&lt;TH&gt;Description&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;AZURE_AUTHORITY_HOST&lt;/TD&gt;
&lt;TD&gt;The Microsoft Entra ID endpoint (&lt;A href="https://login.microsoftonline.com/" target="_blank" rel="noopener"&gt;https://login.microsoftonline.com/&lt;/A&gt;).&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;AZURE_CLIENT_ID&lt;/TD&gt;
&lt;TD&gt;The client ID of the Microsoft Entra ID registered application or user-assigned managed identity.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;AZURE_TENANT_ID&lt;/TD&gt;
&lt;TD&gt;The tenant ID of the Microsoft Entra ID registered application or user-assigned managed identity.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;AZURE_FEDERATED_TOKEN_FILE&lt;/TD&gt;
&lt;TD&gt;The path of the projected service account token file.&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
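&lt;P&gt;As a quick sanity check, you can list these variables inside the pod (for example via kubectl exec) to confirm that the mutating webhook injected them. The following Bash sketch simply reports which of the four variables are set:&lt;/P&gt;

```shell
#!/bin/bash

# Report which of the environment variables injected by the Microsoft Entra
# Workload ID mutating webhook are present (run this inside the workload pod).
CHECKED=0
for VAR in AZURE_AUTHORITY_HOST AZURE_CLIENT_ID AZURE_TENANT_ID AZURE_FEDERATED_TOKEN_FILE; do
  if [[ -z "${!VAR}" ]]; then
    echo "Missing: $VAR"
  else
    echo "Found: $VAR=${!VAR}"
  fi
  CHECKED=$((CHECKED+1))
done
echo "Checked $CHECKED variables"
```

In a pod with the&amp;nbsp;azure.workload.identity/use&amp;nbsp;label and a federated service account, all four variables should be reported as found.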
&lt;P&gt;The class uses the &lt;A href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.identity.client.confidentialclientapplicationbuilder?view=msal-dotnet-latest" target="_blank" rel="noopener"&gt;ConfidentialClientApplicationBuilder&lt;/A&gt;&amp;nbsp;to create a confidential client application that acquires tokens for the specified scopes. The&amp;nbsp;ReadJWTFromFSOrCache&amp;nbsp;method reads the JWT from the file system and caches it to minimize I/O operations. You can find the code, Dockerfile, and container image links for other programming languages in the table below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;Language&lt;/TH&gt;
&lt;TH&gt;Library&lt;/TH&gt;
&lt;TH&gt;Code&lt;/TH&gt;
&lt;TH&gt;Image&lt;/TH&gt;
&lt;TH&gt;Example&lt;/TH&gt;
&lt;TH&gt;Has Windows Images&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;STRONG&gt;C#&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/AzureAD/microsoft-authentication-library-for-dotnet" target="_blank" rel="noopener"&gt;microsoft-authentication-library-for-dotnet&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet" target="_blank" rel="noopener"&gt;Link&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;ghcr.io/azure/azure-workload-identity/msal-net&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet" target="_blank" rel="noopener"&gt;Link&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;✅&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;STRONG&gt;Go&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/AzureAD/microsoft-authentication-library-for-go" target="_blank" rel="noopener"&gt;microsoft-authentication-library-for-go&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go" target="_blank" rel="noopener"&gt;Link&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;ghcr.io/azure/azure-workload-identity/msal-go&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go" target="_blank" rel="noopener"&gt;Link&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;✅&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;STRONG&gt;Java&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/AzureAD/microsoft-authentication-library-for-java" target="_blank" rel="noopener"&gt;microsoft-authentication-library-for-java&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java" target="_blank" rel="noopener"&gt;Link&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;ghcr.io/azure/azure-workload-identity/msal-java&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java" target="_blank" rel="noopener"&gt;Link&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;❌&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;STRONG&gt;Node.js&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/AzureAD/microsoft-authentication-library-for-js" target="_blank" rel="noopener"&gt;microsoft-authentication-library-for-js&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node" target="_blank" rel="noopener"&gt;Link&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;ghcr.io/azure/azure-workload-identity/msal-node&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node" target="_blank" rel="noopener"&gt;Link&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;❌&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;STRONG&gt;Python&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/AzureAD/microsoft-authentication-library-for-python" target="_blank" rel="noopener"&gt;microsoft-authentication-library-for-python&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python" target="_blank" rel="noopener"&gt;Link&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;ghcr.io/azure/azure-workload-identity/msal-python&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python" target="_blank" rel="noopener"&gt;Link&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;❌&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The application code retrieves the secret value specified by the SECRET_NAME parameter and logs it to the standard output. Therefore, you can use the following Bash script to display the logs generated by the pod.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Check if the pod exists
POD=$(kubectl get pod $POD_NAME -n $NAMESPACE -o 'jsonpath={.metadata.name}')

if [[ -z $POD ]]; then
    echo "No [$POD_NAME] pod found in [$NAMESPACE] namespace."
    exit
fi

# Read logs from the pod
echo "Reading logs from [$POD_NAME] pod..."
kubectl logs $POD -n $NAMESPACE&lt;/LI-CODE&gt;
&lt;P&gt;The script should generate an output similar to the following:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;Reading logs from [demo-pod] pod...

START 02/10/2025 11:01:36 (demo-pod)
Your secret is admin&lt;/LI-CODE&gt;
&lt;P&gt;Alternatively, you can use the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/reference-managed-identity-libraries" target="_blank" rel="noopener"&gt;Azure Identity client libraries&lt;/A&gt;&amp;nbsp;in your workload code to acquire a security token from Microsoft Entra ID using the credentials of the registered application or user-assigned managed identity federated with the Kubernetes service account. You can choose one of the following approaches:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use&amp;nbsp;DefaultAzureCredential, which attempts to use the&amp;nbsp;WorkloadIdentityCredential.&lt;/LI&gt;
&lt;LI&gt;Create a&amp;nbsp;ChainedTokenCredential&amp;nbsp;instance that includes&amp;nbsp;WorkloadIdentityCredential.&lt;/LI&gt;
&lt;LI&gt;Use&amp;nbsp;WorkloadIdentityCredential&amp;nbsp;directly.&lt;/LI&gt;
&lt;/UL&gt;
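&lt;P&gt;Here is a minimal C# sketch of the last two approaches, assuming the Azure.Identity package is referenced; the Azure CLI fallback in the chain is only an example:&lt;/P&gt;

```csharp
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// With no options, WorkloadIdentityCredential reads AZURE_CLIENT_ID,
// AZURE_TENANT_ID, and AZURE_FEDERATED_TOKEN_FILE from the environment
// variables injected by Microsoft Entra Workload ID.
var workloadIdentity = new WorkloadIdentityCredential();

// A chain that tries workload identity first and falls back to the
// Azure CLI credential, which is handy for local development.
var chained = new ChainedTokenCredential(
    new WorkloadIdentityCredential(),
    new AzureCliCredential());

var client = new SecretClient(
    new Uri(Environment.GetEnvironmentVariable("KEYVAULT_URL")),
    chained);
```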
&lt;P&gt;The following table provides the minimum package version required for each language ecosystem's client library.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;Ecosystem&lt;/TH&gt;
&lt;TH&gt;Library&lt;/TH&gt;
&lt;TH&gt;Minimum version&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;.NET&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/dotnet/api/overview/azure/identity-readme" target="_blank" rel="noopener"&gt;Azure.Identity&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;1.9.0&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;C++&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://github.com/Azure/azure-sdk-for-cpp/blob/main/sdk/identity/azure-identity/README.md" target="_blank" rel="noopener"&gt;azure-identity-cpp&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;1.6.0&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Go&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity" target="_blank" rel="noopener"&gt;azidentity&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;1.3.0&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Java&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/java/api/overview/azure/identity-readme" target="_blank" rel="noopener"&gt;azure-identity&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;1.9.0&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Node.js&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/javascript/api/overview/azure/identity-readme" target="_blank" rel="noopener"&gt;@azure/identity&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;3.2.0&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Python&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/python/api/overview/azure/identity-readme" target="_blank" rel="noopener"&gt;azure-identity&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;1.13.0&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the following code samples, DefaultAzureCredential is used. This credential type uses the environment variables injected by the Azure Workload Identity mutating webhook to authenticate with Azure Key Vault.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet#tabpanel_1_dotnet" target="_blank" rel="noopener"&gt;.NET&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet#tabpanel_1_cpp" target="_blank" rel="noopener"&gt;C++&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet#tabpanel_1_go" target="_blank" rel="noopener"&gt;Go&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet#tabpanel_1_java" target="_blank" rel="noopener"&gt;Java&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet#tabpanel_1_javascript" target="_blank" rel="noopener"&gt;Node.js&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet#tabpanel_1_python" target="_blank" rel="noopener"&gt;Python&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Here is a C# code sample that uses&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet" target="_blank" rel="noopener"&gt;DefaultAzureCredential&lt;/A&gt;&amp;nbsp;for user credentials.&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

string keyVaultUrl = Environment.GetEnvironmentVariable("KEYVAULT_URL");
string secretName = Environment.GetEnvironmentVariable("SECRET_NAME");

var client = new SecretClient(
    new Uri(keyVaultUrl),
    new DefaultAzureCredential());

KeyVaultSecret secret = await client.GetSecretAsync(secretName);&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Hands-On Lab: Azure Key Vault Provider for Secrets Store CSI Driver in AKS&lt;/H2&gt;
&lt;P&gt;The Secrets Store Container Storage Interface (CSI) Driver on Azure Kubernetes Service (AKS) provides various methods of identity-based access to your Azure Key Vault. You can use one of the following access methods:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-identity-access?tabs=azure-portal&amp;amp;pivots=access-with-service-connector#create-a-service-connection-in-aks-with-service-connector" target="_blank" rel="noopener"&gt;Service Connector with managed identity&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-identity-access?tabs=azure-portal&amp;amp;pivots=access-with-a-microsoft-entra-workload-identity#create-a-service-connection-in-aks-with-service-connector" target="_blank" rel="noopener"&gt;Workload ID&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-identity-access?tabs=azure-portal&amp;amp;pivots=access-with-a-user-assigned-managed-identity#create-a-service-connection-in-aks-with-service-connector" target="_blank" rel="noopener"&gt;User-assigned managed identity&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This article focuses on the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-identity-access?tabs=azure-portal&amp;amp;pivots=access-with-a-microsoft-entra-workload-identity#create-a-service-connection-in-aks-with-service-connector" target="_blank" rel="noopener"&gt;Workload ID&lt;/A&gt;&amp;nbsp;option. Please see the documentation for the other methods.&lt;/P&gt;
&lt;P&gt;Run the following Bash script to upgrade your AKS cluster with the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver" target="_blank" rel="noopener"&gt;Azure Key Vault provider for Secrets Store CSI Driver&lt;/A&gt;&amp;nbsp;capability using the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/aks#az-aks-enable-addons" target="_blank" rel="noopener"&gt;az aks enable-addons&lt;/A&gt;&amp;nbsp;command to enable the&amp;nbsp;azure-keyvault-secrets-provider&amp;nbsp;add-on. The add-on creates a user-assigned managed identity you can use to authenticate to your key vault. Alternatively, you can use a bring-your-own user-assigned managed identity.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Enable Addon
echo "Checking if the [azure-keyvault-secrets-provider] addon is enabled in the [$AKS_NAME] AKS cluster..."
az aks addon show \
  --addon azure-keyvault-secrets-provider \
  --name $AKS_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "The [azure-keyvault-secrets-provider] addon is not enabled in the [$AKS_NAME] AKS cluster"
  echo "Enabling the [azure-keyvault-secrets-provider] addon in the [$AKS_NAME] AKS cluster..."

  az aks addon enable \
    --addon azure-keyvault-secrets-provider \
    --enable-secret-rotation \
    --name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME
else
  echo "The [azure-keyvault-secrets-provider] addon is already enabled in the [$AKS_NAME] AKS cluster"
fi&lt;/LI-CODE&gt;
&lt;P&gt;You can create a user-assigned managed identity for the workload, create federated credentials, and grant it the permissions required to read secrets from the source Key Vault using the&amp;nbsp;&lt;A href="https://github.com/paolosalvatori/aks-key-vault#create-managed-identity-and-federated-identity-credential" target="_blank" rel="noopener"&gt;create-managed-identity.sh&lt;/A&gt;&amp;nbsp;Bash script. The next step is creating an instance of the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/aksarc/secrets-store-csi-driver#create-and-apply-your-own-secretproviderclass-object" target="_blank" rel="noopener"&gt;SecretProviderClass&lt;/A&gt;&amp;nbsp;custom resource in your workload namespace. The&amp;nbsp;SecretProviderClass&amp;nbsp;is a namespaced resource in the Secrets Store CSI Driver used to provide driver configurations and provider-specific parameters to the CSI driver. The&amp;nbsp;SecretProviderClass&amp;nbsp;lets you specify the client ID of the user-assigned managed identity used to read secret material from Key Vault, along with the list of secrets, keys, and certificates to read. For each object, you can optionally indicate an alternative name or alias using the&amp;nbsp;objectAlias&amp;nbsp;property; in this case, the driver creates a file with the alias as the name. You can even indicate a specific version of a secret, key, or certificate, or retrieve the latest version by setting&amp;nbsp;objectVersion&amp;nbsp;to null or an empty string.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#/bin/bash

# For more information, see:
# https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver
# https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-identity-access

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Get the managed identity client id
echo "Retrieving clientId for [$MANAGED_IDENTITY_NAME] managed identity..."
CLIENT_ID=$(az identity show \
  --name $MANAGED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --query clientId \
  --output tsv)

if [[ -n $CLIENT_ID ]]; then
  echo "[$CLIENT_ID] clientId for the [$MANAGED_IDENTITY_NAME] managed identity successfully retrieved"
else
  echo "Failed to retrieve clientId for the [$MANAGED_IDENTITY_NAME] managed identity"
  exit
fi

# Create the SecretProviderClass for the secret store CSI driver with Azure Key Vault provider
echo "Creating the SecretProviderClass for the secret store CSI driver with Azure Key Vault provider..."
cat &amp;lt;&amp;lt;EOF | kubectl apply -n $NAMESPACE -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name:  $SECRET_PROVIDER_CLASS_NAME
spec:
  provider: azure
  parameters:
    clientID: "$CLIENT_ID"
    keyvaultName: "$KEY_VAULT_NAME"
    tenantId: "$TENANT_ID"
    objects:  |
      array:
        - |
          objectName: username
          objectAlias: username
          objectType: secret        
          objectVersion: ""
        - |
          objectName: password
          objectAlias: password
          objectType: secret
          objectVersion: ""
EOF&lt;/LI-CODE&gt;
&lt;P&gt;The Bash script creates a&amp;nbsp;SecretProviderClass&amp;nbsp;custom resource configured to read the latest value of the&amp;nbsp;username&amp;nbsp;and&amp;nbsp;password&amp;nbsp;secrets from the source Key Vault. You can now use the following Bash script to deploy the sample application.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Create the pod
echo "Creating the [$POD_NAME] pod in the [$NAMESPACE] namespace..."
cat &amp;lt;&amp;lt;EOF | kubectl apply -n $NAMESPACE -f -
kind: Pod
apiVersion: v1
metadata:
  name: $POD_NAME
  labels:
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: $SERVICE_ACCOUNT_NAME
  containers:
    - name: nginx
      image: nginx
      resources:
        requests:
          memory: "32Mi"
          cpu: "50m"
        limits:
          memory: "64Mi"
          cpu: "100m"
      volumeMounts:
        - name: secrets-store
          mountPath: "/mnt/secrets"
          readOnly: true
  volumes:
    - name: secrets-store
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "$SECRET_PROVIDER_CLASS_NAME"
EOF&lt;/LI-CODE&gt;
&lt;P&gt;The YAML manifest contains a volume definition called&amp;nbsp;secrets-store&amp;nbsp;that uses the&amp;nbsp;&lt;A href="https://secrets-store-csi-driver.sigs.k8s.io/" target="_blank" rel="noopener"&gt;secrets-store.csi.k8s.io&lt;/A&gt;&amp;nbsp;Secrets Store CSI Driver and references the&amp;nbsp;SecretProviderClass&amp;nbsp;resource created in the previous step by name. The YAML configuration defines a&amp;nbsp;Pod&amp;nbsp;with a container named&amp;nbsp;nginx&amp;nbsp;that mounts the&amp;nbsp;secrets-store&amp;nbsp;volume in read-only mode. On pod start and restart, the driver will communicate with the provider using gRPC to retrieve the secret content from the Key Vault resource you have specified in the&amp;nbsp;SecretProviderClass&amp;nbsp;custom resource.&lt;/P&gt;
&lt;P&gt;You can run the following Bash script to print the value of each file in the&amp;nbsp;/mnt/secrets&amp;nbsp;mounted volume, one for each secret specified in the&amp;nbsp;SecretProviderClass&amp;nbsp;custom resource.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Check if the pod exists
POD=$(kubectl get pod $POD_NAME -n $NAMESPACE -o 'jsonpath={.metadata.name}')

if [[ -z $POD ]]; then
    echo "No [$POD_NAME] pod found in [$NAMESPACE] namespace."
    exit
fi

# List secrets from /mnt/secrets volume
echo "Reading files from [/mnt/secrets] volume in [$POD_NAME] pod..."
FILES=$(kubectl exec $POD -n $NAMESPACE -- ls /mnt/secrets)

# Retrieve secrets from /mnt/secrets volume
for FILE in ${FILES[@]}
do
    echo "Retrieving [$FILE] secret from [$KEY_VAULT_NAME] key vault..."
    kubectl exec $POD --stdin --tty -n $NAMESPACE -- cat /mnt/secrets/$FILE;echo;sleep 1
done &lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Hands-On Lab: Dapr Secret Store for Key Vault&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://docs.dapr.io/concepts/overview/" target="_blank" rel="noopener"&gt;Distributed Application Runtime (Dapr)&lt;/A&gt;&amp;nbsp;is is a versatile and event-driven runtime that can help you write and implement simple, portable, resilient, and secured microservices. Dapr works together with Kubernetes clusters such as&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/what-is-aks" target="_blank" rel="noopener"&gt;Azure Kubernetes Services (AKS)&lt;/A&gt;&amp;nbsp;and&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/container-apps/overview" target="_blank" rel="noopener"&gt;Azure Container Apps&lt;/A&gt;&amp;nbsp;as an abstraction layer to provide a low-maintenance and scalable platform.&lt;/P&gt;
&lt;P&gt;The first step is running the following script to check if Dapr is actually installed on your AKS cluster, and if not, install the Dapr extension. For more information, see&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/dapr?tabs=cli" target="_blank" rel="noopener"&gt;Install the Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes&lt;/A&gt;.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Install AKS cluster extension in your Azure subscription
echo "Check if the [k8s-extension] is already installed in the [$SUBSCRIPTION_NAME] subscription..."
az extension show --name k8s-extension &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "No [k8s-extension] extension actually exists in the [$SUBSCRIPTION_NAME] subscription"
  echo "Installing [k8s-extension] extension in the [$SUBSCRIPTION_NAME] subscription..."

  # install the extension
  az extension add --name k8s-extension

  if [[ $? == 0 ]]; then
    echo "[k8s-extension] extension successfully installed in the [$SUBSCRIPTION_NAME] subscription"
  else
    echo "Failed to install [k8s-extension] extension in the [$SUBSCRIPTION_NAME] subscription"
    exit
  fi
else
  echo "[k8s-extension] extension already exists in the [$SUBSCRIPTION_NAME] subscription"
fi

# Checking if the KubernetesConfiguration resource provider is registered in your Azure subscription
echo "Checking if the [Microsoft.KubernetesConfiguration] resource provider is already registered in the [$SUBSCRIPTION_NAME] subscription..."
az provider show --namespace Microsoft.KubernetesConfiguration &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "No [Microsoft.KubernetesConfiguration] resource provider actually exists in the [$SUBSCRIPTION_NAME] subscription"
  echo "Registering [Microsoft.KubernetesConfiguration] resource provider in the [$SUBSCRIPTION_NAME] subscription..."

  # register the resource provider
  az provider register --namespace Microsoft.KubernetesConfiguration

  if [[ $? == 0 ]]; then
    echo "[Microsoft.KubernetesConfiguration] resource provider successfully registered in the [$SUBSCRIPTION_NAME] subscription"
  else
    echo "Failed to register [Microsoft.KubernetesConfiguration] resource provider in the [$SUBSCRIPTION_NAME] subscription"
    exit
  fi
else
  echo "[Microsoft.KubernetesConfiguration] resource provider already exists in the [$SUBSCRIPTION_NAME] subscription"
fi

# Check if the ExtensionTypes feature is registered in your Azure subscription
echo "Checking if the [ExtensionTypes] feature is already registered in the [Microsoft.KubernetesConfiguration] namespace..."
az feature show --namespace Microsoft.KubernetesConfiguration --name ExtensionTypes &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "No [ExtensionTypes] feature actually exists in the [Microsoft.KubernetesConfiguration] namespace"
  echo "Registering [ExtensionTypes] feature in the [Microsoft.KubernetesConfiguration] namespace..."

  # register the feature
  az feature register --namespace Microsoft.KubernetesConfiguration --name ExtensionTypes

  if [[ $? == 0 ]]; then
    echo "[ExtensionTypes] feature successfully registered in the [Microsoft.KubernetesConfiguration] namespace"
  else
    echo "Failed to register [ExtensionTypes] feature in the [Microsoft.KubernetesConfiguration] namespace"
    exit
  fi
else
  echo "[ExtensionTypes] feature already exists in the [Microsoft.KubernetesConfiguration] namespace"
fi

# Check if Dapr extension is installed on your AKS cluster
echo "Checking if the [Dapr] extension is already installed on the [$AKS_NAME] AKS cluster..."
az k8s-extension show \
  --name dapr \
  --cluster-name $AKS_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --cluster-type managedClusters &amp;amp;&amp;gt;/dev/null

if [[ $? != 0 ]]; then
  echo "No [Dapr] extension actually exists on the [$AKS_NAME] AKS cluster"
  echo "Installing [Dapr] extension on the [$AKS_NAME] AKS cluster..."

  # install the extension
  az k8s-extension create \
    --name dapr \
    --cluster-name $AKS_NAME \
    --resource-group $AKS_RESOURCE_GROUP_NAME \
    --cluster-type managedClusters \
    --extension-type "Microsoft.Dapr" \
    --scope cluster \
    --release-namespace "dapr-system"

  if [[ $? == 0 ]]; then
    echo "[Dapr] extension successfully installed on the [$AKS_NAME] AKS cluster"
  else
    echo "Failed to install [Dapr] extension on the [$AKS_NAME] AKS cluster"
    exit
  fi
else
  echo "[Dapr] extension already exists on the [$AKS_NAME] AKS cluster"
fi&lt;/LI-CODE&gt;
&lt;P&gt;You can create a user-assigned managed identity for the workload, create federated credentials, and grant it the permissions required to read secrets from the source Key Vault using the&amp;nbsp;&lt;A href="https://github.com/paolosalvatori/aks-key-vault#create-managed-identity-and-federated-identity-credential" target="_blank" rel="noopener"&gt;create-managed-identity.sh&lt;/A&gt;&amp;nbsp;Bash script. Then, you can run the following Bash script to retrieve the&amp;nbsp;clientId&amp;nbsp;of the user-assigned managed identity used to access Key Vault and create a Dapr secret store component for Azure Key Vault. The YAML manifest of the Dapr component assigns the following values to the component metadata:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Key Vault name to the&amp;nbsp;vaultName&amp;nbsp;attribute.&lt;/LI&gt;
&lt;LI&gt;Client ID of the user-assigned managed identity to the&amp;nbsp;azureClientId&amp;nbsp;attribute.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="bash"&gt;#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Get the managed identity client id
echo "Retrieving clientId for [$MANAGED_IDENTITY_NAME] managed identity..."
CLIENT_ID=$(az identity show \
  --name $MANAGED_IDENTITY_NAME \
  --resource-group $AKS_RESOURCE_GROUP_NAME \
  --query clientId \
  --output tsv)

if [[ -n $CLIENT_ID ]]; then
  echo "[$CLIENT_ID] clientId for the [$MANAGED_IDENTITY_NAME] managed identity successfully retrieved"
else
  echo "Failed to retrieve clientId for the [$MANAGED_IDENTITY_NAME] managed identity"
  exit
fi

# Create the Dapr secret store for Azure Key Vault
echo "Creating the secret store for [$KEY_VAULT_NAME] Azure Key Vault..."
cat &amp;lt;&amp;lt;EOF | kubectl apply -n $NAMESPACE -f -
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: $SECRET_STORE_NAME
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: ${KEY_VAULT_NAME,,}
  - name: azureClientId
    value: $CLIENT_ID
EOF&lt;/LI-CODE&gt;
&lt;P&gt;The next step is deploying the demo application using the following Bash script. The service account used by the Kubernetes deployment is federated with the user-assigned managed identity. Also note that the deployment is configured to use Dapr via the following Kubernetes annotations:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;dapr.io/app-id: The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID.&lt;/LI&gt;
&lt;LI&gt;dapr.io/enabled: Setting this parameter to true injects the Dapr sidecar into the pod.&lt;/LI&gt;
&lt;LI&gt;dapr.io/app-port: This parameter tells Dapr which port your application is listening on.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;For more information on Dapr annotations, see&amp;nbsp;&lt;A href="https://docs.dapr.io/reference/arguments-annotations-overview/" target="_blank" rel="noopener"&gt;Dapr arguments and annotations for daprd, CLI, and Kubernetes&lt;/A&gt;.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Check if the namespace exists in the cluster
RESULT=$(kubectl get namespace -o 'jsonpath={.items[?(@.metadata.name=="'$NAMESPACE'")].metadata.name}')

if [[ -n $RESULT ]]; then
  echo "[$NAMESPACE] namespace already exists in the cluster"
else
  echo "[$NAMESPACE] namespace does not exist in the cluster"
  echo "Creating [$NAMESPACE] namespace in the cluster..."
  kubectl create namespace $NAMESPACE
fi

# Create deployment
echo "Creating [$APP_NAME] deployment in the [$NAMESPACE] namespace..."
cat &amp;lt;&amp;lt;EOF | kubectl apply -n $NAMESPACE -f -
kind: Deployment
apiVersion: apps/v1
metadata:
  name: $APP_NAME
  labels:
    app: $APP_NAME
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $APP_NAME
      azure.workload.identity/use: "true"
  template:
    metadata:
      labels:
        app: $APP_NAME
        azure.workload.identity/use: "true"
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "$APP_NAME"
        dapr.io/app-port: "80"
    spec:
      serviceAccountName: $SERVICE_ACCOUNT_NAME
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Always
        ports:
          - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
EOF&lt;/LI-CODE&gt;
&lt;P&gt;You can run the following Bash script to connect to the demo pod and print out the value of the two sample secrets stored in Key Vault.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Get pod name
POD=$(kubectl get pod -n $NAMESPACE -o 'jsonpath={.items[0].metadata.name}')

if [[ -z $POD ]]; then
    echo 'no pod found, please check the name of the deployment and namespace'
    exit
fi

# List secrets from /mnt/secrets volume       
for SECRET in ${SECRETS[@]}
do
    echo "Retrieving [$SECRET] secret from [$KEY_VAULT_NAME] key vault..."
    json=$(kubectl exec --stdin --tty -n $NAMESPACE -c $CONTAINER $POD \
        -- curl http://localhost:3500/v1.0/secrets/key-vault-secret-store/$SECRET;echo)
    echo $json | jq .
done&lt;/LI-CODE&gt;
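The curl command above calls the Dapr secrets building-block API on the sidecar's default HTTP port (3500, as used in the script), which returns a JSON object keyed by the secret name. As a minimal sketch, assuming the same secret store component name, application code could build the request URL and parse the response like this (the sample value is illustrative):

```python
import json

DAPR_PORT = 3500  # the Dapr sidecar's default HTTP port, as used in the script above
SECRET_STORE = "key-vault-secret-store"  # Dapr secret store component name from the script

def secret_url(secret_name, port=DAPR_PORT, store=SECRET_STORE):
    # Build the Dapr secrets API URL that the curl command above calls
    return f"http://localhost:{port}/v1.0/secrets/{store}/{secret_name}"

def parse_secret(body, secret_name):
    # Dapr responds with a JSON object keyed by the secret name
    return json.loads(body)[secret_name]

print(secret_url("username"))
# http://localhost:3500/v1.0/secrets/key-vault-secret-store/username
print(parse_secret('{"username": "admin"}', "username"))
# admin
```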
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Hands-On Lab: External Secrets Operator with Azure Key Vault&lt;/H2&gt;
&lt;P&gt;In this section you will see the steps to configure the&amp;nbsp;&lt;A href="https://external-secrets.io/latest/" target="_blank" rel="noopener"&gt;External Secrets Operator&lt;/A&gt;&amp;nbsp;to use&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview" target="_blank" rel="noopener"&gt;Microsoft Entra Workload ID&lt;/A&gt;&amp;nbsp;to access an&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/key-vault/general/basic-concepts" target="_blank" rel="noopener"&gt;Azure Key Vault&lt;/A&gt;&amp;nbsp;resource. You can install the operator to your AKS cluster using Helm, as shown in the following Bash script:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#!/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Add the external secrets repository
helm repo add external-secrets https://charts.external-secrets.io

# Update local Helm chart repository cache
helm repo update

# Deploy external secrets via Helm
helm upgrade external-secrets external-secrets/external-secrets \
  --install \
  --namespace external-secrets \
  --create-namespace \
  --set installCRDs=true&lt;/LI-CODE&gt;
&lt;P&gt;Then, you can create a user-assigned managed identity for the workload, create federated credentials, and assign the proper permissions to it to read secrets from the source Key Vault using the&amp;nbsp;&lt;A href="https://github.com/paolosalvatori/aks-key-vault#create-managed-identity-and-federated-identity-credential" target="_blank" rel="noopener"&gt;create-managed-identity.sh&lt;/A&gt;&amp;nbsp;Bash script.&lt;/P&gt;
&lt;P&gt;Next, you can run the following Bash script to retrieve the&amp;nbsp;vaultUri&amp;nbsp;of your Key Vault resource and create a secret store custom resource. The YAML manifest of the secret store assigns the following values to the properties of the&amp;nbsp;azurekv&amp;nbsp;provider for Key Vault:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;authType:&amp;nbsp;WorkloadIdentity&amp;nbsp;configures the provider to utilize user-assigned managed identity with the proper permissions to access Key Vault.&lt;/LI&gt;
&lt;LI&gt;vaultUrl: Specifies the&amp;nbsp;vaultUri&amp;nbsp;Key Vault endpoint URL.&lt;/LI&gt;
&lt;LI&gt;serviceAccountRef.name: Specifies the Kubernetes service account in the workload namespace that is federated with the user-assigned managed identity.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="bash"&gt;#/bin/bash

# For more information, see:
# https://medium.com/@rcdinesh1/access-secrets-via-argocd-through-external-secrets-9173001be885
# https://external-secrets.io/latest/provider/azure-key-vault/

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Get key vault URL
VAULT_URL=$(az keyvault show \
  --name $KEY_VAULT_NAME \
  --resource-group $KEY_VAULT_RESOURCE_GROUP_NAME \
  --query properties.vaultUri \
  --output tsv \
  --only-show-errors)

if [[ -z $VAULT_URL ]]; then
  echo "[$KEY_VAULT_NAME] key vault URL not found"
  exit
fi

# Create secret store
echo "Creating the [$SECRET_STORE_NAME] secret store..."
cat &amp;lt;&amp;lt;EOF | kubectl apply -n $NAMESPACE -f -
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: $SECRET_STORE_NAME
spec:
  provider:
    azurekv:
      authType: WorkloadIdentity
      vaultUrl: "$VAULT_URL"
      serviceAccountRef:
        name: $SERVICE_ACCOUNT_NAME
EOF

# Get the secret store
kubectl get secretstore $SECRET_STORE_NAME -n $NAMESPACE -o yaml&lt;/LI-CODE&gt;
&lt;P&gt;For more information on secret stores for Key Vault, see&amp;nbsp;&lt;A href="https://external-secrets.io/latest/provider/azure-key-vault/" target="_blank" rel="noopener"&gt;Azure Key Vault&lt;/A&gt;&amp;nbsp;in the official documentation of the External Secrets Operator.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Create secrets
cat &amp;lt;&amp;lt;EOF | kubectl apply -n $NAMESPACE -f -
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: $EXTERNAL_SECRET_NAME
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name:  $SECRET_STORE_NAME
  target:
    name: $EXTERNAL_SECRET_NAME
    creationPolicy: Owner
  dataFrom:
  # find all secrets starting with user
  - find:
      name:
        regexp: "^user"
  data:
  # explicit type and name of secret in the Azure KV
  - secretKey: password
    remoteRef:
      key: secret/password
EOF&lt;/LI-CODE&gt;
&lt;P&gt;Azure Key Vault manages different object types. The External Secrets Operator supports&amp;nbsp;keys,&amp;nbsp;secrets, and&amp;nbsp;certificates. Simply prefix the key with&amp;nbsp;key,&amp;nbsp;secret, or&amp;nbsp;cert&amp;nbsp;to retrieve the desired type (defaults to secret).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;Object Type&lt;/TH&gt;
&lt;TH&gt;Return Value&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;secret&lt;/TD&gt;
&lt;TD&gt;The raw secret value.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;key&lt;/TD&gt;
&lt;TD&gt;A JWK which contains the public key. Azure Key Vault does not export the private key.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;certificate&lt;/TD&gt;
&lt;TD&gt;The raw CER contents of the x509 certificate.&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can create one or more ExternalSecret objects in your workload namespace to read keys, secrets, and certificates from Key Vault. An ExternalSecret object tells the operator to create a Kubernetes secret from the referenced Key Vault objects, using the type prefixes described above (the default type is secret; cert and key are also supported). The following Bash script creates an ExternalSecret object that references the secret store created in the previous step. The ExternalSecret object has two sections:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;dataFrom: This section contains a&amp;nbsp;find&amp;nbsp;element that uses regular expressions to retrieve any secret whose&amp;nbsp;name&amp;nbsp;starts with&amp;nbsp;user. For each secret, the Key Vault provider will create a key-value mapping in the&amp;nbsp;data&amp;nbsp;section of the Kubernetes secret using the name and value of the corresponding Key Vault secret.&lt;/LI&gt;
&lt;LI&gt;data: This section specifies the explicit type and name of the secrets, keys, and certificates to retrieve from Key Vault. In this sample, it tells the Key Vault provider to create a key-value mapping in the&amp;nbsp;data&amp;nbsp;section of the Kubernetes secret for the&amp;nbsp;password&amp;nbsp;Key Vault secret, using&amp;nbsp;password&amp;nbsp;as the key.&lt;/LI&gt;
&lt;/UL&gt;
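To make the two sections concrete, here is a small Python sketch of the mapping the operator performs (the Key Vault secret names below are illustrative, not taken from the article): dataFrom.find copies every secret whose name matches the regular expression, and each data entry copies one named secret under its secretKey:

```python
import re

# Hypothetical Key Vault contents; names and values are illustrative
key_vault = {
    "username": "admin",
    "user-id": "42",
    "password": "s3cr3t",
    "api-key": "abc",
}

def project_secret(vault, find_regexp, explicit):
    # dataFrom.find: copy every secret whose name matches the regular expression
    data = {name: value for name, value in vault.items() if re.match(find_regexp, name)}
    # data entries: map each secretKey to the value of the referenced Key Vault secret
    data.update({secret_key: vault[remote_key] for secret_key, remote_key in explicit.items()})
    return data

print(project_secret(key_vault, r"^user", {"password": "password"}))
# {'username': 'admin', 'user-id': '42', 'password': 's3cr3t'}
```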
&lt;P&gt;For more information on external secrets, see&amp;nbsp;&lt;A href="https://external-secrets.io/latest/provider/azure-key-vault/" target="_blank" rel="noopener"&gt;Azure Key Vault&lt;/A&gt;&amp;nbsp;in the official documentation of the External Secrets Operator.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Create secrets
cat &amp;lt;&amp;lt;EOF | kubectl apply -n $NAMESPACE -f -
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: $EXTERNAL_SECRET_NAME
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name:  $SECRET_STORE_NAME
  target:
    name: $EXTERNAL_SECRET_NAME
    creationPolicy: Owner
  dataFrom:
  # find all secrets starting with user
  - find:
      name:
        regexp: "^user"
  data:
  # explicit type and name of secret in the Azure KV
  - secretKey: password
    remoteRef:
      key: secret/password
EOF&lt;/LI-CODE&gt;
&lt;P&gt;Finally, you can run the following Bash script to print the key-value mappings contained in the Kubernetes secret created by the External Secrets Operator.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;#/bin/bash

# Variables
source ../00-variables.sh
source ./00-variables.sh

# Print secret values from the Kubernetes secret
json=$(kubectl get secret $EXTERNAL_SECRET_NAME -n $NAMESPACE -o jsonpath='{.data}')

# Decode the base64 of each value in the returned json
echo $json | jq -r 'to_entries[] | .key + ": " + (.value | @base64d)'&lt;/LI-CODE&gt;
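The jq expression above base64-decodes each value in the secret's data map, since Kubernetes stores secret values base64-encoded. The same decoding step can be sketched in Python (the encoded payload below is illustrative):

```python
import base64

def decode_secret_data(data):
    # Kubernetes stores secret values base64-encoded; decode each entry to text
    return {key: base64.b64decode(value).decode("utf-8") for key, value in data.items()}

# Illustrative payload, matching the shape returned by
# kubectl get secret -o jsonpath='{.data}'
encoded = {"username": "YWRtaW4=", "password": "czNjcjN0"}
print(decode_secret_data(encoded))
# {'username': 'admin', 'password': 's3cr3t'}
```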
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Conclusions&lt;/H2&gt;
&lt;P&gt;In this article, we explored different methods for reading secrets from Azure Key Vault in&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/what-is-aks" target="_blank" rel="noopener"&gt;Azure Kubernetes Service (AKS)&lt;/A&gt;. Each technology offers its own advantages and considerations. Here's a summary:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview" target="_blank" rel="noopener"&gt;Microsoft Entra Workload ID&lt;/A&gt;:
&lt;UL&gt;
&lt;LI&gt;Transparently assigns a user-assigned managed identity to a pod or deployment.&lt;/LI&gt;
&lt;LI&gt;Allows using Microsoft Entra integrated security and Azure RBAC for authorization.&lt;/LI&gt;
&lt;LI&gt;Provides secure access to Azure Key Vault and other managed services.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver" target="_blank" rel="noopener"&gt;Azure Key Vault provider for Secrets Store CSI Driver&lt;/A&gt;:
&lt;UL&gt;
&lt;LI&gt;Secrets, keys, and certificates can be accessed as files from mounted volumes.&lt;/LI&gt;
&lt;LI&gt;Optionally, Kubernetes secrets can be created to store keys, secrets, and certificates from Key Vault.&lt;/LI&gt;
&lt;LI&gt;No need for Azure-specific libraries to access secrets.&lt;/LI&gt;
&lt;LI&gt;Simplifies secret management with transparent integration.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/" target="_blank" rel="noopener"&gt;Dapr Secret Store for Key Vault&lt;/A&gt;:
&lt;UL&gt;
&lt;LI&gt;Allows applications to retrieve secrets from various secret stores, including Azure Key Vault.&lt;/LI&gt;
&lt;LI&gt;Simplifies secret management with Dapr's consistent API.&lt;/LI&gt;
&lt;LI&gt;Supports Azure Key Vault integration with managed identities.&lt;/LI&gt;
&lt;LI&gt;Supports other secret stores, such as AWS Secrets Manager, Google Cloud Secret Manager, and HashiCorp Vault.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://external-secrets.io/latest/" target="_blank" rel="noopener"&gt;External Secrets Operator&lt;/A&gt;:
&lt;UL&gt;
&lt;LI&gt;Manages secrets stored in external secret stores such as Azure Key Vault, AWS Secrets Manager, Google Cloud Secret Manager, HashiCorp Vault, and more.&lt;/LI&gt;
&lt;LI&gt;Provides synchronization of Key Vault secrets into Kubernetes secrets.&lt;/LI&gt;
&lt;LI&gt;Simplifies secret management with Kubernetes-native integration.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Depending on your requirements and preferences, you can choose the method that best fits your use case. Each technology offers unique features and benefits to securely access and manage secrets in your AKS workloads. For more information and detailed documentation on each mechanism, refer to the provided resources in this article.&lt;/P&gt;</description>
      <pubDate>Tue, 11 Feb 2025 13:20:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/four-methods-to-access-azure-key-vault-from-azure-kubernetes/m-p/4376662#M228</guid>
      <dc:creator>paolosalvatori</dc:creator>
      <dc:date>2025-02-11T13:20:37Z</dc:date>
    </item>
    <item>
      <title>Azure Virtual Machine: Centralized insights for smarter management</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/azure-virtual-machine-centralized-insights-for-smarter/m-p/4366086#M227</link>
      <description>&lt;H2 style="font-size: large;"&gt;&lt;STRONG&gt;&lt;U&gt;Introduction&lt;/U&gt;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Managing Azure Virtual Machines (VMs) can be challenging without the right tools.&lt;/P&gt;
&lt;P&gt;There are several approaches to &lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/monitor-vm" target="_blank" rel="noopener"&gt;monitoring&lt;/A&gt;, some of which extend beyond the platform's native capabilities. These include options like installing an agent or utilizing third-party products, though they often require additional setup and may involve extra costs.&lt;/P&gt;
&lt;P&gt;This workbook is designed to use the native platform capabilities to give you a clear and detailed view of your VMs, helping you make informed decisions confidently without any additional cost.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To get started, check out the&amp;nbsp;&lt;STRONG&gt;&lt;A href="https://github.com/dolevshor/Azure-VirtualMachines-Insights" target="_blank" rel="noopener"&gt;GitHub repository&lt;/A&gt;&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2 style="font-size: large;"&gt;&lt;STRONG&gt;&lt;U&gt;Why do you need this Workbook?&lt;/U&gt;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;When managing multiple VMs, understanding usage trends, comparing key metrics, and identifying areas for improvement can be time-consuming.&lt;/P&gt;
&lt;P&gt;The Azure Virtual Machine Insights Workbook simplifies this process by centralizing essential data into one place from multiple subscriptions and resource groups. It covers inventory to provide you with a clear overview of all your VM resources and platform metrics to help you monitor, analyze, compare, and optimize performance effectively.&lt;/P&gt;
&lt;H2 style="font-size: large;"&gt;&lt;STRONG&gt;&lt;U&gt;Scenarios to use this Workbook&lt;/U&gt;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Here are a few examples of how this workbook can bring value:&lt;/P&gt;
&lt;H3 style="font-size: large;"&gt;&lt;U&gt;Management&lt;/U&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Centralized Inventory Management&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Easily view all your VMs in one place, ensuring a clear overview of your resources.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 style="font-size: large;"&gt;&lt;U&gt;Performance and Monitoring&lt;/U&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Performance monitoring&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Analyze metrics like CPU, memory, network, and disk usage to identify performance bottlenecks and maintain optimal application performance.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Performance trends&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Examine long-term performance trends to understand how your VMs behave over time and identify areas for improvement.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Comparing different VM types for the same workload&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Compare the performance of various VM types running the same workload to determine the best configuration for your needs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Virtual Machines behind a load balancer&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Monitor and compare the performance of VMs behind a load balancer to ensure even distribution and optimal resource utilization.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Virtual Machines farm&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Assess and compare the performance of VMs within a server farm to identify outliers and maintain operational efficiency.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 style="font-size: large;"&gt;&lt;U&gt;Cost&lt;/U&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost Optimization&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Detect and compare underutilized VMs or overprovisioned resources to reduce waste and save on costs.&lt;/LI&gt;
&lt;LI&gt;Analyze usage trends over time to determine whether an hourly spend commitment through &lt;A href="https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/savings-plan-compute-overview" target="_blank" rel="noopener"&gt;Azure savings plans&lt;/A&gt; is feasible.&lt;/LI&gt;
&lt;LI&gt;Understand the timeframes for automating the deallocation of non-production VMs, unless &lt;A href="https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/save-compute-costs-reservations" target="_blank" rel="noopener"&gt;Azure Reservations&lt;/A&gt; cover them.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 style="font-size: large;"&gt;&lt;U&gt;Independent software vendors (ISVs)&lt;/U&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;ISV managing VMs per customer&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Compare performance across all customer VMs to identify trends and ensure consistent service delivery for each customer.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 style="font-size: large;"&gt;&lt;U&gt;Trends and Planning&lt;/U&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Resource Planning&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Track usage trends over time to better predict future resource needs and ensure your VMs are prepared for business growth.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Scalability Planning&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Utilize insights from trends and metrics to prepare for scaling your VMs during peak demand or business growth.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 style="font-size: large;"&gt;&lt;U&gt;&lt;STRONG&gt;Examples from the workbook&lt;/STRONG&gt;&lt;/U&gt;&lt;/H2&gt;
&lt;img&gt;Overview&lt;/img&gt;&lt;img&gt;Monitor (Overview)&lt;/img&gt;&lt;img&gt;Monitor (Network)&lt;/img&gt;&lt;img&gt;Inventory&lt;/img&gt;
&lt;H2 style="font-size: large;"&gt;&lt;STRONG&gt;&lt;U&gt;Conclusion&lt;/U&gt;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;The Azure Virtual Machine Insights Workbook helps you manage your VMs by bringing key metrics and insights together in one place, using native Azure features at no extra cost.&lt;/P&gt;
&lt;P&gt;It lets you analyze performance, cut costs, and plan for future growth. Whether you are investigating performance issues, analyzing underused resources, or predicting future needs, this workbook helps you make smart decisions and manage your infrastructure more efficiently.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For any queries or to contribute, feel free to connect via the GitHub repo or&amp;nbsp;&lt;A class="lia-external-url" href="https://github.com/dolevshor/Azure-VirtualMachines-Insights/issues" target="_blank" rel="noopener"&gt;submit feedback&lt;/A&gt;!&lt;/P&gt;</description>
      <pubDate>Mon, 27 Jan 2025 14:18:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/azure-virtual-machine-centralized-insights-for-smarter/m-p/4366086#M227</guid>
      <dc:creator>Dolev_Shor</dc:creator>
      <dc:date>2025-01-27T14:18:48Z</dc:date>
    </item>
    <item>
      <title>Using Structured Outputs in Azure OpenAI’s GPT-4o for consistent document data processing</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/using-structured-outputs-in-azure-openai-s-gpt-4o-for-consistent/m-p/4261737#M226</link>
      <description>&lt;img&gt;A stack of well-structured documents on a table&lt;/img&gt;
&lt;P&gt;When using language models for AI-driven document processing, ensuring reliability and consistency in data extraction is crucial for downstream processing.&lt;/P&gt;
&lt;P&gt;This article outlines how the Structured Outputs feature of GPT-4o offers the most reliable and cost-effective solution to this challenge.&lt;/P&gt;
&lt;P&gt;To jump into action and use Structured Outputs for document processing,&amp;nbsp;&lt;A title="Data Extraction - Azure OpenAI GPT-4o with Vision" href="https://github.com/Azure-Samples/azure-ai-document-processing-samples/blob/main/samples/extraction/vision-based/openai.ipynb" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;get hands on with our Python samples on GitHub&lt;/STRONG&gt;&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN style="font-size: x-large;"&gt;Key challenges in consistency in generating structured outputs&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;ISVs and Startups building document data extraction solutions grapple with the complexity of ensuring that language models generate output consistent with their defined schemas. Key challenges include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Limitations in inline JSON output&lt;/STRONG&gt;. While some models introduced the ability to produce JSON outputs, inconsistencies still arise from them. Language models can generate a response that doesn’t conform to the provided schema. This requires additional prompt engineering or post-processing to resolve.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Complexity in prompts&lt;/STRONG&gt;. Including detailed inline JSON schemas within prompts increases the overall number of input tokens consumed. This is particularly problematic if you have a large, complex output structure.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;SPAN style="font-size: x-large;"&gt;Benefits of using the Structured Outputs features in Azure OpenAI’s GPT-4o&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;To overcome the limitations and inconsistencies of inline JSON outputs, GPT-4o’s structured outputs enables the following capabilities:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Strict schema adherence.&lt;/STRONG&gt;&amp;nbsp;Structured Outputs dynamically constrains the model’s outputs to adhere to JSON schemas provided in the response format of the request to GPT-4o. This ensures that the response is always well-formed for downstream processing.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Reliability and consistency.&amp;nbsp;&lt;/STRONG&gt;Using additional libraries, such as Pydantic, combined with Structured Outputs, developers can define exactly how data should be constrained to a specific model. This minimizes any post-processing and improves data validation.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost optimization.&amp;nbsp;&lt;/STRONG&gt;Unlike inline JSON schemas, Structured Outputs do not count towards the total number of input tokens consumed in a request to GPT-4o. This provides more overall input tokens for consuming document data.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Let’s explore how to use Structured Outputs with document processing in more detail.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN style="font-size: x-large;"&gt;Understanding Structured Outputs in document processing&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Introduced in September 2024, the &lt;STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/structured-outputs?tabs=python-secure" target="_blank" rel="noopener"&gt;Structured Outputs feature in Azure OpenAI’s GPT-4o model&lt;/A&gt;&lt;/STRONG&gt; provided much needed flexibility in requests to generate a consistent output using class models and JSON schemas.&lt;/P&gt;
&lt;P&gt;For document processing, this enables a more streamlined approach to both structured data extraction and document classification. This is particularly useful when building document processing pipelines.&lt;/P&gt;
&lt;P&gt;By utilizing a JSON schema format, GPT-4o constrains the generated output to a JSON structure that is consistent with every request. These JSON structures can then easily be deserialized into a model object that can be processed easily by other services or systems. This eliminates potential errors often caused by inline JSON structures being misinterpreted by language models.&lt;/P&gt;
&lt;H3&gt;Implementing consistent outputs using GPT-4o in Python&lt;/H3&gt;
&lt;P&gt;To take full advantage of this feature and simplify schema generation in Python, Pydantic is the ideal supporting library for building class models that define the desired output structure. Pydantic offers built-in generation of the JSON schema required for the request, as well as data validation.&lt;/P&gt;
&lt;P&gt;Below is an example for extracting data from an invoice demonstrating the capabilities of a complex class structure using Structured Outputs.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from typing import Optional
from pydantic import BaseModel


class InvoiceSignature(BaseModel):
    type: Optional[str]
    name: Optional[str]
    is_signed: Optional[bool]


class InvoiceProduct(BaseModel):
    id: Optional[str]
    description: Optional[str]
    unit_price: Optional[float]
    quantity: Optional[float]
    total: Optional[float]
    reason: Optional[str]


class Invoice(BaseModel):
    invoice_number: Optional[str]
    purchase_order_number: Optional[str]
    customer_name: Optional[str]
    customer_address: Optional[str]
    delivery_date: Optional[str]
    payable_by: Optional[str]
    products: Optional[list[InvoiceProduct]]
    returns: Optional[list[InvoiceProduct]]
    total_product_quantity: Optional[float]
    total_product_price: Optional[float]
    product_signatures: Optional[list[InvoiceSignature]]
    returns_signatures: Optional[list[InvoiceSignature]]&lt;/LI-CODE&gt;
&lt;P&gt;The JSON schema supported by the Structured Outputs feature requires that every property be marked as required. In this example, the Optional shorthand notation still satisfies that constraint: it defines the property's type as an &lt;STRONG&gt;anyOf&lt;/STRONG&gt; of the expected type and null, so the model can generate a null value whenever the data can't be found in the document.&lt;/P&gt;
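The Optional shorthand is simply a Union with None, which is why the generated schema becomes an anyOf of the declared type and null. This equivalence can be checked with the standard typing module alone:

```python
from typing import Optional, Union, get_args

# Optional[str] is shorthand for Union[str, None]; schema generators turn
# this union into a JSON schema "anyOf" of the string type and null
assert Optional[str] == Union[str, None]

# The union's members: the expected type plus NoneType
assert get_args(Optional[str]) == (str, type(None))
```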
&lt;P&gt;With a well-defined model in place, requests to the Azure OpenAI chat completions endpoint are as simple as providing the model as the request’s response format. This is demonstrated below in a request to extract data from an invoice.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;completion = openai_client.beta.chat.completions.parse(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are an AI assistant that extracts data from documents.",
        },
        {
            "role": "user",
            "content": """Extract the data from this invoice. 
            - If a value is not present, provide null.
            - Dates should be in the format YYYY-MM-DD.""",
        },
        {
            "role": "user",
            "content": document_markdown_content,
        }
    ],
    response_format=Invoice,
    max_tokens=4096,
    temperature=0.1,
    top_p=0.1
)&lt;/LI-CODE&gt;
&lt;H2&gt;&lt;SPAN style="font-size: x-large;"&gt;Best practices for utilizing Structured Outputs for document data processing&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Schema/model design.&lt;/STRONG&gt; Use well-defined names for nested objects and properties to make it easier for the GPT-4o model to interpret how to extract these key pieces of information from documents. Be specific in terminology to ensure the model determines the correct value for fields.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Utilize prompt engineering.&amp;nbsp;&lt;/STRONG&gt;Continue to use your input prompts to provide direct instruction to the model on how to work with the document provided. For example, include the definitions for domain jargon, acronyms, and synonyms that may exist in a document type.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Use libraries that generate JSON schemas.&lt;/STRONG&gt;&amp;nbsp;Libraries, such as Pydantic for Python, make it easier to focus on building out models and data validation without the complexities of understanding how to convert or build a JSON schema from scratch.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Combine with GPT-4o vision capabilities.&lt;/STRONG&gt;&amp;nbsp;Processing document pages as images in a request to GPT-4o using Structured Outputs can yield higher accuracy and cost-effectiveness when compared to processing document text alone.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;SPAN style="font-size: x-large;"&gt;Summary&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Leveraging Structured Outputs in Azure OpenAI’s GPT-4o provides a necessary solution to ensure consistent and reliable outputs when processing documents. By enforcing adherence to JSON schemas, this feature minimizes the chances of errors, reduces post-processing needs, and optimizes token usage.&lt;/P&gt;
&lt;P&gt;The one key recommendation to take away from this guidance is:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Evaluate Structured Outputs for your use cases&lt;/STRONG&gt;. We have provided a collection of samples on GitHub to guide you through potential scenarios, including extraction and classifications. Adapt these samples to your specific document types to evaluate the effectiveness of the techniques. &lt;STRONG&gt;&lt;A href="https://github.com/Azure-Samples/azure-ai-document-processing-samples/blob/main/samples/extraction/vision-based/openai.ipynb" target="_blank" rel="noopener"&gt;Get the samples on GitHub&lt;/A&gt;&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;By exploring this approach, you can further streamline your document processing workflows, enhancing developer productivity and satisfaction for end users.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN style="font-size: x-large;"&gt;Read more on document processing with Azure AI&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Thank you for taking the time to read this article. We are sharing our insights for ISVs and Startups that enable document processing in their AI-powered solutions, based on real-world challenges we encounter. We invite you to continue your learning through our additional insights in this series.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;DIV class="h1-like-title"&gt;&lt;STRONG&gt;&lt;A title="Optimizing Data Extraction Accuracy with Custom Models in Azure AI Document Intelligence" href="https://techcommunity.microsoft.com/t5/azure-for-isv-and-startups/optimizing-data-extraction-accuracy-with-custom-models-in-azure/ba-p/4095089" target="_self"&gt;Optimizing Data Extraction Accuracy with Custom Models in Azure AI Document Intelligence&lt;/A&gt;&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Discover how to enhance data extraction accuracy with Azure AI Document Intelligence by tailoring models to your unique document structures.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="h1-like-title"&gt;&lt;STRONG&gt;&lt;A title="Using Azure AI Document Intelligence and Azure OpenAI to extract structured data from documents" href="https://techcommunity.microsoft.com/t5/azure-for-isv-and-startups/using-azure-ai-document-intelligence-and-azure-openai-to-extract/ba-p/4107746" target="_self"&gt;Using Azure AI Document Intelligence and Azure OpenAI to extract structured data from documents&lt;/A&gt;&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Discover how Azure AI Document Intelligence and Azure OpenAI efficiently extract structured data from documents, streamlining document processing workflows for AI-powered solutions.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-for-isv-and-startups/evaluating-azure-ai-models-for-document-data-extraction/ba-p/4157719" target="_self"&gt;Evaluating the quality of AI document data extraction with small and large language models&lt;/A&gt;&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Discover our evaluation of the effectiveness of AI models in quality document data extraction using small and large language models (SLMs and LLMs).&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;SPAN style="font-size: x-large;"&gt;Further reading&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/structured-outputs?tabs=python-secure" target="_blank" rel="noopener"&gt;How to use structured outputs with Azure OpenAI Service | Microsoft Learn&lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;Discover how the structured outputs feature works, including limitations with schema size and field types.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions" target="_blank" rel="noopener"&gt;Prompt engineering techniques with Azure OpenAI | Microsoft Learn&lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;Discover how to improve your prompting techniques with Azure OpenAI to maximize the accuracy of your document data extraction.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.pydantic.dev/latest/why/#json-schema" target="_blank" rel="noopener"&gt;Why use Pydantic | Pydantic Docs&lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;Discover why you should adopt Pydantic when using the structured outputs feature in Python applications, including details on how the JSON Schema output works.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
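&lt;P&gt;To make the Pydantic reference above concrete, the following sketch shows the JSON Schema that Pydantic derives from a nested model; this is the schema that the structured outputs feature uses to constrain generation. The model names and fields here are illustrative only.&lt;/P&gt;

```python
# Sketch: the JSON Schema that Pydantic derives from a model -- this is
# what the structured outputs feature uses to constrain generation.
import json
from pydantic import BaseModel

class LineItem(BaseModel):
    description: str
    quantity: int
    unit_price: float

class Invoice(BaseModel):
    invoice_number: str
    line_items: list[LineItem]

# Nested models become reusable definitions under "$defs".
print(json.dumps(Invoice.model_json_schema(), indent=2))
```

&lt;P&gt;Every field without a default becomes a required property, which is exactly the strictness structured outputs needs.&lt;/P&gt;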
</description>
      <pubDate>Wed, 06 Nov 2024 09:18:09 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/using-structured-outputs-in-azure-openai-s-gpt-4o-for-consistent/m-p/4261737#M226</guid>
      <dc:creator>james_croft</dc:creator>
      <dc:date>2024-11-06T09:18:09Z</dc:date>
    </item>
    <item>
      <title>Azure Snapshots: Simplify Management and monitoring</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/azure-snapshots-simplify-management-and-monitoring/m-p/4255837#M224</link>
      <description>&lt;H2&gt;&lt;FONT size="5"&gt;Introduction&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;Managing snapshots in Azure can be challenging when you have a large number of them across different Virtual Machines and subscriptions. Over time, snapshots can accumulate, leading to increased storage costs and making it harder to identify which ones are still needed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This was the trigger to build the &lt;STRONG&gt;Azure Snapshots Insights Workbook&lt;/STRONG&gt;, designed to simplify monitoring and management of Azure Snapshots.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;To get started, check out the&amp;nbsp;&lt;/SPAN&gt;&lt;A title="GitHub repository" href="https://github.com/dolevshor/Azure-Snapshots-Insights" target="_blank" rel="noopener noreferrer"&gt;GitHub repository&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;FONT size="5"&gt;Benefits of using this workbook&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Centralized Monitoring:&lt;/STRONG&gt; Manage all your snapshots in one place across multiple resource groups and subscriptions.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost Optimization:&lt;/STRONG&gt; Reduce storage expenses by identifying and deleting outdated snapshots.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Better Insights:&lt;/STRONG&gt; Gain a clearer understanding of your snapshot usage patterns through the inventory dashboard.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;FONT size="5"&gt;Key features&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Overview:&amp;nbsp;&lt;/STRONG&gt;Monitor important details like snapshot name, source disk, size (GiB), resource group, location, storage type, snapshot type (Full/Incremental), time created, public network access, etc.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Inventory:&amp;nbsp;&lt;/STRONG&gt;Monitor the snapshot usage count by Subscription ID, Resource Group, Location, Storage type, Source disk size (GiB), Snapshot type, Disk state, Create Option, API Version, Public Network Access, Access policy, Data Access Auth Mode.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Filtering:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Filter snapshots by specific subscriptions, resource groups, and resources.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Age-Based Filtering:&amp;nbsp;&lt;/STRONG&gt;View snapshots by creation age (&lt;EM&gt;1, 2, 3, 4, 5, 6, 7, 14, 30, 60, 90 days ago&lt;/EM&gt;), making it easy to identify old ones.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Snapshot Deletion:&amp;nbsp;&lt;/STRONG&gt;Remove outdated or unnecessary snapshots with just a few clicks directly from the workbook.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;Age-Based Filtering&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;Inventory&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;H2&gt;&lt;FONT size="5"&gt;Best practices for managing Azure Snapshots&lt;/FONT&gt;&lt;U&gt;&lt;/U&gt;&lt;U&gt;&lt;/U&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;Cost Optimization&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Regularly review snapshot usage&lt;/STRONG&gt;: Track the number and size of snapshots. This helps avoid unexpected costs and ensures you are not keeping unnecessary snapshots.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Monitor snapshot age with the Age-Based filtering: &lt;/STRONG&gt;Regularly review the age of your snapshots to identify and delete those that have surpassed your retention period to avoid unnecessary storage costs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Implement a retention policy&lt;/STRONG&gt;: Define how long snapshots should be retained based on your organization’s data retention requirements. Delete older snapshots regularly to avoid accumulating unnecessary costs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Use Incremental snapshots&lt;/STRONG&gt;: Whenever possible, use incremental snapshots, which only capture changes since the last snapshot. This approach is more cost-effective and efficient than full snapshots.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Monitor snapshot costs&lt;/STRONG&gt;: Regularly monitor the cost associated with snapshots and optimize them as needed.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Evaluate snapshot storage options&lt;/STRONG&gt;: Consider the type of storage account used for snapshots, especially when dealing with large volumes. Premium storage might be necessary for high-performance requirements, while standard storage can be more cost-effective for less critical data.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Optimize snapshot frequency&lt;/STRONG&gt;: Evaluate your snapshot frequency based on how often your data changes. For example, if your data changes infrequently, taking daily snapshots might not be necessary.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Clean up orphaned snapshots&lt;/STRONG&gt;: Identify and clean up orphaned snapshots that are no longer associated with any resources to optimize costs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;Security&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Secure your snapshots&lt;/STRONG&gt;: Use Azure Role-Based Access Control (RBAC) to restrict access to your snapshots. Ensure that only authorized users have permission to create, manage, or delete them.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;Performance and Efficiency&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Store snapshots in the same region: &lt;/STRONG&gt;Ensure your snapshots are stored in the same region as the source disk to minimize latency and reduce costs when creating or restoring snapshots.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Automate snapshot management&lt;/STRONG&gt;: Automate the snapshot creation, deletion, and management processes.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;Data Protection and Recovery&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Test snapshot recovery&lt;/STRONG&gt;: Periodically test the restoration process to ensure that your snapshots can be successfully used to recover data when needed.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Take snapshots before major changes&lt;/STRONG&gt;: Always create snapshots before performing significant updates or configurations on your Virtual Machines or managed disks, so you can quickly roll back if something goes wrong.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Use Azure Backup for Long-Term Storage&lt;/STRONG&gt;: If you need long-term retention, consider using Azure Backup instead of keeping snapshots indefinitely, as it provides more cost-effective and robust data retention.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="4"&gt;Governance and Management&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Apply Tags for better organization&lt;/STRONG&gt;: Tag your snapshots with relevant information, such as &lt;EM&gt;Environment&lt;/EM&gt; (e.g., "production", "development"), &lt;EM&gt;Application&lt;/EM&gt;, or &lt;EM&gt;Owner&lt;/EM&gt;, to improve management and cost tracking.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Use Resource Locks&lt;/STRONG&gt;: Apply resource locks to prevent accidental deletion of critical snapshots. This is particularly useful for snapshots that serve as backups for essential data.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Check for dependencies&lt;/STRONG&gt;: Before deleting a snapshot, ensure it’s not linked to any dependent resources or processes that could be disrupted if the snapshot is removed.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;FONT size="5"&gt;Conclusion&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Azure Snapshots Insights Workbook offers a streamlined approach to managing and optimizing your Azure snapshots, helping you stay organized and reduce unnecessary storage costs. By leveraging its centralized monitoring, age-based filtering, and deletion capabilities, you can maintain a more efficient cloud environment. Implementing these best practices will ensure your snapshots are always under control, ultimately enhancing your Azure resource management.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;To get started, check out the&amp;nbsp;&lt;/SPAN&gt;&lt;A title="GitHub repository" href="https://github.com/dolevshor/Azure-Snapshots-Insights" target="_blank" rel="noopener noreferrer"&gt;GitHub repository&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;For any queries or to contribute, feel free to connect via the repo or&amp;nbsp;&lt;/SPAN&gt;&lt;A title="submit feedback" href="https://github.com/dolevshor/Azure-Snapshots-Insights/issues" target="_blank" rel="noopener noreferrer"&gt;submit feedback&lt;/A&gt;&lt;SPAN&gt;!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Sep 2024 08:08:34 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/azure-snapshots-simplify-management-and-monitoring/m-p/4255837#M224</guid>
      <dc:creator>Dolev_Shor</dc:creator>
      <dc:date>2024-09-30T08:08:34Z</dc:date>
    </item>
    <item>
      <title>Deploy Secure Azure AI Studio with a Managed Virtual Network</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/deploy-secure-azure-ai-studio-with-a-managed-virtual-network/m-p/4251073#M221</link>
      <description>&lt;DIV class="markdown-heading" dir="auto"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;This article and the companion &lt;A href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep" target="_self"&gt;sample&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/A&gt;demonstrate how to set up an&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-studio/what-is-ai-studio" target="_blank" rel="nofollow noopener"&gt;Azure AI Studio&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;environment that uses managed identity and Azure RBAC to connect to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-services/what-are-ai-services" target="_blank" rel="nofollow noopener"&gt;Azure AI Services&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and dependent resources, with the managed virtual network isolation mode set to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-studio/how-to/configure-managed-network" target="_blank" rel="nofollow noopener"&gt;Allow Internet Outbound&lt;/A&gt;. For more information, see&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-studio/how-to/configure-managed-network" target="_blank" rel="nofollow noopener"&gt;How to configure a managed network for Azure AI Studio hubs&lt;/A&gt;&amp;nbsp;and the following resources:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/aistudio/docs" target="_blank" rel="nofollow noopener"&gt;Azure AI Studio Documentation&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Azure Resources&lt;/H2&gt;
&lt;A id="user-content-azure-resources" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#azure-resources" target="_blank" rel="noopener" aria-label="Permalink: Azure Resources"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;You can use the Bicep templates in this &lt;A href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep" target="_self"&gt;GitHub repository&lt;/A&gt;&amp;nbsp;to deploy the following Azure resources:&lt;/P&gt;
&lt;DIV id="tinyMceEditorpaolosalvatori_2" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt; &lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;Resource&lt;/TH&gt;
&lt;TH&gt;Type&lt;/TH&gt;
&lt;TH&gt;Description&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;Azure Application Insights&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/templates/microsoft.insights/components?pivots=deployment-language-bicep" target="_blank" rel="nofollow noopener"&gt;Microsoft.Insights/components&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;An Azure Application Insights instance associated with the Azure AI Studio workspace&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Azure Monitor Log Analytics&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/templates/microsoft.operationalinsights/workspaces?pivots=deployment-language-bicep" target="_blank" rel="nofollow noopener"&gt;Microsoft.OperationalInsights/workspaces&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;An Azure Log Analytics workspace used to collect diagnostics logs and metrics from Azure resources&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Azure Key Vault&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/templates/microsoft.keyvault/vaults?pivots=deployment-language-bicep" target="_blank" rel="nofollow noopener"&gt;Microsoft.KeyVault/vaults&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;An Azure Key Vault instance associated with the Azure AI Studio workspace&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Azure Storage Account&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/templates/microsoft.storage/storageaccounts" target="_blank" rel="nofollow noopener"&gt;Microsoft.Storage/storageAccounts&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;An Azure Storage instance associated with the Azure AI Studio workspace&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Azure Container Registry&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/templates/microsoft.containerregistry/registries" target="_blank" rel="nofollow noopener"&gt;Microsoft.ContainerRegistry/registries&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;An Azure Container Registry instance associated with the Azure AI Studio workspace&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Azure AI Hub / Project&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/templates/microsoft.machinelearningservices/workspaces" target="_blank" rel="nofollow noopener"&gt;Microsoft.MachineLearningServices/workspaces&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;An Azure AI Studio Hub and Project (Azure ML Workspace of kind 'hub' and 'project')&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Azure AI Services&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/templates/microsoft.cognitiveservices/accounts" target="_blank" rel="nofollow noopener"&gt;Microsoft.CognitiveServices/accounts&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;An Azure AI Services resource that serves as the model-as-a-service endpoint provider, including GPT-4o and ADA text embeddings model deployments&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Azure Virtual Network&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/templates/microsoft.network/virtualnetworks?pivots=deployment-language-bicep" target="_blank" rel="nofollow noopener"&gt;Microsoft.Network/virtualNetworks&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;A bring-your-own (BYO) virtual network hosting a jumpbox virtual machine to manage Azure AI Studio&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Azure Bastion Host&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/templates/microsoft.network/bastionhosts" target="_blank" rel="nofollow noopener"&gt;Microsoft.Network/virtualNetworks&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;A Bastion Host defined in the BYO virtual network that provides RDP connectivity to the jumpbox virtual machine&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Azure NAT Gateway&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/nat-overview" target="_blank" rel="nofollow noopener"&gt;Microsoft.Network/natGateways&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;An Azure NAT Gateway that provides outbound connectivity to the jumpbox virtual machine&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Azure Private Endpoints&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/templates/microsoft.network/privateendpoints" target="_blank" rel="nofollow noopener"&gt;Microsoft.Network/privateEndpoints&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;Azure Private Endpoints defined in the BYO virtual network for Azure Container Registry, Azure Key Vault, Azure Storage Account, and Azure AI Hub Workspace&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Azure Private DNS Zones&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/templates/microsoft.network/privatednszones" target="_blank" rel="nofollow noopener"&gt;Microsoft.Network/privateDnsZones&lt;/A&gt;&lt;/TD&gt;
&lt;TD&gt;Azure Private DNS Zones are used for the DNS resolution of the Azure Private Endpoints&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;DIV class="markdown-alert markdown-alert-note" dir="auto"&gt;
&lt;P class="markdown-alert-title"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can select a different version of the GPT model by specifying the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;openAiDeployments&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;parameter in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/main.bicepparam" target="_blank" rel="noopener"&gt;main.bicepparam&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;parameters file. For details on the models available in various Azure regions, please refer to the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models" target="_blank" rel="nofollow noopener"&gt;Azure OpenAI Service models&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;documentation.&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV class="markdown-alert markdown-alert-note" dir="auto"&gt;
&lt;P&gt;The default deployment includes an Azure Container Registry resource. However, if you wish not to deploy an Azure Container Registry, you can simply set the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;acrEnabled&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;parameter to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;false&lt;/CODE&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Network isolation architecture and isolation modes&lt;/H2&gt;
&lt;A id="user-content-network-isolation-architecture-and-isolation-modes" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#network-isolation-architecture-and-isolation-modes" target="_blank" rel="noopener" aria-label="Permalink: Network isolation architecture and isolation modes"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;When you enable managed virtual network isolation, a managed virtual network is created for the hub workspace. Any managed compute resources you create for the hub, for example, the virtual machines behind a managed online endpoint deployment, will automatically use this managed virtual network. The managed virtual network can also utilize&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-overview" target="_blank" rel="nofollow noopener"&gt;Azure Private Endpoints&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;for Azure resources that your hub depends on, such as Azure Storage, Azure Key Vault, and Azure Container Registry.&amp;nbsp;There are three different configuration modes for outbound traffic from the managed virtual network:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;Outbound mode&lt;/TH&gt;
&lt;TH&gt;Description&lt;/TH&gt;
&lt;TH&gt;Scenarios&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;Allow internet outbound&lt;/TD&gt;
&lt;TD&gt;Allow all internet outbound traffic from the managed virtual network.&lt;/TD&gt;
&lt;TD&gt;You want unrestricted access to machine learning resources on the internet, such as Python packages or pretrained models.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Allow only approved outbound&lt;/TD&gt;
&lt;TD&gt;Outbound traffic is allowed by specifying service tags.&lt;/TD&gt;
&lt;TD&gt;You want to minimize the risk of data exfiltration, but you need to prepare all required machine learning artifacts in your private environment.&lt;BR /&gt;You want to configure outbound access to an approved list of services, service tags, or FQDNs.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Disabled&lt;/TD&gt;
&lt;TD&gt;Inbound and outbound traffic isn't restricted.&lt;/TD&gt;
&lt;TD&gt;You want public inbound and outbound from the hub.&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Bicep&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;templates in the companion sample demonstrate how to deploy an Azure AI Studio environment with the hub workspace's managed network isolation mode configured to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Allow Internet Outbound&lt;/CODE&gt;.&lt;/P&gt;
&lt;P&gt;The&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-overview" target="_blank" rel="nofollow noopener"&gt;Azure Private Endpoints&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/templates/microsoft.network/privatednszones" target="_blank" rel="nofollow noopener"&gt;Private DNS Zones&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the hub workspace managed virtual network are automatically created for you, while the Bicep templates create the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-overview" target="_blank" rel="nofollow noopener"&gt;Azure Private Endpoints&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and corresponding&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/templates/microsoft.network/privatednszones" target="_blank" rel="nofollow noopener"&gt;Private DNS Zones&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the client virtual network.&lt;/P&gt;
&lt;DIV class="markdown-alert markdown-alert-note" dir="auto"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Managed Virtual Network&lt;/H2&gt;
&lt;A id="user-content-managed-virtual-network" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#managed-virtual-network" target="_blank" rel="noopener" aria-label="Permalink: Managed Virtual Network"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;When you provision the hub workspace of your Azure AI Studio with the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-studio/how-to/configure-managed-network?tabs=portal#configure-a-managed-virtual-network-to-allow-internet-outbound" target="_blank" rel="nofollow noopener"&gt;Allow Internet Outbound&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;isolation mode, the managed virtual network and the Azure Private Endpoints to the dependent resources will not be created if public network access is enabled on the dependent Azure Key Vault, Azure Container Registry, and Azure Storage Account resources.&lt;BR /&gt;&lt;BR /&gt;The creation of the managed virtual network is deferred until a compute resource is created or provisioning is manually started. When allowing automatic creation, it can take around 30 minutes to create the first compute resource, as the network is provisioned at the same time. For more information, see&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-managed-network?view=azureml-api-2#manually-provision-a-managed-vnet" target="_blank" rel="nofollow noopener"&gt;Manually provision workspace managed VNet&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;If you initially create the dependent Azure Key Vault, Azure Container Registry, and Azure Storage Account resources with public network access enabled and then disable it later, the managed virtual network will not be provisioned automatically, and the private endpoints to the dependent resources will not be created.&lt;/P&gt;
&lt;P&gt;In this case, if you want to create the private endpoints to the dependent resources, you need to reprovision the hub managed virtual network in one of the following ways:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;Redeploy the hub workspace using Bicep or Terraform templates. If the isolation mode is set to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-studio/how-to/configure-managed-network?tabs=portal#configure-a-managed-virtual-network-to-allow-internet-outbound" target="_blank" rel="nofollow noopener"&gt;Allow Internet Outbound&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and the dependent resources referenced by the hub workspace have public network access disabled, this operation will trigger the creation of the managed virtual network, if it does not already exist, and the private endpoints to the dependent resources.&lt;/LI&gt;
&lt;LI&gt;Execute the following Azure CLI command&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/ml/workspace?view=azure-cli-latest#az-ml-workspace-provision-network" target="_blank" rel="nofollow noopener"&gt;az ml workspace provision-network&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to reprovision the managed virtual network. The private endpoints will be created with the managed virtual network if the public network access of the dependent resources is disabled.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;DIV class="zeroclipboard-container"&gt;&lt;LI-CODE lang="bash"&gt;az ml workspace provision-network --name my_hub_workspace_name --resource-group&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;DIV class="zeroclipboard-container"&gt;&lt;SPAN&gt;At this time, it's not possible to directly access the managed virtual network via the Azure CLI or the Azure Portal. You can see the managed virtual network indirectly by looking at the private endpoints, if any, under the hub workspace. You can proceed as follows:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL dir="auto"&gt;
&lt;LI&gt;Go to the Azure Portal and select your Azure AI hub.&lt;/LI&gt;
&lt;LI&gt;Click on&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Settings&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and then&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Networking&lt;/CODE&gt;.&lt;/LI&gt;
&lt;LI&gt;Open the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Workspace managed outbound access&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;tab.&lt;/LI&gt;
&lt;LI&gt;Expand the section titled&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Required outbound rules&lt;/CODE&gt;.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here, you will find the private endpoints that are connected to the resources within the hub managed virtual network. Ensure that these private endpoints are active.&lt;/P&gt;
&lt;DIV id="tinyMceEditorpaolosalvatori_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;You can also see the private endpoints hosted by the managed virtual network of your hub workspace inside the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Networking&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;settings of individual dependent resources, for example Key Vault:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL dir="auto"&gt;
&lt;LI&gt;Go to the Azure Portal and select your Azure Key Vault.&lt;/LI&gt;
&lt;LI&gt;Click on&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Settings&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and then&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Networking&lt;/CODE&gt;.&lt;/LI&gt;
&lt;LI&gt;Open the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Private endpoint connections&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;tab.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;BR /&gt;Here, you will find the private endpoint created by the Bicep templates in the client virtual network, along with the private endpoint created in the hub's managed virtual network.&lt;/P&gt;
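&lt;P&gt;As an alternative to the Azure Portal, you can enumerate the private endpoint connections of the Key Vault from the command line. The following command is a sketch with placeholder names; the &lt;CODE&gt;--type&lt;/CODE&gt; argument identifies the Key Vault resource type:&lt;/P&gt;
&lt;DIV class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;DIV class="zeroclipboard-container"&gt;&lt;LI-CODE lang="bash"&gt;az network private-endpoint-connection list \
    --resource-group &amp;lt;resource-group-name&amp;gt; \
    --name &amp;lt;key-vault-name&amp;gt; \
    --type Microsoft.KeyVault/vaults&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;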
&lt;DIV id="tinyMceEditorpaolosalvatori_4" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;Also note that when you create a hub workspace with the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-studio/how-to/configure-managed-network?tabs=portal#configure-a-managed-virtual-network-to-allow-internet-outbound" target="_blank" rel="nofollow noopener"&gt;Allow Internet Outbound&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;isolation mode, the managed virtual network is not created immediately, in order to save costs. Provisioning must be triggered manually via the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/ml/workspace?view=azure-cli-latest#az-ml-workspace-provision-network" target="_blank" rel="nofollow noopener"&gt;az ml workspace provision-network&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;command, or it is triggered automatically when you create a compute resource or a private endpoint to a dependent resource.&lt;/P&gt;
&lt;P&gt;At this time, the creation of an online endpoint does not automatically trigger the creation of a managed virtual network. If you try to create an online deployment in a workspace that has a managed virtual network enabled but not yet provisioned, the operation fails. Provision the workspace managed virtual network manually before creating online deployments. For more information, see Network isolation with managed online endpoints and Secure your managed online endpoints with network isolation.&lt;/P&gt;
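&lt;P&gt;Before creating an online deployment, you can verify whether the managed virtual network has been provisioned by inspecting the managed network section of the workspace. The following command is a sketch with placeholder names; the exact shape of the returned object may vary across CLI versions:&lt;/P&gt;
&lt;DIV class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;DIV class="zeroclipboard-container"&gt;&lt;LI-CODE lang="bash"&gt;az ml workspace show \
    --name &amp;lt;hub-workspace-name&amp;gt; \
    --resource-group &amp;lt;resource-group-name&amp;gt; \
    --query managed_network&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;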
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Limitations&lt;/H2&gt;
&lt;A id="user-content-limitations" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#limitations" target="_blank" rel="noopener" aria-label="Permalink: Limitations"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;The current limitations of managed virtual network are:&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;Azure AI Studio currently doesn't support bringing your own virtual network; it only supports managed virtual network isolation.&lt;/LI&gt;
&lt;LI&gt;Once you enable managed virtual network isolation of your Azure AI hub, you can't disable it.&lt;/LI&gt;
&lt;LI&gt;Managed virtual network uses private endpoint connections to access your private resources. You can't have a private endpoint and a service endpoint at the same time for your Azure resources, such as a storage account. We recommend using private endpoints in all scenarios.&lt;/LI&gt;
&lt;LI&gt;The managed virtual network is deleted when the Azure AI hub is deleted.&lt;/LI&gt;
&lt;LI&gt;Data exfiltration protection is automatically enabled for the only approved outbound mode. If you add other outbound rules, such as to FQDNs, Microsoft can't guarantee that you're protected from data exfiltration to those outbound destinations.&lt;/LI&gt;
&lt;LI&gt;Using FQDN outbound rules increases the cost of the managed virtual network because FQDN rules use Azure Firewall. For more information, see&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-studio/how-to/configure-managed-network?tabs=portal#pricing" target="_blank" rel="nofollow noopener"&gt;Pricing&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;FQDN outbound rules only support ports 80 and 443.&lt;/LI&gt;
&lt;LI&gt;When using a compute instance with a managed network, use the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;az ml compute connect-ssh&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;command to connect to the compute using SSH.&lt;/LI&gt;
&lt;/UL&gt;
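&lt;P&gt;For example, because of the last limitation above, a plain SSH connection to a compute instance in a managed network does not work; you connect through the Azure Machine Learning CLI instead. The following command is a sketch with placeholder names:&lt;/P&gt;
&lt;DIV class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;DIV class="zeroclipboard-container"&gt;&lt;LI-CODE lang="bash"&gt;az ml compute connect-ssh \
    --name &amp;lt;compute-instance-name&amp;gt; \
    --workspace-name &amp;lt;hub-workspace-name&amp;gt; \
    --resource-group &amp;lt;resource-group-name&amp;gt; \
    --private-key-file-path &amp;lt;path-to-private-key&amp;gt;&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;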
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Pricing&lt;/H2&gt;
&lt;A id="user-content-pricing" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#pricing" target="_blank" rel="noopener" aria-label="Permalink: Pricing"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;According to the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-studio/how-to/configure-managed-network?tabs=portal#pricing" target="_blank" rel="nofollow noopener"&gt;documentation&lt;/A&gt;, the hub managed virtual network feature is free. However, you will be charged for the following resources used by the managed virtual network:&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;Azure Private Link - Private endpoints used to secure communications between the managed virtual network and Azure resources rely on Azure Private Link. For more information on pricing, see&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/pricing/details/private-link/" target="_blank" rel="nofollow noopener"&gt;Azure Private Link pricing&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;FQDN outbound rules - FQDN outbound rules are implemented using Azure Firewall. If you use outbound FQDN rules, charges for Azure Firewall are included in your billing. The Azure Firewall SKU is Standard, and the firewall is provisioned per hub.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="markdown-alert markdown-alert-note" dir="auto"&gt;
&lt;P class="markdown-alert-title"&gt;&lt;STRONG&gt;NOTE&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The firewall isn't created until you add an outbound FQDN rule. If you don't use FQDN rules, you will not be charged for Azure Firewall. For more information on pricing, see&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/pricing/details/azure-firewall/" target="_blank" rel="nofollow noopener"&gt;Azure Firewall pricing&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
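&lt;P&gt;To illustrate, the Azure Firewall, and its associated charges, only appear after you add an FQDN outbound rule such as the following. This command is a sketch with placeholder names, and &lt;CODE&gt;pypi.org&lt;/CODE&gt; is just an example destination:&lt;/P&gt;
&lt;DIV class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;DIV class="zeroclipboard-container"&gt;&lt;LI-CODE lang="bash"&gt;az ml workspace outbound-rule set \
    --workspace-name &amp;lt;hub-workspace-name&amp;gt; \
    --resource-group &amp;lt;resource-group-name&amp;gt; \
    --rule allow-pypi \
    --type fqdn \
    --destination pypi.org&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;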
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Secure Access to the Jumpbox Virtual Machine&lt;/H2&gt;
&lt;A id="user-content-secure-access-to-the-jumpbox-virtual-machine" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#secure-access-to-the-jumpbox-virtual-machine" target="_blank" rel="noopener" aria-label="Permalink: Secure Access to the Jumpbox Virtual Machine"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;The jumpbox virtual machine is deployed with the Windows 11 operating system and the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Microsoft.Azure.ActiveDirectory&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;VM extension, a specialized extension for integrating Azure virtual machines (VMs) with Microsoft Entra ID. This integration provides several key benefits, particularly in enhancing security and simplifying access management. Here's an overview of what the Microsoft.Azure.ActiveDirectory VM extension offers:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;Enables users to sign in to a Windows or Linux virtual machine using their Microsoft Entra ID credentials.&lt;/LI&gt;
&lt;LI&gt;Facilitates single sign-on (SSO) experiences, reducing the need for managing separate local VM accounts.&lt;/LI&gt;
&lt;LI&gt;Supports multi-factor authentication, increasing security by requiring additional verification steps during login.&lt;/LI&gt;
&lt;LI&gt;Integrates with Azure RBAC, allowing administrators to assign specific roles to users, thereby controlling the level of access and permissions on the virtual machine.&lt;/LI&gt;
&lt;LI&gt;Allows administrators to apply conditional access policies to the VM, enhancing security by enforcing controls such as trusted device requirements, location-based access, and more.&lt;/LI&gt;
&lt;LI&gt;Eliminates the need to manage local administrator accounts, simplifying VM management and reducing overhead.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more information, see&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/identity/devices/howto-vm-sign-in-azure-ad-windows" target="_blank" rel="nofollow noopener"&gt;Sign in to a Windows virtual machine in Azure by using Microsoft Entra ID including passwordless&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Make sure to enforce multi-factor authentication on your user account in your Microsoft Entra ID tenant.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Then, specify at least one authentication method in addition to the password for the user account, for example a phone number.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To log in to the jumpbox virtual machine using a Microsoft Entra ID tenant user, you need to assign one of the following Azure roles to determine who can access the VM. To assign these roles, you must have the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#virtual-machine-data-access-administrator-preview" target="_blank" rel="nofollow noopener"&gt;Virtual Machine Data Access Administrator&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;role, or any role that includes the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Microsoft.Authorization/roleAssignments/write&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;action, such as the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#role-based-access-control-administrator-preview" target="_blank" rel="nofollow noopener"&gt;Role Based Access Control Administrator&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;role. If you choose a role other than the Virtual Machine Data Access Administrator, it is recommended to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/role-based-access-control/delegate-role-assignments-overview" target="_blank" rel="nofollow noopener"&gt;add a condition to limit the permission to create role assignments&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles/compute#virtual-machine-administrator-login" target="_blank" rel="nofollow noopener"&gt;Virtual Machine Administrator Login&lt;/A&gt;: Users who have this role assigned can sign in to an Azure virtual machine with administrator privileges.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles/compute#virtual-machine-user-login" target="_blank" rel="nofollow noopener"&gt;Virtual Machine User Login&lt;/A&gt;: Users who have this role assigned can sign in to an Azure virtual machine with regular user privileges.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To allow a user to sign in to the jumpbox virtual machine over RDP, you must assign the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Virtual Machine Administrator Login&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;or&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Virtual Machine User Login&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;role to the user at the subscription, resource group, or virtual machine level. The&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;virtualMachine.bicep&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;module assigns the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Virtual Machine Administrator Login&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to the user identified by the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;userObjectId&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;parameter.&lt;/P&gt;
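&lt;P&gt;If you need to grant access to an additional user outside of the Bicep deployment, you can create the role assignment with the Azure CLI. The following command is a sketch with placeholder values:&lt;/P&gt;
&lt;DIV class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;DIV class="zeroclipboard-container"&gt;&lt;LI-CODE lang="bash"&gt;az role assignment create \
    --role "Virtual Machine Administrator Login" \
    --assignee &amp;lt;user-principal-name-or-object-id&amp;gt; \
    --scope &amp;lt;virtual-machine-resource-id&amp;gt;&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;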
&lt;P&gt;To log in to the jumpbox virtual machine via Azure Bastion Host using a Microsoft Entra ID tenant user with multi-factor authentication, you can use the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/cli/azure/network/bastion?view=azure-cli-latest#az-network-bastion-rdp" target="_blank" rel="nofollow noopener"&gt;az network bastion rdp&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;command as follows:&lt;/P&gt;
&lt;DIV class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;DIV class="zeroclipboard-container"&gt;&lt;BR /&gt;&lt;LI-CODE lang="bash"&gt;az network bastion rdp \
    --name &amp;lt;bastion-host-name&amp;gt; \
    --resource-group &amp;lt;resource-group-name&amp;gt; \
    --target-resource-id &amp;lt;virtual-machine-resource-id&amp;gt; \
    --auth-type AAD​&lt;/LI-CODE&gt;&lt;BR /&gt;&lt;SPAN&gt;After logging in to the virtual machine, if you open the Edge browser and navigate to the Azure Portal or Azure AI Studio, the browser profile will automatically be configured to the tenant user account used for the VM login.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Bicep Parameters&lt;/H2&gt;
&lt;A id="user-content-bicep-parameters" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#bicep-parameters" target="_blank" rel="noopener" aria-label="Permalink: Bicep Parameters"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;Specify a value for the required parameters in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/main.bicepparam" target="_blank" rel="noopener"&gt;main.bicepparam&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;parameters file before deploying the Bicep modules. The following table lists the name, type, and description of each parameter:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;Name&lt;/TH&gt;
&lt;TH&gt;Type&lt;/TH&gt;
&lt;TH&gt;Description&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;prefix&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name prefix for all the Azure resources.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;suffix&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name suffix for all the Azure resources.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;location&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the location for all the Azure resources.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;hubName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the Azure AI Hub workspace.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;hubFriendlyName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the friendly name of the Azure AI Hub workspace.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;hubDescription&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the description for the Azure AI Hub workspace displayed in Azure AI Studio.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;hubIsolationMode&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the isolation mode for the managed network of the Azure AI Hub workspace.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;hubPublicNetworkAccess&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the public network access for the Azure AI Hub workspace.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;connectionAuthType&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the authentication method for the OpenAI Service connection.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;systemDatastoresAuthMode&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Determines whether to use credentials for the system datastores of the workspace, workspaceblobstore and workspacefilestore.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;projectName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name for the Azure AI Studio Hub Project workspace.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;projectFriendlyName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the friendly name for the Azure AI Studio Hub Project workspace.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;projectPublicNetworkAccess&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the public network access for the Azure AI Project workspace.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;logAnalyticsName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the Azure Log Analytics resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;logAnalyticsSku&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the service tier of the workspace: Free, Standalone, PerNode, PerGB2018.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;logAnalyticsRetentionInDays&lt;/TD&gt;
&lt;TD&gt;int&lt;/TD&gt;
&lt;TD&gt;Specifies the workspace data retention in days.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;applicationInsightsName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the Azure Application Insights resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;aiServicesName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the Azure AI Services resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;aiServicesSku&lt;/TD&gt;
&lt;TD&gt;object&lt;/TD&gt;
&lt;TD&gt;Specifies the resource model definition representing SKU.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;aiServicesIdentity&lt;/TD&gt;
&lt;TD&gt;object&lt;/TD&gt;
&lt;TD&gt;Specifies the identity of the Azure AI Services resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;aiServicesCustomSubDomainName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies an optional subdomain name used for token-based authentication.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;aiServicesDisableLocalAuth&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether to disable the local authentication via API key.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;aiServicesPublicNetworkAccess&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies whether or not public endpoint access is allowed for this account.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;openAiDeployments&lt;/TD&gt;
&lt;TD&gt;array&lt;/TD&gt;
&lt;TD&gt;Specifies the OpenAI deployments to create.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;keyVaultName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the Azure Key Vault resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;keyVaultNetworkAclsDefaultAction&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the default action of allow or deny when no other rules match for the Azure Key Vault resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;keyVaultEnabledForDeployment&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether the Azure Key Vault resource is enabled for deployments.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;keyVaultEnabledForDiskEncryption&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether the Azure Key Vault resource is enabled for disk encryption.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;keyVaultEnabledForTemplateDeployment&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether the Azure Key Vault resource is enabled for template deployment.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;keyVaultEnableSoftDelete&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether soft delete is enabled for this Azure Key Vault resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;keyVaultEnablePurgeProtection&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether purge protection is enabled for this Azure Key Vault resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;keyVaultEnableRbacAuthorization&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether to enable the RBAC authorization for the Azure Key Vault resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;keyVaultSoftDeleteRetentionInDays&lt;/TD&gt;
&lt;TD&gt;int&lt;/TD&gt;
&lt;TD&gt;Specifies the soft delete retention in days.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;acrEnabled&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether to create the Azure Container Registry.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;acrName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the Azure Container Registry resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;acrAdminUserEnabled&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether to enable the admin user, which has push/pull permission to the registry.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;acrPublicNetworkAccess&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies whether to allow public network access. Defaults to Enabled.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;acrSku&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the tier of your Azure Container Registry.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;acrAnonymousPullEnabled&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether or not registry-wide pull is enabled from unauthenticated clients.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;acrDataEndpointEnabled&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether or not a single data endpoint is enabled per region for serving data.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;acrNetworkRuleSet&lt;/TD&gt;
&lt;TD&gt;object&lt;/TD&gt;
&lt;TD&gt;Specifies the network rule set for the container registry.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;acrNetworkRuleBypassOptions&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies whether to allow trusted Azure services to access a network-restricted registry.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;acrZoneRedundancy&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies whether or not zone redundancy is enabled for this container registry.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;storageAccountName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the Azure Storage Account resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;storageAccountAccessTier&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the access tier of the Azure Storage Account resource. The default value is Hot.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;storageAccountAllowBlobPublicAccess&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether the Azure Storage Account resource allows public access to blobs. The default value is false.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;storageAccountAllowSharedKeyAccess&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether the Azure Storage Account resource allows shared key access. The default value is true.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;storageAccountAllowCrossTenantReplication&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether the Azure Storage Account resource allows cross-tenant replication. The default value is false.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;storageAccountMinimumTlsVersion&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the minimum TLS version to be permitted on requests to the Azure Storage account. The default value is TLS1_2.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;storageAccountANetworkAclsDefaultAction&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;The default action of allow or deny when no other rules match.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;storageAccountSupportsHttpsTrafficOnly&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether the Azure Storage Account resource should only support HTTPS traffic.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;virtualNetworkResourceGroupName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the resource group hosting the virtual network and private endpoints.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;virtualNetworkName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the virtual network.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;virtualNetworkAddressPrefixes&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the address prefixes of the virtual network.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;vmSubnetName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the subnet which contains the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;vmSubnetAddressPrefix&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the address prefix of the subnet which contains the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;vmSubnetNsgName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the network security group associated with the subnet hosting the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;bastionSubnetAddressPrefix&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the Bastion subnet IP prefix. This prefix must be within the virtual network IP prefix address space.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;bastionSubnetNsgName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the network security group associated with the subnet hosting Azure Bastion.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;bastionHostEnabled&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether Azure Bastion should be created.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;bastionHostName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the Azure Bastion resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;bastionHostDisableCopyPaste&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Enable/Disable Copy/Paste feature of the Bastion Host resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;bastionHostEnableFileCopy&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Enable/Disable File Copy feature of the Bastion Host resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;bastionHostEnableIpConnect&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Enable/Disable IP Connect feature of the Bastion Host resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;bastionHostEnableShareableLink&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Enable/Disable Shareable Link of the Bastion Host resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;bastionHostEnableTunneling&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Enable/Disable Tunneling feature of the Bastion Host resource.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;bastionPublicIpAddressName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the Azure Public IP Address used by the Azure Bastion Host.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;bastionHostSkuName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the Azure Bastion Host SKU.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;natGatewayName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the Azure NAT Gateway.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;natGatewayZones&lt;/TD&gt;
&lt;TD&gt;array&lt;/TD&gt;
&lt;TD&gt;Specifies a list of availability zones denoting the zone in which the NAT Gateway should be deployed.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;natGatewayPublicIps&lt;/TD&gt;
&lt;TD&gt;int&lt;/TD&gt;
&lt;TD&gt;Specifies the number of Public IPs to create for the Azure NAT Gateway.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;natGatewayIdleTimeoutMins&lt;/TD&gt;
&lt;TD&gt;int&lt;/TD&gt;
&lt;TD&gt;Specifies the idle timeout in minutes for the Azure NAT Gateway.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;blobStorageAccountPrivateEndpointName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the private link to the blob storage account.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;fileStorageAccountPrivateEndpointName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the private link to the file storage account.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;keyVaultPrivateEndpointName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the private link to the Key Vault.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;acrPrivateEndpointName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the private link to the Azure Container Registry.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;hubWorkspacePrivateEndpointName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the private link to the Azure Hub Workspace.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;vmName&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;vmSize&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the size of the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;imagePublisher&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the image publisher of the disk image used to create the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;imageOffer&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the offer of the platform image or marketplace image used to create the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;imageSku&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the image SKU of the platform image or marketplace image used to create the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;authenticationType&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the type of authentication when accessing the virtual machine. SSH key is recommended.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;vmAdminUsername&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the name of the administrator account of the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;vmAdminPasswordOrKey&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the SSH Key or password for the virtual machine. SSH key is recommended.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;diskStorageAccountType&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the storage account type for OS and data disk.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;numDataDisks&lt;/TD&gt;
&lt;TD&gt;int&lt;/TD&gt;
&lt;TD&gt;Specifies the number of data disks of the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;osDiskSize&lt;/TD&gt;
&lt;TD&gt;int&lt;/TD&gt;
&lt;TD&gt;Specifies the size in GB of the OS disk of the VM.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;dataDiskSize&lt;/TD&gt;
&lt;TD&gt;int&lt;/TD&gt;
&lt;TD&gt;Specifies the size in GB of the data disk of the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;dataDiskCaching&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the caching requirements for the data disks.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;enableMicrosoftEntraIdAuth&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether to enable Microsoft Entra ID authentication on the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;enableAcceleratedNetworking&lt;/TD&gt;
&lt;TD&gt;bool&lt;/TD&gt;
&lt;TD&gt;Specifies whether to enable accelerated networking on the virtual machine.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;tags&lt;/TD&gt;
&lt;TD&gt;object&lt;/TD&gt;
&lt;TD&gt;Specifies the resource tags for all the resources.&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;userObjectId&lt;/TD&gt;
&lt;TD&gt;string&lt;/TD&gt;
&lt;TD&gt;Specifies the object ID of a Microsoft Entra ID user.&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We suggest reading sensitive configuration data such as passwords or SSH keys from a pre-existing Azure Key Vault resource. For more information, see&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/parameter-files?tabs=Bicep" target="_blank" rel="nofollow noopener"&gt;Create parameters files for Bicep deployment&lt;/A&gt;.&lt;/P&gt;
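As a hedged sketch of that approach (the vault, secret, and resource group names below are placeholders, not values from the sample), the Azure CLI can read the secret at deployment time and pass it as an inline parameter:

```shell
# Placeholder names -- substitute your own pre-existing Key Vault and secret.
keyVaultName="my-keyvault"
secretName="vm-ssh-public-key"

# Read the secret value from Key Vault without writing it to a parameters file.
vmAdminPasswordOrKey=$(az keyvault secret show \
  --vault-name "$keyVaultName" \
  --name "$secretName" \
  --query value \
  --output tsv)

# Supply the secret as an inline parameter to the Bicep deployment.
az deployment group create \
  --resource-group my-resource-group \
  --template-file main.bicep \
  --parameters vmAdminPasswordOrKey="$vmAdminPasswordOrKey"
```

This keeps the secret out of source control: it only ever exists in Key Vault and in the deployment request.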
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Getting Started&lt;/H2&gt;
&lt;A id="user-content-getting-started" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#getting-started" target="_blank" rel="noopener" aria-label="Permalink: Getting Started"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;SPAN&gt;To set up the infrastructure for the secure Azure AI Studio, you will need to install the necessary prerequisites and follow the steps below.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;Prerequisites&lt;/H3&gt;
&lt;A id="user-content-prerequisites" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#prerequisites" target="_blank" rel="noopener" aria-label="Permalink: Prerequisites"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;Before you begin, ensure you have the following:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;An active&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/free/" target="_blank" rel="nofollow noopener"&gt;Azure subscription&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Azure CLI installed on your local machine. Follow the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli" target="_blank" rel="nofollow noopener"&gt;installation guide&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;if needed.&lt;/LI&gt;
&lt;LI&gt;Appropriate permissions to create resources in your Azure account&lt;/LI&gt;
&lt;LI&gt;Basic knowledge of using the command line interface&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;Step 1: Clone the Repository&lt;/H3&gt;
&lt;A id="user-content-step-1-clone-the-repository" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#step-1-clone-the-repository" target="_blank" rel="noopener" aria-label="Permalink: Step 1: Clone the Repository"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;Start by cloning the repository to your local machine:&lt;/P&gt;
&lt;DIV class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="zeroclipboard-container"&gt;&lt;LI-CODE lang="bash"&gt;git clone &amp;lt;repository_url&amp;gt;
cd bicep&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;Step 2: Configure Parameters&lt;/H3&gt;
&lt;A id="user-content-step-2-configure-parameters" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#step-2-configure-parameters" target="_blank" rel="noopener" aria-label="Permalink: Step 2: Configure Parameters"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;Edit the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/main.bicepparam" target="_blank" rel="noopener"&gt;main.bicepparam&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;parameters file to configure values for the parameters required by the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/file" target="_blank" rel="nofollow noopener"&gt;Bicep&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;templates. Make sure you set appropriate values for resource group name, location, and other necessary parameters in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/deploy.sh" target="_blank" rel="noopener"&gt;deploy.sh&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Bash script.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;Step 3: Deploy Resources&lt;/H3&gt;
&lt;A id="user-content-step-3-deploy-resources" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#step-3-deploy-resources" target="_blank" rel="noopener" aria-label="Permalink: Step 3: Deploy Resources"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;Use the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/deploy.sh" target="_blank" rel="noopener"&gt;deploy.sh&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Bash script to deploy the Azure resources via Bicep. This script will provision all the necessary resources as defined in the Bicep templates.&lt;/P&gt;
&lt;P&gt;Run the following command to deploy the resources:&lt;/P&gt;
&lt;DIV class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto"&gt;
&lt;DIV class="zeroclipboard-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="zeroclipboard-container"&gt;&lt;LI-CODE lang="bash"&gt;./deploy.sh --resourceGroupName &amp;lt;resource-group-name&amp;gt; --location &amp;lt;location&amp;gt; --virtualNetworkResourceGroupName &amp;lt;client-virtual-network-resource-group-name&amp;gt;&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H3 class="heading-element" dir="auto" tabindex="-1"&gt;How to Test&lt;/H3&gt;
&lt;A id="user-content-how-to-test" class="anchor" href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/bicep/managedvnet/README.md#how-to-test" target="_blank" rel="noopener" aria-label="Permalink: How to Test"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;After deploying the resources, you can verify the deployment by checking the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://portal.azure.com/" target="_blank" rel="nofollow noopener"&gt;Azure Portal&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;or&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://ai.azure.com/build" target="_blank" rel="nofollow noopener"&gt;Azure AI Studio&lt;/A&gt;. Ensure all the resources are created and configured correctly.&lt;/P&gt;
&lt;P&gt;By following these steps, you will have Azure AI Studio set up and ready for your projects using Bicep. If you encounter any issues, refer to the additional resources or seek help from the Azure support team.&lt;/P&gt;
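As a hedged command-line check (the resource group name below is a placeholder), the Azure CLI can also confirm what was provisioned:

```shell
# List every resource created in the target resource group.
az resource list --resource-group my-resource-group --output table

# Inspect the provisioning state of the deployments in the group.
az deployment group list \
  --resource-group my-resource-group \
  --query "[].{name:name, state:properties.provisioningState}" \
  --output table
```

A `Succeeded` provisioning state for each deployment indicates the Bicep templates completed without errors.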
&lt;P&gt;You can also follow these&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/azure-ai-studio-secure-bicep/blob/main/promptflow/README.md" target="_blank" rel="noopener"&gt;instructions&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to deploy, expose, and call the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-basic" target="_blank" rel="noopener"&gt;Basic Chat&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;prompt flow using Bash scripts and Azure CLI.&lt;/P&gt;
&lt;DIV id="tinyMceEditorpaolosalvatori_7" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;</description>
      <pubDate>Fri, 20 Sep 2024 14:25:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/deploy-secure-azure-ai-studio-with-a-managed-virtual-network/m-p/4251073#M221</guid>
      <dc:creator>paolosalvatori</dc:creator>
      <dc:date>2024-09-20T14:25:42Z</dc:date>
    </item>
    <item>
      <title>Maximizing Azure Resource Insights: Introducing the Azure Inventory Gateway</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/maximizing-azure-resource-insights-introducing-the-azure/m-p/4242019#M220</link>
      <description>&lt;H2&gt;Introduction&lt;/H2&gt;
&lt;P&gt;Azure Resource Graph (ARG) is an efficient tool for querying Azure resources, but it doesn’t always provide the full scope of information you need about them, especially when it comes to in-depth details.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is where the &lt;STRONG&gt;Resource Inventory Gateway&lt;/STRONG&gt; comes into play. ARG’s limitations, such as not retrieving all attributes or configurations for a resource, force users to make additional API calls to Azure Resource Manager (ARM). Instead of manually querying multiple resources, this gateway automates and consolidates those calls, returning a complete data set.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;TLDR; &lt;A title="GitHub Repository" href="https://github.com/Azure-Samples/resource-inventory-gateway" target="_blank" rel="noopener"&gt;GitHub Repository&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Why did we build the Azure Inventory Gateway?&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;ARG (Azure Resource Graph) missing attributes:&lt;/STRONG&gt;
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;Azure Resource Graph is a powerful service that allows you to explore and query your Azure resources across multiple subscriptions quickly and efficiently, providing detailed insights without needing individual API calls. However, it does not include all ARM attributes.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Resource Manager (ARM) REST API capability&lt;/STRONG&gt;
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;ARM is designed to handle queries for a single resource at a time.&lt;/LI&gt;
&lt;LI&gt;To retrieve data from multiple resources, whether across a single subscription or multiple subscriptions, you need to execute multiple API calls.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The Azure Inventory Gateway simplifies this by handling the aggregation for you.&lt;/P&gt;
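To illustrate the limitation, here is a hedged sketch of the manual approach without the gateway: one ARM REST call per subscription, with the results left for you to merge. The subscription IDs are placeholders, and the api-version shown is an assumption that may differ for your resource provider.

```shell
# Without the gateway: query Cognitive Services accounts subscription by subscription.
subscriptions=(
  "00000000-0000-0000-0000-000000000001"
  "00000000-0000-0000-0000-000000000002"
)

for sub in "${subscriptions[@]}"; do
  # az rest issues a raw call against the Azure Resource Manager REST API.
  az rest --method get \
    --url "https://management.azure.com/subscriptions/${sub}/providers/Microsoft.CognitiveServices/accounts?api-version=2023-05-01"
done
```

Each iteration returns an independent JSON payload; consolidating them into one data set is exactly the work the gateway automates.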
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What is Azure Inventory Gateway?&lt;/H2&gt;
&lt;P&gt;Azure Inventory Gateway is a .NET-based gateway that simplifies the management and aggregation of REST API calls.&lt;/P&gt;
&lt;P&gt;This solution provides an endpoint where you can send an API call along with variables. It processes the request and returns a unified response by aggregating the results of the underlying API calls.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It currently includes two interface gateways:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;ARM Gateway&lt;/STRONG&gt; – Interact with Azure Resource Manager (ARM) REST API.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost Gateway&lt;/STRONG&gt; – Interact with Azure Cost Management REST API.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;H2&gt;Azure Workbook&lt;/H2&gt;
&lt;P&gt;&lt;A title="Workbooks" href="https://learn.microsoft.com/en-us/azure/azure-monitor/visualize/workbooks-overview" target="_blank" rel="noopener"&gt;Workbooks&lt;/A&gt; provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple &lt;A title="data sources" href="https://learn.microsoft.com/en-us/azure/azure-monitor/visualize/workbooks-data-sources" target="_blank" rel="noopener"&gt;data sources&lt;/A&gt; from across Azure and combine them into unified interactive experiences.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;You can use the Azure Inventory Gateway in Azure Workbooks.&lt;/LI&gt;
&lt;LI&gt;To make a call to the Azure Inventory Gateway, you need to use the &lt;A style="font-family: inherit; background-color: #ffffff;" title="custom endpoint data source" href="https://learn.microsoft.com/en-us/azure/azure-monitor/visualize/workbooks-data-sources#custom-endpoint" target="_blank" rel="noopener"&gt;custom endpoint data source&lt;/A&gt;&lt;SPAN&gt; and provide the appropriate API call along with the necessary variables.&lt;/SPAN&gt;
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;You can find all details &lt;A style="font-family: inherit; background-color: #ffffff;" title="here" href="https://github.com/Azure-Samples/resource-inventory-gateway?tab=readme-ov-file#how-it-works" target="_blank" rel="noopener"&gt;here&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Examples&lt;/H3&gt;
&lt;H4&gt;Azure OpenAI Model Deployments&lt;/H4&gt;
&lt;P&gt;List all model deployments across all my Azure OpenAI resources&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This provides a centralized view of your model deployments across subscriptions and resources.&lt;/P&gt;
&lt;P&gt;If you need to locate where a specific model is deployed, you can now easily search across all your instances.&lt;/P&gt;
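For comparison, a hedged sketch of collecting the same information by hand with the Azure CLI: one call per Azure OpenAI account, repeated for every subscription you care about (the gateway consolidates all of this into a single request).

```shell
# Find every Azure OpenAI account in the current subscription...
accounts=$(az cognitiveservices account list \
  --query "[?kind=='OpenAI'].[name, resourceGroup]" \
  --output tsv)

# ...then list model deployments one account at a time.
while read -r name rg; do
  az cognitiveservices account deployment list \
    --name "$name" \
    --resource-group "$rg" \
    --output table
done <<< "$accounts"
```

Multiply this loop by the number of subscriptions and the appeal of a single aggregating endpoint becomes clear.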
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Azure OpenAI Models&lt;/H4&gt;
&lt;P&gt;List all supported models across all my Azure OpenAI resources&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This provides a centralized view of your supported models, versions, lifecycle (Generally Available, Preview), creation date, etc. across subscriptions and resources.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you need to locate where a specific model or version is supported, you can now easily search across all your instances.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Azure Inventory Gateway provides a streamlined approach to accessing complete resource data, eliminating the need for multiple ARM queries and delivering unified results. This tool saves time, automates the complexity of resource management, and expands the capabilities of Azure Resource Graph.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To get started, check out the &lt;A title="GitHub repository" href="https://github.com/Azure-Samples/resource-inventory-gateway" target="_blank" rel="noopener"&gt;GitHub repository&lt;/A&gt;, and integrate it into your resource management workflows today. For any queries or to contribute, feel free to connect via the repo or &lt;A title="submit feedback" href="https://github.com/Azure-Samples/resource-inventory-gateway/issues" target="_blank" rel="noopener"&gt;submit feedback&lt;/A&gt;!&lt;/P&gt;
</description>
      <pubDate>Thu, 12 Sep 2024 07:30:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/maximizing-azure-resource-insights-introducing-the-azure/m-p/4242019#M220</guid>
      <dc:creator>Dolev_Shor</dc:creator>
      <dc:date>2024-09-12T07:30:20Z</dc:date>
    </item>
    <item>
      <title>Client-Side Compute: A Greener Approach to Natural Language Data Queries</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/client-side-compute-a-greener-approach-to-natural-language-data/m-p/4221761#M219</link>
      <description>&lt;H2&gt;&lt;SPAN&gt;Introduction&lt;/SPAN&gt;&lt;/H2&gt;
&lt;DIV&gt;&lt;SPAN&gt;Using natural language to interact with data can significantly enhance our ability to work with and understand information, making data more accessible and useful for everyone. Considering the latest advances in large language models (LLMs), it seems like the obvious solution. However, while we've made strides in interacting with unstructured data using NLP and AI, structured data interaction still poses challenges. Using LLMs to convert natural language into domain-specific languages like SQL is a common and valid use case, showcasing a strong capability of these models. This blog identifies the limitations of current solutions and introduces novel, energy-efficient approaches to enhance efficiency and flexibility.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;SPAN&gt;My team focuses on ISVs and how each design decision impacts them. For example, if the ISV needs to allow "chat with data" as a solution, they must also address the challenges of hosting, monetizing, and securing these features. We present two key strategies:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL class="lia-list-style-type-disc"&gt;
&lt;LI&gt;&lt;SPAN&gt;Leveraging deterministic tools to execute the domain-specific language on the appropriate systems and&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt; Offloading compute to client devices.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;SPAN&gt;These strategies not only improve performance and scalability but also reduce server load, making them ideal for ISVs looking to provide seamless and sustainable data access to their customers.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;The Challenge: Efficiently Interacting with Structured Data&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;SPAN&gt;Structured data, typically stored in databases, structured files, and spreadsheets, is the backbone of business intelligence and analytics. However, querying and extracting insights from this data often requires knowledge of specific query languages like SQL, creating a barrier for many users. Additionally, ISVs face the challenge of anticipating the diverse ways their customers want to interact with their data. Due to increasing customer demand for natural language interfaces to simplify and intuitively access their data, ISVs are pressured to develop solutions that bridge the gap between users and the structured data they need to interact with.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;SPAN&gt;While using LLMs to convert natural language queries into domain-specific languages such as SQL is a powerful capability, it alone doesn't solve the problem. The next step is to execute these queries efficiently on the appropriate systems. Implementing such a solution must include several fundamental guardrails to ensure the generated SQL is safe to execute. Moreover, there is the additional challenge of managing the computational load. Hosting these capabilities on ISV servers can be resource-intensive and costly.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;SPAN&gt;Therefore, an effective solution must not only translate natural language into executable queries but also optimize how these queries are processed. This involves leveraging deterministic tools to execute domain-specific languages and offloading compute tasks to client devices. By doing so, ISVs can provide more efficient, scalable, and cost-effective data interaction solutions to their customers.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H2&gt;&lt;SPAN&gt;A Common Use Case&lt;/SPAN&gt;&lt;/H2&gt;
&lt;DIV&gt;&lt;SPAN&gt;An ISV collects data from various sources, some public and most from its customers (or tenants). These tenants could come from various industries such as retail, healthcare, and finance, each requiring tailored data solutions. The ISV implements a medallion pattern for data ingestion, a design pattern that organizes data into layers (bronze, silver, and gold) to ensure data quality and accessibility. In this pattern, raw data is ingested into the bronze layer, cleaned and enriched into the silver layer, and then aggregated into the gold layer for analysis. The gold tables, containing the aggregated data, are generally smaller than 20MB per tenant.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;BR /&gt;
&lt;DIV&gt;&lt;SPAN&gt;The data ingestion pipeline runs periodically, populating the gold tables hosted on Azure SQL Database. Data isolation is managed using row-level security or multiple schemas, tailored to the ISV's requirements. The next step for the ISV is to provide access for its tenants to the data through a web application, leveraging homegrown dashboards and reporting capabilities. Often, these ISVs are small companies that do not have the resources to implement a full Business Continuity and Disaster Recovery (BCDR) approach or afford paid tools like Power BI, and thus rely on homegrown or free packages.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;BR /&gt;
&lt;DIV&gt;&lt;SPAN&gt;Despite having a robust infrastructure, the ISV faces several challenges:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;BR /&gt;
&lt;UL class="lia-list-style-type-disc"&gt;
&lt;LI&gt;&lt;STRONG&gt;Complex Query Language&lt;/STRONG&gt;&lt;SPAN&gt;: Users often struggle with the complexity of SQL or other query languages required to extract insights from the data. This creates a barrier to effective data utilization.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Performance and Scalability&lt;/STRONG&gt;&lt;SPAN&gt;: The server load increases significantly with complex queries, especially when multiple tenants access the data simultaneously. This can lead to performance bottlenecks and scalability issues.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost and Resource Management&lt;/STRONG&gt;&lt;SPAN&gt;: Hosting the necessary computational resources to handle data queries on the ISV’s servers is resource-intensive and costly. This includes maintaining high-performance databases and application servers.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;User Experience&lt;/STRONG&gt;&lt;SPAN&gt;: Customers increasingly demand the ability to interact with their data using natural language, expecting a seamless and intuitive user experience.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV&gt;&lt;SPAN&gt;For more detailed information on the medallion pattern, you can refer to &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/onelake/onelake-medallion-lakehouse-architecture" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;this link&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;&lt;img /&gt;&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;H3&gt;The architecture diagram above illustrates the current setup:&lt;/H3&gt;
&lt;DIV&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Data Sources&lt;/STRONG&gt;&lt;SPAN&gt;: Public sources and tenant data are ingested into the system.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Storage&lt;/STRONG&gt;&lt;SPAN&gt;: The data lake (or lakehouse) processes the data from multiple sources, performs cleansing, and periodically stores the results in the gold tables.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Orchestrator&lt;/STRONG&gt;&lt;SPAN&gt;: Orchestrating ELT/ETL is done using Azure Fabric/Synapse or Azure Data Factory pipelines.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Serving&lt;/STRONG&gt;&lt;SPAN&gt;: The web application is hosted on Azure App Service, the data is queried using Azure SQL Database.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Visualize&lt;/STRONG&gt;&lt;SPAN&gt;: Data is reported using Power BI or other reporting tools, including homegrown dashboards.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV&gt;
&lt;H2&gt;&lt;SPAN&gt;Enhanced Approach: Energy-Efficient Data Interaction&lt;/SPAN&gt;&lt;/H2&gt;
&lt;BR /&gt;
&lt;DIV&gt;&lt;SPAN&gt;To address the challenges mentioned earlier, the ISV can adopt the following strategies:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;BR /&gt;
&lt;UL class="lia-list-style-type-disc"&gt;
&lt;LI&gt;&lt;U&gt;&lt;STRONG&gt;Leveraging Deterministic Tools for Query Execution&lt;/STRONG&gt;&lt;/U&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Translation&lt;/STRONG&gt;&lt;SPAN&gt;: Utilize LLMs to convert natural language queries into SQL.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Execution&lt;/STRONG&gt;&lt;SPAN&gt;: Create a sandbox environment for each customer's data. This sandbox is hosted on lower-cost storage, such as a storage container per customer, which contains a snapshot of the data they can interact with.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Data Management&lt;/STRONG&gt;&lt;SPAN&gt;: The same data ingestion pipeline that updates the gold table in Azure SQL is adapted to update a customer-specific data set stored in their respective storage container. &lt;/SPAN&gt;&lt;STRONG&gt;The idea is to use SQLite to store the customer-specific data, ensuring it is lightweight and portable.&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Benefits&lt;/STRONG&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;U&gt;Efficiency and Security&lt;/U&gt;&lt;SPAN&gt;: Ensures that queries are executed efficiently and securely, leveraging the robust capabilities of SQL databases while minimizing risks. By isolating each customer's data in a sandbox, the need for sophisticated guardrails against bad queries and overloading the reporting database is significantly reduced.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;U&gt;Cost &amp;amp; Energy Savings&lt;/U&gt;&lt;SPAN&gt;: No need to manage or host a dedicated reporting database. Since the customer-specific data is hosted on Azure storage containers, the ISV avoids the costs and energy consumption associated with maintaining high-performance database infrastructure.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;U&gt;Scalability and Reliability&lt;/U&gt;&lt;SPAN&gt;: The ISV does not need to plan for the worst-case scenario of all customers running queries simultaneously, which could impact the health of a centralized reporting database. Each customer's queries are isolated to their data, ensuring system stability and performance.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;U&gt;&lt;STRONG&gt;Offloading Compute to Client Devices&lt;/STRONG&gt;&lt;/U&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Data Transmission&lt;/STRONG&gt;&lt;SPAN&gt;: The client-side application ensures it has the current data snapshot available for the customer to work with. For example, it can check the data’s timestamp or use another method to verify if the local data is up-to-date and download the latest version if necessary. This snapshot is encapsulated in portable formats like JSON, SQLite, or Parquet.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Local Processing&lt;/STRONG&gt;&lt;SPAN&gt;: The client-side application processes the data locally using the translated SQL queries.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Benefits&lt;/STRONG&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;U&gt;Performance&lt;/U&gt;&lt;SPAN&gt;: Reduces server load, enhances scalability, and provides faster query responses by utilizing the client’s computational resources.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;U&gt;Cost &amp;amp; Energy Savings&lt;/U&gt;&lt;SPAN&gt;: Significant cost savings by reducing the need for high-performance server infrastructure. Hosting a static website and leveraging client devices' processing power also reduces overall energy consumption.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;U&gt;Flexibility&lt;/U&gt;&lt;SPAN&gt;: Ensures that customers always work with the most current data without the need for constant server communication.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
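The sandbox-and-snapshot idea above can be sketched in a few lines; the storage URL, table, and columns are illustrative placeholders, not names from the article:

```shell
# Placeholder URL: each tenant's snapshot lives in its own storage container.
SNAPSHOT_URL="https://mystorageaccount.blob.core.windows.net/tenant-001/gold.sqlite"

# Download the tenant-specific SQLite snapshot once...
curl -sf -o gold.sqlite "$SNAPSHOT_URL"

# ...then run the LLM-translated SQL locally, never touching a central database.
sqlite3 gold.sqlite "SELECT region, SUM(revenue) FROM sales_gold GROUP BY region;"
```

Because the query runs against a local, read-only copy of the tenant's own data, a badly formed or expensive query can at worst slow down that one client, not the shared reporting database.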
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;H3&gt;Revised Architecture&lt;/H3&gt;
&lt;DIV&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Data Sources&lt;/STRONG&gt;&lt;SPAN&gt;: Public sources and tenant data are ingested into the system.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Storage&lt;/STRONG&gt;&lt;SPAN&gt;: The data lake (or lakehouse) processes the data from multiple sources, performs cleansing, and stores the results in customer-specific containers. This enhances security and isolation.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;STRONG&gt;Orchestrator&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;: Orchestrating ELT/ETL is done using Azure Fabric/Synapse or Azure Data Factory pipelines.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;BR /&gt;
&lt;DIV&gt;&lt;SPAN&gt;The above components are hosted in the ISV's infrastructure.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;The client-side web application pulls the data from the customer-specific containers and processes it locally. Please visit our &lt;A href="https://github.com/Azure-Samples/aoai-net-starterkit" target="_blank" rel="noopener"&gt;Azure OpenAI .NET Starter Kit&lt;/A&gt; for further reading, focusing on the&amp;nbsp;&lt;STRONG&gt;&lt;EM&gt;07_ChatWithJson&lt;/EM&gt;&lt;/STRONG&gt; and &lt;EM&gt;&lt;STRONG&gt;08_ChatWithData&lt;/STRONG&gt;&lt;/EM&gt; notebooks.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;SPAN&gt;Why This Approach?&lt;/SPAN&gt;&lt;/H2&gt;
&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;STRONG&gt;Efficiency&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;: Data queries are executed locally, reducing the load on the server and improving performance.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;STRONG&gt;Security&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;: Data is securely isolated within a client-side sandbox, ensuring customers can only query what is provided.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;STRONG&gt;Cost &amp;amp; Energy Saving&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;: Hosting a static website is significantly cheaper and more energy-efficient than hosting a web application with a database. This approach leverages the processing power of client devices, further reducing infrastructure costs and energy consumption.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;STRONG&gt;Scalability&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;: By isolating each customer's data in a sandbox, the ISV does not need to worry about the impact of simultaneous queries on a centralized database, ensuring system reliability and scalability.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Flexibility&lt;/STRONG&gt;&lt;SPAN&gt;: Ensures that customers always have access to the most current data without the need for constant server communication.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;SPAN&gt;Potential Downsides and Pitfalls&lt;/SPAN&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Client-Side Performance Variability&lt;/STRONG&gt;&lt;SPAN&gt;: The approach relies on the computational power of client devices, so performance can vary significantly across hardware.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Data Synchronization&lt;/STRONG&gt;&lt;SPAN&gt;: Ensuring that the local data snapshot on client devices is up-to-date can be challenging. Delays in synchronization could lead to users working with outdated data.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
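The data-synchronization risk above can be mitigated with a simple client-side freshness check before querying the local snapshot. A minimal sketch, assuming snapshot metadata ships alongside the customer data (the field names and 24-hour budget are illustrative, not part of the referenced starter kit):

```python
from datetime import datetime, timedelta, timezone

# Assumed freshness budget for the locally cached snapshot (illustrative).
MAX_SNAPSHOT_AGE = timedelta(hours=24)

def is_snapshot_stale(generated_at: datetime, now: datetime) -> bool:
    """Return True when the local snapshot exceeds the freshness budget,
    signalling the client to re-pull from its customer-specific container."""
    return now - generated_at > MAX_SNAPSHOT_AGE

now = datetime(2024, 8, 20, 12, 0, tzinfo=timezone.utc)
fresh = datetime(2024, 8, 20, 1, 0, tzinfo=timezone.utc)   # 11 hours old
stale = datetime(2024, 8, 18, 12, 0, tzinfo=timezone.utc)  # 48 hours old
print(is_snapshot_stale(fresh, now), is_snapshot_stale(stale, now))  # False True
```

A check like this keeps server communication to an occasional metadata probe while bounding how outdated the locally queried data can become.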
&lt;H2&gt;&lt;SPAN&gt;Conclusion&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;By adopting these strategies, ISVs can provide a more efficient, scalable, and cost-effective solution for natural language querying of structured data. Leveraging deterministic tools for executing domain-specific languages within isolated sandboxes ensures robust and secure query execution. Offloading compute to client devices not only reduces server load but also enhances performance and scalability, providing a seamless and intuitive user experience.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Tue, 20 Aug 2024 12:37:23 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/client-side-compute-a-greener-approach-to-natural-language-data/m-p/4221761#M219</guid>
      <dc:creator>yodobrin</dc:creator>
      <dc:date>2024-08-20T12:37:23Z</dc:date>
    </item>
    <item>
      <title>Simplify app management with the Azure App Service Insights Workbook</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/simplify-app-management-with-the-azure-app-service-insights/m-p/4180008#M218</link>
      <description>&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Introduction&lt;/H2&gt;
&lt;P&gt;Developing, maintaining, and managing multiple applications on Azure App Service Plans can be complex. At scale, the challenge intensifies as you juggle different Subscriptions, Resource Groups, Operating Systems &lt;FONT size="3"&gt;(Linux, Windows)&lt;/FONT&gt;, Regions, Scale (instances), Pricing Plans, and Resource Allocations across various environments.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To simplify this, I have developed an &lt;A title="Azure App Service Insights Workbook" href="https://github.com/dolevshor/Azure-AppServices-Insights" target="_blank" rel="noopener"&gt;Azure App Service Insights Workbook&lt;/A&gt;.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;This tool provides a unified view to help customers easily track and compare different applications.&lt;/LI&gt;
&lt;LI&gt;It is designed to empower developers, DevOps, and FinOps teams with insights and streamlined monitoring capabilities.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog post, I will explore the benefits of this Workbook for various scenarios and stakeholders, and explain how it can drive cost efficiency.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;TL;DR: Get the&amp;nbsp;&lt;A title="Azure App Service Insights Workbook " href="https://github.com/dolevshor/Azure-AppServices-Insights" target="_blank" rel="noopener"&gt;Workbook on GitHub&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Monitor, Track and Compare&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Holistic View:&amp;nbsp;&lt;/STRONG&gt;This workbook provides a holistic view of all your Azure App Services and Azure App Service Plans.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Management Efficiency:&lt;/STRONG&gt;&amp;nbsp;A simple way to monitor, track, and compare your application behaviors.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Cross-App Comparison&lt;/STRONG&gt;: Compare critical metrics across different applications or App Service Plans to identify trends and outliers. This helps you prioritize optimization efforts and ensure consistent performance across all apps.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Smart Decisions:&lt;/STRONG&gt; Whether you have a single app or multiple apps, being able to see the behaviors will help you be efficient and make smart decisions.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This workbook is not intended to replace monitoring capabilities such as Application Insights or third-party monitoring tools; rather, it gives a holistic view of all Azure App Services with comparison capabilities, and you can dive into each of them with additional tools.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Azure App Service&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Azure App Service&amp;nbsp;is an HTTP-based service for hosting web applications, REST APIs, and mobile backends.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Multiple languages and frameworks&lt;/STRONG&gt;&amp;nbsp;– App Service has first-class support for .NET, .NET Core, Java, Node.js, PHP, or Python.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Publish&lt;/STRONG&gt; – You can choose to publish as Code, a Container, or a Static Web App.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Global scale with high availability&lt;/STRONG&gt;&amp;nbsp;- Scale&amp;nbsp;&lt;A title="Scale Up" href="https://learn.microsoft.com/en-us/azure/app-service/manage-scale-up" target="_blank" rel="noopener"&gt;up&lt;/A&gt;&amp;nbsp;or&amp;nbsp;&lt;A title="Scale Out" href="https://learn.microsoft.com/en-us/azure/azure-monitor/autoscale/autoscale-get-started" target="_blank" rel="noopener"&gt;out&lt;/A&gt;&amp;nbsp;manually or automatically. Host your apps anywhere in Microsoft's global data center infrastructure, and the App Service&amp;nbsp;&lt;A href="https://azure.microsoft.com/support/legal/sla/app-service/" target="_blank" rel="noopener"&gt;SLA&lt;/A&gt;&amp;nbsp;promises high availability.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Operating System&lt;/STRONG&gt; – Run your app on both Windows and&amp;nbsp;&lt;A title="App Service on Linux" href="https://learn.microsoft.com/en-us/azure/app-service/overview#app-service-on-linux" target="_blank" rel="noopener"&gt;Linux&lt;/A&gt;-based environments.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Allocation&lt;/STRONG&gt; - You can host multiple apps on a single App Service Plan. (Ref: &lt;A href="https://learn.microsoft.com/en-us/azure/app-service/overview-hosting-plans#should-i-put-an-app-in-a-new-plan-or-an-existing-plan" target="_blank" rel="noopener"&gt;App Service plans - Azure App Service | Microsoft Learn&lt;/A&gt;)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Holistic View&lt;/H2&gt;
&lt;H3&gt;Filters&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;The workbook supports filtering by Subscriptions, Resource Groups, App Service Plans, and Apps.&lt;/LI&gt;
&lt;LI&gt;You can select multiple options (e.g., 2 App Service Plans).&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;Time range&lt;/EM&gt; - View the metrics over a selected time range.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Overview&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Overview of all your App Service Plans, App Services, and Staging Slots.
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;&lt;STRONG&gt;App Service Plan: &lt;/STRONG&gt;Location, Operating System, Tiers, Status, Number of hosted Apps, Instances, Maximum Scale.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;App Service/Slot:&amp;nbsp;&lt;/STRONG&gt;Location, Kind, Type, Tier, App Service Plan, State.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Monitor&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Overview of the metrics for all your App Service Plans, App Services, and Staging Slots.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;App Service Plan&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;App Services and Staging Slots&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Additional Metrics&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Each resource type has different Metrics.&lt;/LI&gt;
&lt;LI&gt;Expanding &lt;EM&gt;Additional Metrics&lt;/EM&gt; lets you view and compare all the metrics supported by the resource type.
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;&lt;STRONG&gt;App Service Plan:&lt;/STRONG&gt; Data, Sockets, TCP, Queue Length&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;App Service:&lt;/STRONG&gt; Requests/Response, Data, HTTP, IO, Garbage Collections, Others&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Staging Slot:&lt;/STRONG&gt;&amp;nbsp;Requests/Response, Data, HTTP, IO, Garbage Collections, App Domains, Others&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;App Service | Data Metrics&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;App Service | Additional Metrics&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Cross-App/Plan Comparison&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;The workbook supports comparing metrics between two or more resources.&lt;/LI&gt;
&lt;LI&gt;Resources can be an App Service Plan, App Service or Staging slot.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;App Service Plan&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;App services&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Java vs .NET 6 Example&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Both App Services are in the Same App Service Plan.&lt;/LI&gt;
&lt;LI&gt;Different Runtime (Java vs .NET 6).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Node.js vs .NET 8 Example&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Different App Service Plan.&lt;/LI&gt;
&lt;LI&gt;Different Regions.&lt;/LI&gt;
&lt;LI&gt;Different Runtime (Node.js vs .NET 8).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;App Service Plan&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;App Services&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Inventory&lt;/H2&gt;
&lt;P&gt;The Inventory dashboard provides a holistic view of all the App Services resources grouped by various categories, e.g., Subscriptions, Resource Groups, Tier, and Status.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;For Developers&lt;/H2&gt;
&lt;P&gt;Developers often face the challenge of ensuring that a single or multiple applications run smoothly and efficiently.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;This workbook allows you to compare key metrics and can be invaluable in identifying trends, pinpointing issues, validating changes, and ensuring optimal performance across various scenarios.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE border="1" width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="24.923076923076927%" height="30px"&gt;&lt;STRONG&gt;Scenario&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="37.43589743589743%" height="30px"&gt;&lt;STRONG&gt;Use case&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="37.64102564102564%" height="30px"&gt;&lt;STRONG&gt;How this Workbook helps?&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="57px"&gt;Development and Testing&lt;/TD&gt;
&lt;TD height="57px"&gt;
&lt;P&gt;&lt;SPAN&gt;Profiling code, unit testing, and integration testing&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD height="57px"&gt;&lt;SPAN&gt;Compare metrics during different stages of development to ensure consistency and efficiency&lt;/SPAN&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="30px"&gt;Application Performance Testing&lt;/TD&gt;
&lt;TD height="30px"&gt;
&lt;P&gt;Load, stress and scalability testing&lt;/P&gt;
&lt;/TD&gt;
&lt;TD height="30px"&gt;&lt;SPAN&gt;Compare pre-test and post-test metrics to identify performance improvements or regressions&lt;/SPAN&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="289px"&gt;Comparative Analysis Across Multiple Applications&lt;/TD&gt;
&lt;TD height="289px"&gt;
&lt;P&gt;Developer manages multiple applications and needs to compare their performance to identify which ones require attention&lt;/P&gt;
&lt;/TD&gt;
&lt;TD height="289px"&gt;
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;&lt;STRONG&gt;Unified View:&amp;nbsp;&lt;/STRONG&gt;Aggregate metrics from all applications into a single view for easy comparison.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Benchmarking:&amp;nbsp;&lt;/STRONG&gt;Set benchmarks for key performance indicators and compare each application against these benchmarks.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Identify Outliers:&amp;nbsp;&lt;/STRONG&gt;Quickly identify applications that are underperforming or experiencing issues, allowing the developer to prioritize optimization efforts.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="24.923076923076927%" height="316px"&gt;Diagnosing Performance Bottlenecks&lt;/TD&gt;
&lt;TD width="37.43589743589743%" height="316px"&gt;&lt;SPAN&gt;Investigating issues like slow response times, crashes, or memory leaks to find the root cause&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD width="37.64102564102564%" height="316px"&gt;
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;&lt;STRONG&gt;Detailed Metrics:&amp;nbsp;&lt;/STRONG&gt;Access detailed metrics on CPU usage, memory consumption, and request latencies to pinpoint performance bottlenecks.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Correlation Analysis:&amp;nbsp;&lt;/STRONG&gt;&lt;SPAN&gt;Correlate and compare different metrics to pinpoint the root cause of issues.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Historical Data:&amp;nbsp;&lt;/STRONG&gt;Review historical performance data to identify trends and recurring issues that may be affecting application performance.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="124px"&gt;Third-Party Services&lt;/TD&gt;
&lt;TD height="124px"&gt;
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;Monitoring the impact of third-party libraries and SDKs on resource usage&lt;/LI&gt;
&lt;LI&gt;Ensuring that third-party integrations do not degrade overall performance&lt;/LI&gt;
&lt;/UL&gt;
&lt;/TD&gt;
&lt;TD height="124px"&gt;&lt;SPAN&gt;Ensure third-party services meet performance expectations by comparing historical data&lt;/SPAN&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="57px"&gt;Production Monitoring **&lt;/TD&gt;
&lt;TD height="57px"&gt;
&lt;P&gt;&lt;SPAN&gt;Monitoring metric performance&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD height="57px"&gt;
&lt;P&gt;&lt;SPAN&gt;Track historical metric trends to maintain optimal performance&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="24.923076923076927%" height="344px"&gt;Deployment and Scale **&lt;/TD&gt;
&lt;TD width="37.43589743589743%" height="344px"&gt;
&lt;P&gt;&lt;SPAN&gt;Compare resource usage before and after deployments or scaling to ensure no performance degradation&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="37.64102564102564%" height="344px"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;&lt;STRONG&gt;Pre and Post Deployment/Scaling Comparison:&amp;nbsp;&lt;/STRONG&gt;Compare key performance metrics before and after deployment or scaling to assess the impact.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Custom Metrics:&amp;nbsp;&lt;/STRONG&gt;Monitor specific metrics related to the new features or changes introduced in the deployment.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Rollback Indicators:&amp;nbsp;&lt;/STRONG&gt;Quickly identify any negative impacts, such as increased error rates or degraded performance, to make informed rollback decisions if necessary.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="24.923076923076927%" height="218px"&gt;System Upgrades **&lt;/TD&gt;
&lt;TD width="37.43589743589743%" height="218px"&gt;Verifying performance after software upgrades (runtime, new release, new feature, etc.)&lt;/TD&gt;
&lt;TD width="37.64102564102564%" height="218px"&gt;
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;STRONG&gt;Performance Validation: &lt;/STRONG&gt;Validate that upgrades have not unexpectedly degraded performance.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Rollback Indicators:&amp;nbsp;&lt;/STRONG&gt;Quickly identify any negative impacts, such as degraded performance, to make informed rollback decisions if necessary.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;** In collaboration with the DevOps teams.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;For DevOps Engineers&lt;/H2&gt;
&lt;P&gt;DevOps engineers face several performance challenges when managing and operating App Service Plans.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE border="1" width="969px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="240.781px" height="42px"&gt;&lt;STRONG&gt;Scenario&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="362.625px" height="42px"&gt;&lt;STRONG&gt;Use case&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="364.594px" height="42px"&gt;&lt;STRONG&gt;How this Workbook helps?&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="240.781px" height="57px"&gt;Production Monitoring&lt;/TD&gt;
&lt;TD width="362.625px" height="57px"&gt;
&lt;P&gt;Monitoring&amp;nbsp;metric performance&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="364.594px" height="57px"&gt;&lt;SPAN&gt;Track historical trends metrics to maintain optimal performance&lt;/SPAN&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="240.781px" height="85px"&gt;Scale&lt;/TD&gt;
&lt;TD width="362.625px" height="85px"&gt;
&lt;P&gt;&lt;SPAN&gt;Ensuring that the app services can scale effectively to handle varying loads without sacrificing performance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="364.594px" height="85px"&gt;&lt;SPAN&gt;Track resource usage to ensure balanced resource allocation&lt;/SPAN&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="240.781px" height="57px"&gt;Performance and Cost ***&lt;/TD&gt;
&lt;TD width="362.625px" height="57px"&gt;
&lt;P&gt;&lt;SPAN&gt;Balancing performance and cost, especially when scaling up/out.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="364.594px" height="57px"&gt;&lt;SPAN&gt;Track resource usage to ensure efficiency.&lt;/SPAN&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="240.781px" height="87px"&gt;Resource Allocations ***&lt;/TD&gt;
&lt;TD width="362.625px" height="87px"&gt;
&lt;P&gt;&lt;SPAN&gt;Deciding the right SKU of the App Service Plans (e.g.: CPU, memory) and configuring auto-scaling to handle varying loads.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="364.594px" height="87px"&gt;&lt;SPAN&gt;Track resource usage to ensure resource right-sizing&lt;/SPAN&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="240.781px" height="85px"&gt;Multiple-Environments ***&lt;/TD&gt;
&lt;TD width="362.625px" height="85px"&gt;
&lt;P&gt;&lt;SPAN&gt;Managing configurations across different environments (development, staging, production) and multiple subscriptions.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="364.594px" height="85px"&gt;&lt;SPAN&gt;Monitor and compare resource usage across different environments for optimal performance&lt;/SPAN&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="240.781px" height="85px"&gt;Multiple-Regions ***&lt;/TD&gt;
&lt;TD width="362.625px" height="85px"&gt;
&lt;P&gt;&lt;SPAN&gt;Managing configurations across different Regions (e.g., West Europe, East US, etc.) and multiple subscriptions.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="364.594px" height="85px"&gt;&lt;SPAN&gt;Monitor and compare resource usage across different Regions for optimal performance&lt;/SPAN&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*** In collaboration with the FinOps teams.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;For FinOps (Cost Efficiency)&lt;/H2&gt;
&lt;P&gt;Effective cost management is a critical aspect of cloud operations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This Workbook provides several capabilities to help track and optimize your cloud spending:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE border="1"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="242px" height="55px"&gt;&lt;STRONG&gt;Scenario&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="364px" height="55px"&gt;&lt;STRONG&gt;Use case&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="366px" height="55px"&gt;&lt;STRONG&gt;How this Workbook helps?&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="242px" height="206px"&gt;Resource Optimization / Over-Provisioning&lt;/TD&gt;
&lt;TD width="364px" height="206px"&gt;&lt;SPAN&gt;Optimizing application code and resource utilization&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD width="366px" height="206px"&gt;
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;&lt;STRONG&gt;Resource Utilization Insights:&amp;nbsp;&lt;/STRONG&gt;Track usage to identify underutilized resources that can be scaled down or repurposed.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost Analysis:&amp;nbsp;&lt;/STRONG&gt;Understand the cost implications of resource usage and make data-driven decisions to optimize both performance and cost.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="242px" height="26px"&gt;Idle Resources&lt;/TD&gt;
&lt;TD width="364px" height="26px"&gt;Identification of unused resources&lt;/TD&gt;
&lt;TD width="366px" height="26px"&gt;
&lt;P&gt;Monitor App Service Plans that have not hosted any applications.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Reservations and Savings Plan&lt;/TD&gt;
&lt;TD&gt;Identification of potential discounts&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Monitor App Service Plans to understand their utilization and identify opportunities to apply Reservations or Savings Plans.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
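The idle-resource scenario above reduces to flagging plans that host zero apps but still bill for their SKU. A minimal sketch over inventory rows like those the workbook's overview surfaces (the row shape and plan names are illustrative, not the workbook's actual query output):

```python
# Each row mirrors the workbook's App Service Plan overview:
# plan name, pricing tier, and number of hosted apps (illustrative data).
plans = [
    {"name": "asp-prod-weu", "tier": "P1v3", "apps": 4},
    {"name": "asp-dev-weu",  "tier": "B1",   "apps": 0},
    {"name": "asp-test-eus", "tier": "S1",   "apps": 0},
]

def idle_plans(rows):
    """Plans with no hosted apps still incur their full SKU cost; flag them
    as candidates for deletion or consolidation."""
    return [row["name"] for row in rows if row["apps"] == 0]

print(idle_plans(plans))  # ['asp-dev-weu', 'asp-test-eus']
```

The same per-plan app count also feeds the over-provisioning scenario: plans hosting one small app on a large SKU are consolidation candidates.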
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;The &lt;EM&gt;Azure App Service Insights&lt;/EM&gt; Workbook is a lightweight tool designed to simplify application management for developers, DevOps, and FinOps teams. By offering a unified view of key metrics, it streamlines operations and enhances efficiency across the board.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Whether you are a developer looking to optimize application performance, a DevOps engineer focused on reliability, or a FinOps engineer aiming to optimize and reduce costs, the Azure App Service Insights Workbook has something valuable to offer.&lt;/P&gt;
      <pubDate>Mon, 08 Jul 2024 14:17:30 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/simplify-app-management-with-the-azure-app-service-insights/m-p/4180008#M218</guid>
      <dc:creator>Dolev_Shor</dc:creator>
      <dc:date>2024-07-08T14:17:30Z</dc:date>
    </item>
    <item>
      <title>Evaluating the quality of AI document data extraction with small and large language models</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/evaluating-the-quality-of-ai-document-data-extraction-with-small/m-p/4157719#M215</link>
      <description>&lt;P&gt;Evaluating the effectiveness of AI models in document data extraction, comparing accuracy, speed, and cost-effectiveness between Small and Large Language Models (SLMs and LLMs).&lt;/P&gt;
&lt;H2&gt;Context&lt;/H2&gt;
&lt;P&gt;As the adoption of AI in solutions increases, technical decision-makers face challenges in selecting the most effective approach for document data extraction. Ensuring high quality is crucial, particularly when dealing with critical solutions where minor errors have substantial consequences. As the volume of documents increases, it becomes essential to choose solutions that can scale efficiently without compromising performance.&lt;/P&gt;
&lt;P&gt;This article evaluates AI document data extraction techniques using Small Language Models (SLMs) and Large Language Models (LLMs), with a specific focus on structured and unstructured data scenarios.&lt;/P&gt;
&lt;P&gt;By evaluating these models, the article provides insights into their accuracy, speed, and cost-efficiency for quality data extraction, offering guidance both on how to evaluate models and on the quality of their outputs for specific scenarios.&lt;/P&gt;
&lt;H2&gt;Key challenges of effective document data extraction&lt;/H2&gt;
&lt;P&gt;With many AI models available to ISVs and Startups, challenges arise in which technique is the most effective for quality document data extraction. When evaluating the quality of AI models, key challenges include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Ensuring high accuracy and reliability.&lt;/STRONG&gt; High accuracy and confidence are crucial, especially for critical applications such as legal or financial documents. Minor errors in data extraction could lead to significant issues. Additionally, robust data validation mechanisms verify the data and minimize false positives and negatives.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Getting results in a timely manner.&lt;/STRONG&gt; As the volume of documents increases, the selected approach must scale efficiently to handle large document quantities without significant impact. Balancing the need for fast processing speeds with maintaining high accuracy levels is challenging.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Balancing cost with accuracy and efficiency.&lt;/STRONG&gt; Ensuring high accuracy and efficiency often requires the most advanced AI models, which can be expensive. Evaluating AI models and techniques highlights the most cost-effective solution without compromising on the quality of the data extraction.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;When choosing an AI model for document data extraction on Azure, there is no one-size-fits-all solution. Depending on the scenario, one model may outperform another on accuracy at a higher cost, while another may provide sufficient accuracy at a much lower cost.&lt;/P&gt;
&lt;H2&gt;Establishing evaluation techniques for AI models in document data extraction&lt;/H2&gt;
&lt;P&gt;When evaluating AI models for document data extraction, it’s important to understand how they perform for specific use cases. This evaluation focused on structured and unstructured scenarios to provide insights into simple and complex document structures.&lt;/P&gt;
&lt;H3&gt;Evaluation Scenarios&lt;/H3&gt;
&lt;P&gt;&lt;EM&gt;Three example document scenarios for evaluating AI models in document data extraction&lt;/EM&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Structured Data: Invoices&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;A collection of assorted invoices with varying simple and complex layouts, handwritten signatures, obscured content, and handwritten notes across margins.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Unstructured Data&lt;/STRONG&gt;: &lt;STRONG&gt;Vehicle Insurance Policy&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;A 10+ page vehicle insurance policy document containing both structured and unstructured data, including natural, domain-specific language with inferred data. This scenario focuses on extracting data by combining structured data with the natural language throughout the document.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;H3&gt;Models and Techniques&lt;/H3&gt;
&lt;P&gt;This evaluation focused on multiple techniques for data extraction with the language models:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Markdown Extraction with Azure AI Document Intelligence&lt;/STRONG&gt;. This technique involves converting the document into Markdown using the pre-built layout model in Azure AI Document Intelligence. Read more about this technique in &lt;A href="https://techcommunity.microsoft.com/t5/azure-for-isv-and-startups/using-azure-ai-document-intelligence-and-azure-openai-to-extract/ba-p/4107746" target="_blank" rel="noopener"&gt;our detailed article&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Vision Capabilities of Multi-Modal Language Models&lt;/STRONG&gt;. This technique focuses on GPT-4o and GPT-4o Mini models by converting the document pages to images. This leverages the models’ capabilities to analyze both text and visual elements. Explore this technique in more detail in &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/samples/azure-samples/azure-ai-document-processing-samples/document-processing-with-azure-ai-samples/" target="_blank" rel="noopener"&gt;our sample project&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Comprehensive Combination&lt;/STRONG&gt;. This technique combines both Markdown extraction with vision capable models to enhance the extraction process. Additionally, the layout analysis of Azure AI Document Intelligence will ease the human review of a document if the confidence or accuracy is low.&amp;nbsp;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;For each technique, the model is prompted using either&amp;nbsp;&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/azureforisvandstartupstechnicalblog/using-structured-outputs-in-azure-openai%e2%80%99s-gpt-4o-for-consistent-document-data-p/4261737" target="_blank" rel="noopener" data-lia-auto-title="Structured Outputs in GPT-4o" data-lia-auto-title-active="0"&gt;Structured Outputs in GPT-4o&lt;/A&gt; or with inline JSON schemas for other models. This establishes the expected output, improving the overall accuracy of the generated response.&lt;/P&gt;
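The Structured Outputs prompting described above can be pictured as a chat-completions request body whose `response_format` constrains generation to a schema. A sketch of the payload shape under stated assumptions: the invoice schema and field names are illustrative, and the exact request format should be verified against the Azure OpenAI API version in use.

```python
import json

# Illustrative extraction schema for the invoice scenario (not from the article).
invoice_schema = {
    "type": "object",
    "properties": {
        "invoice_number": {"type": "string"},
        "total": {"type": "number"},
    },
    "required": ["invoice_number", "total"],
    "additionalProperties": False,
}

# Chat-completions request body using Structured Outputs to establish
# the expected output shape, as described in the linked article.
request_body = {
    "messages": [
        {"role": "system",
         "content": "You are an AI assistant that extracts data from documents."},
        {"role": "user",
         "content": "Extract the fields from the attached invoice."},
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "invoice", "strict": True, "schema": invoice_schema},
    },
}
print(json.dumps(request_body, indent=2)[:80])
```

For models without Structured Outputs support (such as Phi-3.5 MoE here), the same schema would instead be embedded inline in the prompt text.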
&lt;P&gt;The AI models evaluated in this analysis include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://ai.azure.com/explore/models/Phi-3.5-MoE-instruct/version/4/registry/azureml?tid=ffd04b18-2c0c-4078-82eb-4d8558089235#OfferDetails" target="_blank" rel="noopener"&gt;Phi-3.5 MoE&lt;/A&gt;, an SLM deployed as a serverless endpoint in Azure AI Studio&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="https://ai.azure.com/explore/models/gpt-4o/version/1/registry/azure-openai" target="_blank" rel="noopener"&gt;GPT-4o (2024-08-06)&lt;/A&gt;&lt;/STRONG&gt;, an LLM deployed with 10K TPM in Azure OpenAI&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://ai.azure.com/explore/models/gpt-4o-mini/version/2/registry/azure-openai?tid=c84bc78e-5441-41e2-b2ce-25cb27955e89" target="_blank" rel="noopener"&gt;GPT-4o Mini (2024-07-18)&lt;/A&gt;, an LLM deployed with 10K TPM in Azure OpenAI&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Evaluation Methodology&lt;/H3&gt;
&lt;P&gt;To ensure a reliable and consistent evaluation, the following approach was established:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Baseline Accuracy&lt;/STRONG&gt;. A single source of truth for the data extraction results ensures each model’s output is compared against a standard. This approach, while manually intensive, provides a precise measure for accuracy.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Confidence&lt;/STRONG&gt;. To determine when an extraction should be escalated to a human for review, each model provides an internal assessment of how certain it is about its predicted output. Azure OpenAI provides these confidence values as&amp;nbsp;&lt;A class="lia-external-url" href="https://cookbook.openai.com/examples/using_logprobs" target="_blank" rel="noopener"&gt;logprobs&lt;/A&gt;, while Azure AI Document Intelligence returns confidence scores by default in the response.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Execution Time&lt;/STRONG&gt;. This is calculated as the time from the initial data extraction request to the response, without streaming. For scenarios utilizing the Markdown technique, the time covers the end-to-end processing, including the request and response from Azure AI Document Intelligence.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost Analysis&lt;/STRONG&gt;. Using the average input and output tokens from each iteration, the estimated cost per 1,000 pages is calculated, providing a clearer picture of cost-effectiveness at scale.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Consistent Prompting&lt;/STRONG&gt;. Each model has the same system and extraction prompt. The system prompt is consistent across all scenarios as “&lt;STRONG&gt;You are an AI assistant that extracts data from documents&lt;/STRONG&gt;”. Each scenario has its own extraction prompt, including the output schema.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Multiple Iterations&lt;/STRONG&gt;. 10 variants of the document are run per model technique. Every property in the result is compared for an exact match against the standard response. These comparisons provide the results for accuracy, confidence, execution time, and cost.&lt;/LI&gt;
&lt;/OL&gt;
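&lt;P&gt;As a sketch of the logprobs-based confidence described above, the helper below averages the per-token probabilities of the tokens that make up a predicted value. The aggregation shown is an assumption for illustration; the exact calculation used in the analysis may differ.&lt;/P&gt;

```python
import math

def field_confidence(token_logprobs: list[float]) -> float:
    """Average the per-token probabilities for a predicted field value.
    With logprobs enabled, each Azure OpenAI chat completion choice carries a
    list of tokens with their log-probabilities; math.exp converts a logprob
    back into a probability between 0 and 1."""
    if not token_logprobs:
        return 0.0
    return sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)

# Example: three tokens the model was highly certain about.
confidence = field_confidence([-0.0001, -0.002, -0.01])
```

&lt;P&gt;A value close to 1.0 indicates the model was confident in the tokens it generated for that field; lower values are candidates for human review.&lt;/P&gt;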
&lt;P&gt;These metrics establish the baseline evaluation. With a baseline in place, you can experiment with the prompt, schema, and request configuration, and compare improvements in overall quality across accuracy, confidence, speed, and cost.&lt;/P&gt;
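&lt;P&gt;The cost calculation from the methodology can be sketched as simple arithmetic over average token usage. The token counts and per-1M-token prices below are illustrative placeholders, not current Azure OpenAI list prices.&lt;/P&gt;

```python
def cost_per_1000_pages(avg_input_tokens: float, avg_output_tokens: float,
                        input_price_per_1m: float, output_price_per_1m: float,
                        pages_per_request: int = 1) -> float:
    """Estimate the cost of processing 1,000 pages from the average token
    usage of a single extraction request. Prices are per 1M tokens."""
    cost_per_request = (avg_input_tokens * input_price_per_1m
                        + avg_output_tokens * output_price_per_1m) / 1_000_000
    requests_per_1000_pages = 1000 / pages_per_request
    return cost_per_request * requests_per_1000_pages

# e.g. 2,500 input and 400 output tokens per single-page request,
# at illustrative prices of $2.50 / $10.00 per 1M input / output tokens.
estimate = cost_per_1000_pages(2500, 400, 2.50, 10.00)
```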
&lt;P&gt;For the evaluation outlined in this article, we created a Python test project with multiple test cases. Each test case is a combination of a specific use case and model, and each is run independently to ensure that speed is evaluated fairly for each request.&lt;/P&gt;
&lt;P&gt;The tests take advantage of the Python SDKs for both Azure AI Document Intelligence and Azure OpenAI.&amp;nbsp;&lt;/P&gt;
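&lt;P&gt;The exact-match comparison against the single source of truth can be sketched as below. This scorer is a hypothetical simplification of the test project's comparison logic: it walks every leaf property, including nested objects and lists, and reports the fraction that match exactly.&lt;/P&gt;

```python
def exact_match_accuracy(expected: dict, actual: dict) -> float:
    """Compare every property of an extraction result against the source of
    truth, recursing into nested objects and lists. Returns the fraction of
    leaf properties that match exactly."""
    def leaves(obj, prefix=""):
        # Flatten nested dicts/lists into (path, value) pairs.
        if isinstance(obj, dict):
            for k, v in obj.items():
                yield from leaves(v, f"{prefix}{k}.")
        elif isinstance(obj, list):
            for i, v in enumerate(obj):
                yield from leaves(v, f"{prefix}{i}.")
        else:
            yield prefix, obj

    truth = dict(leaves(expected))
    result = dict(leaves(actual))
    matches = sum(1 for path, value in truth.items() if result.get(path) == value)
    return matches / len(truth) if truth else 1.0

# One of two properties matches the ground truth.
score = exact_match_accuracy(
    {"invoice_number": "INV-001", "total": 120.5},
    {"invoice_number": "INV-001", "total": 99.0},
)
```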
&lt;H2&gt;Evaluating AI Models for Structured Data&lt;/H2&gt;
&lt;H3&gt;Complex Invoice Document&lt;/H3&gt;
&lt;DIV class="lia-table-wrapper styles_table-responsive__MW0lN"&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;
&lt;TABLE border="1" width="100%"&gt;&lt;COLGROUP&gt;&lt;COL width="16.6372%" /&gt;&lt;COL width="16.6372%" /&gt;&lt;COL width="16.6372%" /&gt;&lt;COL width="16.6372%" /&gt;&lt;COL width="16.6372%" /&gt;&lt;COL width="16.6372%" /&gt;&lt;/COLGROUP&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD height="59px"&gt;&lt;STRONG&gt;Model&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="59px"&gt;&lt;STRONG&gt;Technique&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="59px"&gt;&lt;STRONG&gt;Accuracy&lt;BR /&gt;&lt;/STRONG&gt;&lt;STRONG&gt;(95th)&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="59px"&gt;&lt;STRONG&gt;Confidence&lt;BR /&gt;(95th)&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="59px"&gt;&lt;STRONG&gt;Speed&lt;BR /&gt;&lt;/STRONG&gt;&lt;STRONG&gt;(95th)&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="59px"&gt;
&lt;P&gt;&lt;STRONG&gt;Est. Cost&lt;BR /&gt;&lt;/STRONG&gt;&lt;STRONG&gt;(1,000 pages)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="27px"&gt;&lt;STRONG&gt;GPT-4o&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="27px"&gt;Vision&lt;/TD&gt;
&lt;TD height="27px"&gt;98.99%&lt;/TD&gt;
&lt;TD height="27px"&gt;99.85%&lt;/TD&gt;
&lt;TD height="27px"&gt;22.80s&lt;/TD&gt;
&lt;TD height="27px"&gt;$7.45&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="51px"&gt;&lt;STRONG&gt;GPT-4o&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="51px"&gt;Vision + Markdown&lt;/TD&gt;
&lt;TD height="51px"&gt;96.60%&lt;/TD&gt;
&lt;TD height="51px"&gt;99.82%&lt;/TD&gt;
&lt;TD height="51px"&gt;22.25s&lt;/TD&gt;
&lt;TD height="51px"&gt;$19.47&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="26.8px"&gt;&lt;STRONG&gt;Phi-3.5 MoE&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="26.8px"&gt;Markdown&lt;/TD&gt;
&lt;TD height="26.8px"&gt;96.11%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;99.49%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;54.00s&lt;/TD&gt;
&lt;TD height="26.8px"&gt;$10.35&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="27px"&gt;&lt;STRONG&gt;GPT-4o&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="27px"&gt;Markdown&lt;/TD&gt;
&lt;TD height="27px"&gt;95.66%&lt;/TD&gt;
&lt;TD height="27px"&gt;99.44%&lt;/TD&gt;
&lt;TD height="27px"&gt;31.60s&lt;/TD&gt;
&lt;TD height="27px"&gt;$16.11&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="51px"&gt;&lt;STRONG&gt;GPT-4o Mini&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="51px"&gt;Vision + Markdown&lt;/TD&gt;
&lt;TD height="51px"&gt;91.84%&lt;/TD&gt;
&lt;TD height="51px"&gt;99.99%&lt;/TD&gt;
&lt;TD height="51px"&gt;56.69s&lt;/TD&gt;
&lt;TD height="51px"&gt;$18.14&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;STRONG&gt;GPT-4o Mini&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;Vision&lt;/TD&gt;
&lt;TD&gt;79.31%&lt;/TD&gt;
&lt;TD&gt;99.76%&lt;/TD&gt;
&lt;TD&gt;56.71s&lt;/TD&gt;
&lt;TD&gt;$8.02&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="27px"&gt;&lt;STRONG&gt;GPT-4o Mini&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="27px"&gt;Markdown&lt;/TD&gt;
&lt;TD height="27px"&gt;78.61%&lt;/TD&gt;
&lt;TD height="27px"&gt;99.76%&lt;/TD&gt;
&lt;TD height="27px"&gt;24.52s&lt;/TD&gt;
&lt;TD height="27px"&gt;$10.41&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When processing invoices in our analysis, GPT-4o with Vision capabilities stands out as the best combination. This approach delivers the highest accuracy and confidence scores, effectively handling complex layouts and visual elements, and does so at reasonable speed and significantly lower cost.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Accuracy&lt;/STRONG&gt; in our evaluation shows that, overall, most models can be regarded as highly accurate. GPT-4o with Vision processing achieves the highest scores for invoices. While we assumed that providing the additional document text context would increase accuracy further, our analysis showed that it's possible to retain high accuracy without it.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Confidence&amp;nbsp;&lt;/STRONG&gt;levels are high across models and techniques, demonstrating that combined with high accuracy, these approaches perform well for automated processing with minimal human intervention.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Speed&lt;/STRONG&gt; is a crucial factor for the scalability of a document processing pipeline. For background processing per document, GPT-4o models process all techniques on a quick timescale. In contrast, small language models like Phi-3.5 MoE took longer, which could impact throughput for large-scale applications.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost&lt;/STRONG&gt;-effectiveness is also essential when building a scalable pipeline to process thousands of document pages. GPT-4o with Vision stands out as the most cost-effective at $7.45 per 1,000 pages. However, all models in Vision or Markdown techniques offer high value when also considering their accuracy, confidence, and speed.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;One significant benefit of using GPT-4o with Vision processing is its ability to handle visual elements such as handwritten signatures, obscured content, and stamps. By processing the document as an image, the model minimizes false positives and negatives that can arise when relying solely on text-based Markdown processing.&lt;/P&gt;
&lt;P&gt;Phi-3.5 MoE is a notable highlight when it comes to small language models. The analysis demonstrates that these models can be just as capable of processing documents into structured JSON outputs as the more advanced large language models.&lt;/P&gt;
&lt;P&gt;For this Invoice analysis, GPT-4o with Vision provides the best balance between accuracy, confidence, speed, and cost. It is particularly adept at handling documents with complex layouts and visual elements, making it a suitable choice for extracting structured data from a diverse range of invoices.&lt;/P&gt;
&lt;/DIV&gt;
&lt;H2&gt;Evaluating AI Models for Unstructured Data&lt;/H2&gt;
&lt;H3&gt;Complex Vehicle Insurance Document&lt;/H3&gt;
&lt;DIV class="lia-table-wrapper styles_table-responsive__MW0lN"&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;
&lt;TABLE border="1" width="100%"&gt;&lt;COLGROUP&gt;&lt;COL width="16.6176%" /&gt;&lt;COL width="16.6176%" /&gt;&lt;COL width="16.6176%" /&gt;&lt;COL width="16.6176%" /&gt;&lt;COL width="16.6176%" /&gt;&lt;COL width="16.6176%" /&gt;&lt;/COLGROUP&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD height="58.8px"&gt;&lt;STRONG&gt;Model&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="58.8px"&gt;&lt;STRONG&gt;Technique&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="58.8px"&gt;
&lt;P&gt;&lt;STRONG&gt;Accuracy&lt;BR /&gt;(95th)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD height="58.8px"&gt;&lt;STRONG&gt;Confidence&lt;BR /&gt;(95th)&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="58.8px"&gt;&lt;STRONG&gt;Speed&lt;BR /&gt;&lt;/STRONG&gt;&lt;STRONG&gt;(95th)&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="58.8px"&gt;&lt;STRONG&gt;Est. Cost&lt;BR /&gt;&lt;/STRONG&gt;&lt;STRONG&gt;(1,000 pages)&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="50.8px"&gt;&lt;STRONG&gt;GPT-4o&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="50.8px"&gt;Vision + Markdown&lt;/TD&gt;
&lt;TD height="50.8px"&gt;100%&lt;/TD&gt;
&lt;TD height="50.8px"&gt;99.35%&lt;/TD&gt;
&lt;TD height="50.8px"&gt;68.93s&lt;/TD&gt;
&lt;TD height="50.8px"&gt;$13.96&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="26.8px"&gt;&lt;STRONG&gt;GPT-4o&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="26.8px"&gt;Markdown&lt;/TD&gt;
&lt;TD height="26.8px"&gt;98.25%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;89.03%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;134.85s&lt;/TD&gt;
&lt;TD height="26.8px"&gt;$12.24&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="26.8px"&gt;&lt;STRONG&gt;GPT-4o&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="26.8px"&gt;Vision&lt;/TD&gt;
&lt;TD height="26.8px"&gt;97.04%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;98.71%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;66.24s&lt;/TD&gt;
&lt;TD height="26.8px"&gt;$2.31&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="26.8px"&gt;&lt;STRONG&gt;GPT-4o Mini&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="26.8px"&gt;Markdown&lt;/TD&gt;
&lt;TD height="26.8px"&gt;93.25%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;89.04%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;99.78s&lt;/TD&gt;
&lt;TD height="26.8px"&gt;$10.12&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="26.8px"&gt;&lt;STRONG&gt;GPT-4o Mini&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="26.8px"&gt;Vision + Markdown&lt;/TD&gt;
&lt;TD height="26.8px"&gt;82.99%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;99.16%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;101.89s&lt;/TD&gt;
&lt;TD height="26.8px"&gt;$15.71&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="26.8px"&gt;&lt;STRONG&gt;GPT-4o Mini&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="26.8px"&gt;Vision&lt;/TD&gt;
&lt;TD height="26.8px"&gt;67.25%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;98.73%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;83.01s&lt;/TD&gt;
&lt;TD height="26.8px"&gt;$5.67&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="26.8px"&gt;&lt;STRONG&gt;Phi-3.5 MoE&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD height="26.8px"&gt;Markdown&lt;/TD&gt;
&lt;TD height="26.8px"&gt;64.99%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;88.28%&lt;/TD&gt;
&lt;TD height="26.8px"&gt;102.89s&lt;/TD&gt;
&lt;TD height="26.8px"&gt;$10.16&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When extracting structured data from large, unstructured documents, such as insurance policies, the combination of GPT-4o with both Vision and Markdown techniques proves to be the most effective solution. This hybrid approach leverages the visual context of the document's layout alongside the structured textual representation, resulting in the highest accuracy and confidence. It effectively handles the complexity of domain-specific language and inferred fields, providing a comprehensive and precise extraction process.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Accuracy&lt;/STRONG&gt; varies widely across models when extracting data from larger quantities of unstructured text. GPT-4o utilizing both Vision and Markdown demonstrates the effectiveness of combining visual and textual context for documents containing natural language.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Confidence&amp;nbsp;&lt;/STRONG&gt;also varies compared to the invoice analysis, with less certainty from the models when extracting from large blocks of text. However, analyzing the confidence scores of GPT-4o for each technique shows that combining them into a comprehensive approach yields higher confidence.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Speed&lt;/STRONG&gt; naturally decreases as the number of pages, complexity of layout, and quantity of text increase. These techniques for large, unstructured documents are likely to be reserved for background, batch processing rather than real-time applications.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost&lt;/STRONG&gt; varies when utilizing multiple Azure services to perform document data extraction. However, the overall cost for GPT-4o with both Vision and Markdown demonstrates how utilizing multiple AI services to achieve a goal can yield exceptional accuracy and confidence, leading to automated solutions that require minimal human intervention.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The combination of Vision and Markdown techniques can offer a highly efficient approach to structured document data extraction. However, while highly accurate, models like GPT-4o and 4o Mini are bound by their maximum context window of 128K tokens. When processing text and images in a single request, you may need to consider &lt;A class="lia-external-url" href="https://github.com/Azure-Samples/azure-ai-document-processing-samples?tab=readme-ov-file#samples" target="_blank" rel="noopener"&gt;chunking or classification techniques&lt;/A&gt; to break down large documents into smaller document boundaries.&lt;/P&gt;
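&lt;P&gt;A naive chunking strategy for staying within the context window can be sketched as below. A character budget is used here as a rough stand-in for a real token count (e.g. from a tokenizer); the linked samples demonstrate more robust chunking and classification techniques.&lt;/P&gt;

```python
def chunk_markdown(markdown: str, max_chars: int = 8000) -> list[str]:
    """Naively split Document Intelligence markdown into chunks within a size
    budget, preferring existing paragraph/section breaks. Sections larger than
    the budget are kept whole here; a real pipeline would split them further."""
    sections = markdown.split("\n\n")
    chunks, current = [], ""
    for section in sections:
        if current and len(current) + len(section) + 2 > max_chars:
            chunks.append(current)
            current = section
        else:
            current = f"{current}\n\n{section}" if current else section
    if current:
        chunks.append(current)
    return chunks

# Split an oversized policy document into budget-sized pieces.
chunks = chunk_markdown("# Policy\n\n" + "Clause text. " * 2000, max_chars=8000)
```

&lt;P&gt;Each chunk can then be extracted independently, with the results merged back against the output schema.&lt;/P&gt;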
&lt;P&gt;Phi-3.5 MoE, by contrast, falls short in accuracy here. This lower performance indicates limitations in handling large amounts of complex natural language that require understanding and inference to extract data accurately. While prompt optimizations can improve accuracy, this analysis highlights the importance of evaluating and selecting a model and technique that align with the specific demands of your document extraction scenarios.&lt;/P&gt;
&lt;H2&gt;Key Evaluation Findings&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Accuracy&lt;/STRONG&gt;: For most extraction scenarios, advanced large language models like GPT-4o consistently deliver high accuracy and confidence levels. They are particularly effective at managing complex layouts and accurately extracting data from both visual and text context.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cost-Effectiveness&lt;/STRONG&gt;: Language models with vision capabilities are highly cost-effective for large-scale processing, with GPT-4o demonstrating costs below $10 per 1,000 pages in all scenarios where Vision was used alone. However, the cost of a hybrid Vision and Markdown approach can be justified in scenarios where high precision is required.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Speed&lt;/STRONG&gt;: Execution time per document varies depending on the number of pages, layout complexity, and quantity of text. For most scenarios, using language models for document data extraction is better suited to large-scale background processing than to real-time applications.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Limitations&lt;/STRONG&gt;: Smaller models, like Phi-3.5 MoE, indicate limitations when handling complex documents with large unstructured text. However, they excel with minimal prompting for smaller, structured documents, such as invoices.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Comprehensive Techniques&lt;/STRONG&gt;: Combining both text and vision techniques provides an effective strategy for highly accurate, highly confident data extraction from documents. The approach enhances the extraction, particularly for documents that include complex layout, visual elements, and complex, domain-specific, natural language.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Recommendations for Evaluating AI Models in Document Data Extraction&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;High-Accuracy Solutions.&lt;/STRONG&gt; For solutions where accuracy is critical or visual elements must be evaluated, such as medical records, legal cases, or financial reports, explore GPT-4o with both Vision and Markdown capabilities. Its high performance in accuracy and confidence justifies the investment.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Text-Based or Self-Hosted Solutions.&amp;nbsp;&lt;/STRONG&gt;For text-based document extractions where self-hosting a model is necessary, small open language models, such as Phi-3.5 MoE, can provide high accuracy in data extraction comparable to OpenAI's GPT-4o.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Adopt Evaluation Techniques.&lt;/STRONG&gt;&amp;nbsp;Implement a rigorous evaluation methodology like the one used in this analysis. Establishing a baseline for accuracy, speed, and cost through multiple iterations and consistent prompting ensures reliable and comparable results. Regularly conduct evaluations when considering new techniques, models, prompts, and configurations. This helps in making informed decisions when opting for an approach in your specific use cases.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Read more on AI Document Intelligence&lt;/H2&gt;
&lt;P&gt;Thank you for taking the time to read this article. We are sharing our insights for ISVs and Startups that enable document intelligence in their AI-powered solutions, based on real-world challenges we encounter. We invite you to continue your learning through our additional insights in this series.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;DIV class="h1-like-title"&gt;&lt;STRONG&gt;&lt;A title="Optimizing Data Extraction Accuracy with Custom Models in Azure AI Document Intelligence" href="https://techcommunity.microsoft.com/t5/azure-for-isv-and-startups/optimizing-data-extraction-accuracy-with-custom-models-in-azure/ba-p/4095089" target="_self"&gt;Optimizing Data Extraction Accuracy with Custom Models in Azure AI Document Intelligence&lt;/A&gt;&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Discover how to enhance data extraction accuracy with Azure AI Document Intelligence by tailoring models to your unique document structures.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="h1-like-title"&gt;&lt;STRONG&gt;&lt;A title="Using Azure AI Document Intelligence and Azure OpenAI to extract structured data from documents" href="https://techcommunity.microsoft.com/t5/azure-for-isv-and-startups/using-azure-ai-document-intelligence-and-azure-openai-to-extract/ba-p/4107746" target="_self"&gt;Using Azure AI Document Intelligence and Azure OpenAI to extract structured data from documents&lt;/A&gt;&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Discover how Azure AI Document Intelligence and Azure OpenAI efficiently extract structured data from documents, streamlining document processing workflows for AI-powered solutions.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A title="Using Structured Outputs in Azure OpenAI’s GPT-4o for consistent document data processing" href="https://techcommunity.microsoft.com/t5/azure-for-isv-and-startups/using-structured-outputs-in-azure-openai-s-gpt-4o-for-consistent/ba-p/4261737" target="_self"&gt;Using Structured Outputs in Azure OpenAI’s GPT-4o for consistent document data processing&lt;/A&gt;&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Discover how to&amp;nbsp;leverage GPT-4o’s Structured Outputs to ensure reliable, schema-compliant document data processing.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Further Reading&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://azure.microsoft.com/en-us/products/phi/" target="_blank" rel="noopener"&gt;Phi Open Models - Small Language Models | Microsoft Azure&lt;/A&gt;&lt;BR /&gt;
&lt;UL&gt;
&lt;LI&gt;Learn more about the Phi-3 small language models and their potential, including running effectively in offline environments.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions" target="_blank" rel="noopener"&gt;Prompt engineering techniques with Azure OpenAI | Microsoft Learn&lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;Discover how to improve your prompting techniques with Azure OpenAI to maximize the accuracy of your document data extraction.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/Azure-Samples/azure-ai-document-processing-samples?tab=readme-ov-file#samples" target="_blank" rel="noopener"&gt;Samples demonstrating techniques for processing documents with Azure AI | GitHub&lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;A collection of samples that demonstrate both the document data extraction techniques used in this analysis, as well as techniques for classification.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 25 Nov 2024 10:49:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/evaluating-the-quality-of-ai-document-data-extraction-with-small/m-p/4157719#M215</guid>
      <dc:creator>james_croft</dc:creator>
      <dc:date>2024-11-25T10:49:20Z</dc:date>
    </item>
    <item>
      <title>AKS Edge Essentials: A Lightweight “Easy Button” for Linux Containers on Windows Hosts</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/aks-edge-essentials-a-lightweight-easy-button-for-linux/m-p/4136011#M214</link>
      <description>&lt;P&gt;[Note: This post was revised on November 26, 2024.&amp;nbsp; The change was in the EFLOW section due to product direction changes.]&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hello, Mike Bazarewsky writing again, now on our shiny new ISV blog!&amp;nbsp; My topic today is on a product that hasn’t gotten a huge amount of press, but actually brings some really nice capabilities to the table, especially with respect to IoT scenarios as we look to the future with Azure IoT Operations.&amp;nbsp; That product is &lt;A href="https://learn.microsoft.com/azure/aks/hybrid/aks-edge-overview" target="_blank" rel="noopener"&gt;AKS Edge Essentials&lt;/A&gt;, or AKS-EE for short.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What did Microsoft have before AKS-EE?&lt;/H2&gt;
&lt;P&gt;AKS-EE is intended to be the “easy button” for running Linux-based and/or Windows-based containers on a Windows host, including a &lt;A href="https://learn.microsoft.com/windows/iot/iot-enterprise/getting_started" target="_blank" rel="noopener"&gt;Windows IoT Enterprise&lt;/A&gt; host.&amp;nbsp; It’s been possible to run Docker-hosted containers on Windows for a long time, and it’s even been possible to run orchestrators including Kubernetes on Windows for some time now.&amp;nbsp; There’s even &lt;A href="https://learn.microsoft.com/virtualization/windowscontainers/kubernetes/getting-started-kubernetes-windows" target="_blank" rel="noopener"&gt;formal documentation&lt;/A&gt; on how to do so in Microsoft Learn.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Meanwhile, in parallel, and specific to IoT use cases, Microsoft offers &lt;A href="https://learn.microsoft.com/azure/iot-edge/iot-edge-for-linux-on-windows?view=iotedge-1.4" target="_blank" rel="noopener"&gt;Azure IoT Edge for Linux on Windows&lt;/A&gt;, or EFLOW for short.&amp;nbsp; EFLOW offers the Azure IoT Edge container orchestrator on a Windows host by leveraging a Linux virtual machine.&amp;nbsp; That virtual machine runs a customized deployment of &lt;A href="https://github.com/microsoft/CBL-Mariner" target="_blank" rel="noopener"&gt;CBL-Mariner&lt;/A&gt;, Microsoft’s first-party Linux distribution designed for secure, cloud-focused use cases.&amp;nbsp; As an end-to-end Microsoft offering on a Microsoft platform, EFLOW is updated through Microsoft Update and, as such, “plays nice” with the rest of the Windows ecosystem, bringing the benefits of that ecosystem while allowing targeted Linux containers to run with a limited amount of “ceremony”.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What does AKS-EE bring to the table?&lt;/H2&gt;
&lt;P&gt;Taking this information all into account, it’s reasonable to ask “What are the gaps?&amp;nbsp; Why would it make sense to bring another product into the space?”&amp;nbsp; The answer is two-fold:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;For some ISVs, particularly those coming from traditional development models (e.g. IoT developers, web service developers), the move to “cloud native” technologies such as containers is a substantial shift on its own, before worrying about deployment and management of an orchestrator.&amp;nbsp; However, an orchestrator is still something those ISVs need to be able to get to scalability and observability as they work through their journey of “modernization” around containers.&lt;/LI&gt;
&lt;LI&gt;EFLOW works very, very well for its intended target, which is Azure IoT Edge.&amp;nbsp; However, that is a specialized use case that does not generalize well to general application workloads.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;There is a hidden point here as well.&amp;nbsp; Windows containers are a popular option in many organizations, but Linux containers are more common.&amp;nbsp; At the same time, many enterprises (and thus, ISV customers) prefer the management, hardware support, and long-term OS support paths that Windows offers.&amp;nbsp; Although technologies such as Windows container hosting, Windows Subsystem for Linux, and Hyper-V allow for running Linux containers on a Windows host, they have different levels of complexity and management overhead, and in some situations, they are not practical.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The end result of all of this is that there is a need in the marketplace for a low-impact, easily-deployed, easily-updated container hosting solution for Linux containers on Windows hosts that supports orchestration.&amp;nbsp; This is especially true as we look at a solution like &lt;A href="https://azure.microsoft.com/products/iot-operations/" target="_blank" rel="noopener"&gt;Azure IoT Operations&lt;/A&gt;, which is the next-generation, Kubernetes-centric Azure IoT platform, but is also true for customers looking to move from the simplistic orchestration offered by the EFLOW offering to the more sophisticated orchestration offered by Kubernetes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Besides bringing that to the table, AKS-EE builds on top of the standard k3s or k8s implementations, which means that popular Kubernetes management tools such as &lt;A href="https://k9scli.io/" target="_blank" rel="noopener"&gt;k9s&lt;/A&gt; can be used.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It can be &lt;A href="https://learn.microsoft.com/azure/aks/hybrid/aks-edge-howto-connect-to-arc" target="_blank" rel="noopener"&gt;Azure Arc enabled&lt;/A&gt;, allowing centralized management of the solution in the Azure Portal, Azure PowerShell, or Azure CLI.&amp;nbsp; Azure Arc supports this through an &lt;EM&gt;outgoing&lt;/EM&gt; connection from the cluster to the Azure infrastructure, which means it’s possible to remotely manage the environment, including deploying workloads, collecting telemetry and metrics, and so on, without needing incoming access to the host or the cluster.&amp;nbsp; And, because it’s &lt;A href="https://learn.microsoft.com/azure/azure-arc/servers/prerequisites" target="_blank" rel="noopener"&gt;possible&lt;/A&gt; to manage Windows IoT Enterprise using Azure Arc, even the host can be connected to remotely, with centrally managed telemetry and updates (including AKS-EE through Microsoft Update).&amp;nbsp; This means that it’s possible to have an end-to-end centrally managed solution across a fleet of deployment locations, and it means an ISV can offer “management as a service”.&amp;nbsp; An IoT ISV can even offer packaged hardware offerings with Windows IoT Enterprise, AKS-EE, and their workload, all centrally managed through Azure Arc, which is an extremely compelling and powerful concept!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What if I am an IoT Edge user using EFLOW today?&lt;/H2&gt;
&lt;P&gt;As you might be able to determine from the way I’ve presented AKS-EE, one possible way to think about AKS-EE is as a direct replacement for EFLOW in IoT Edge scenarios.&amp;nbsp; If you're looking at moving from EFLOW to a Kubernetes-based solution, AKS-EE is a great option to explore!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Hopefully, this short post gives you a better understanding of the “why” of AKS-EE as an offering and how it relates to some other offerings in the Microsoft space.&amp;nbsp; If you’re looking to evaluate AKS-EE, the next step would be to review the &lt;A href="https://learn.microsoft.com/azure/aks/hybrid/aks-edge-quickstart" target="_blank" rel="noopener"&gt;Quickstart guide&lt;/A&gt; to get started!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Looking forward, if you are interested in production AKS-EE architecture, &lt;A href="https://learn.microsoft.com/shows/azure-videos/fasttrack-for-azure-isvs-and-startups" target="_blank" rel="noopener"&gt;FastTrack ISV&lt;/A&gt; and &lt;A href="https://learn.microsoft.com/shows/azure-videos/fasttrack-for-azure" target="_blank" rel="noopener"&gt;FastTrack for Azure (Mainstream)&lt;/A&gt; have worked with multiple AKS-EE customers at this point, from single host deployments to multi-host scale-out deployments, including leveraging both the Linux and the Windows node capabilities of AKS-EE and leveraging the preview GPU support in the product.&amp;nbsp; Take a look at those sites to learn more about how we can help you with derisking your AKS-EE deployment, or help you decide if AKS-EE is in fact the right tool for you!&lt;/P&gt;</description>
      <pubDate>Tue, 26 Nov 2024 21:05:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/aks-edge-essentials-a-lightweight-easy-button-for-linux/m-p/4136011#M214</guid>
      <dc:creator>MikeBazMSFT</dc:creator>
      <dc:date>2024-11-26T21:05:49Z</dc:date>
    </item>
    <item>
      <title>How to deploy a production-ready AKS cluster with Terraform verified module</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/how-to-deploy-a-production-ready-aks-cluster-with-terraform/m-p/4122013#M213</link>
      <description>&lt;P&gt;Do you want to use Terraform to deploy an Azure Kubernetes Service (AKS) cluster that meets the production standards? We have a solution for you!&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We recently created a Terraform verified module for AKS that allows customers to deploy a production standard AKS cluster along with a Virtual Network and Azure container registry. It provisions an environment sufficient for most production deployments for AKS.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The module is available on the &lt;A href="https://registry.terraform.io/modules/Azure/avm-ptn-aks-production/azurerm/latest" target="_blank" rel="noopener"&gt;Terraform registry&lt;/A&gt;.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You don't have to deal with the complexity of setting up an AKS cluster from the ground up. The module offers opinionated choices and reasonable default settings to deploy an AKS cluster ready for production.&lt;/P&gt;
&lt;H2&gt;What are Azure Verified Modules?&lt;/H2&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;Azure Verified Modules enable and accelerate consistent solution development and delivery of cloud-native or migrated applications and their supporting infrastructure by codifying&amp;nbsp;&lt;A title="Microsoft guidance (WAF)" href="https://learn.microsoft.com/en-us/azure/well-architected/" target="_blank" rel="noopener"&gt;Microsoft guidance (WAF)&lt;/A&gt;, with best practice configurations.&amp;nbsp;&lt;/EM&gt;For more information, please visit &lt;A title="Azure Verified Modules" href="https://azure.github.io/Azure-Verified-Modules/" target="_blank" rel="noopener"&gt;Azure Verified Modules&lt;/A&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;What does the module do?&lt;/H2&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;The module provisions the following resources:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Azure Kubernetes Service (AKS) cluster for production workloads&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Virtual Network&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Azure Container Registry&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;To view the full list of resources and their configurations, please &lt;A href="https://registry.terraform.io/modules/Azure/avm-ptn-aks-production/azurerm/latest" target="_blank" rel="noopener"&gt;visit the module page&lt;/A&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;How to use the module&lt;/H2&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;To use the module, you need to have Terraform installed on your machine. If you don't have Terraform installed, you can download it from&amp;nbsp;&lt;A href="https://www.terraform.io/downloads.html" target="_blank" rel="noopener"&gt;their website here&lt;/A&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Once you have Terraform installed, you can create a new Terraform configuration file and add the following code:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;module "avm-ptn-aks-production"  { 
  source  = "Azure/avm-ptn-aks-production/azurerm" 
  version = "0.1.0" 
  location = &amp;lt;region&amp;gt; 
  name = &amp;lt;cluster-name&amp;gt;  
  resource_group_name = &amp;lt;rg-name&amp;gt; 
  rbac_aad_admin_group_object_ids = ["11111111-2222-3333-4444-555555555555"]  
}&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;To understand more about the variables and options available, have a look at &lt;A href="https://github.com/Azure/terraform-azurerm-avm-ptn-aks-production?tab=readme-ov-file#required-inputs" target="_blank" rel="noopener"&gt;the GitHub README&lt;/A&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Running the module will provision the resources in your Azure subscription. You can view the resources in the Azure portal.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;How we built the module&lt;/H2&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;This module is very opinionated and forces the user into a design that is ready for production. From the experience of supporting users deploying AKS with Terraform with the module "&lt;STRONG&gt;&lt;A title="Azure/aks/azurerm" href="https://registry.terraform.io/modules/Azure/aks/azurerm/latest" target="_blank" rel="noopener"&gt;Azure/aks/azurerm&lt;/A&gt;&lt;/STRONG&gt;", we proposed a much simpler module to help customers deploy scalable and reliable clusters.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Here some of the important opinionated choices we made.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN data-contrast="auto"&gt;Create user zonal node pools in all Availability Zones&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;When &lt;A href="https://learn.microsoft.com/en-us/azure/aks/availability-zones" target="_blank" rel="noopener"&gt;implementing availability zones&lt;/A&gt; with the cluster autoscaler, we recommend using a single node pool for each zone. The use of the "&lt;STRONG&gt;balance_similar_node_groups&lt;/STRONG&gt;" parameter enables a balanced distribution of nodes across zones for your workloads during scale up operations. When this approach isn't implemented, scale down operations can disrupt the balance of nodes across zones.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;Leverage AKS automatic upgrades to keep the cluster secure and supported&lt;/H3&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;AKS has a&amp;nbsp;&lt;A title="fast release calendar" href="https://learn.microsoft.com/en-us/azure/aks/supported-kubernetes-versions" target="_blank" rel="noopener"&gt;fast release calendar.&lt;/A&gt;&amp;nbsp;It is important to keep the cluster on a supported version, and to get security patches quickly. We enforce the "&lt;STRONG&gt;patch&lt;/STRONG&gt;" automatic channel upgrade and the node image "&lt;STRONG&gt;node_os_channel_upgrade&lt;/STRONG&gt;" to keep the cluster up to date. It is a user's responsibility to plan Kubernetes minor version upgrades.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;Use Azure CNI Overlay for optimal and simple IP address space management&lt;/H3&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;There are many options when it comes to AKS networking. In most customer scenarios, &lt;A href="https://learn.microsoft.com/en-us/azure/aks/azure-cni-overlay?tabs=kubectl" target="_blank" rel="noopener"&gt;Azure CNI Overlay&lt;/A&gt; is the ideal solution. It is easy to plan IP address usage and it provides plenty of options to grow the cluster.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN data-contrast="auto"&gt;Use Private Kubernetes API endpoint and Microsoft Entra authentication for enhanced security&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;We use a layered security approach to protect your Kubernetes API from being hacked. We keep the Kubernetes API safe by putting it in a private network, and we allow Microsoft Entra identities to authenticate (optional: and we turn off local accounts).&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;Bring your own network and force a User Assigned identity&lt;/H3&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Customers scenarios often involve more than one single AKS cluster. The Azure VNet where these clusters exist should be part of a resource group controlled by the customer. Reusing the same User Assigned identity across a fleet of clusters, simplifies the role assignment operations. We wrote this module considering the integration in a real-world customer subscription, rather than considering the AKS cluster as a single isolated entity.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;Don't use any preview features&lt;/H3&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;To prevent breaking changes during production, we avoided the use of any preview features.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;Development of the module from a Terraform perspective&lt;/H2&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;The Azure Verified Module team worked to create effective pipelines for module development. For initial development you will need to fork an already prepared template and use that to develop your module. The template is&amp;nbsp;&lt;A href="https://github.com/Azure/terraform-azurerm-avm-template" target="_blank" rel="noopener"&gt;available on GitHub here&lt;/A&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;This ensures that all module developers are following the same standards and best practices. It also makes it easier to review and approve modules for publication and make any updates to the templates.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;The pipeline has in built checks to ensure that the module is following the best practices and standards. It provides a Docker container with all the necessary tools to run the checks locally and as well on GitHub Actions. The pipeline runs the following checks:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Checks &lt;A title="linting standards" href="https://github.com/terraform-linters/tflint-ruleset-terraform/blob/v0.5.0/docs/rules/README.md" target="_blank" rel="noopener"&gt;linting standards&lt;/A&gt; that are set and&amp;nbsp;&lt;A title="best practices set by the AVM community" href="https://azure.github.io/Azure-Verified-Modules/specs/terraform/" target="_blank" rel="noopener"&gt;best practices set by the AVM community&lt;/A&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Validates that the Terraform code is valid using "&lt;STRONG&gt;terraform validate&lt;/STRONG&gt;".&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Run checks to update the readme if any changes are detected so that you don’t have to manually update them.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;The e2e tests only need you to give examples of the module's functionality and set up a test environment using GitHub to start them. You can see the &lt;A href="https://azure.github.io/Azure-Verified-Modules/contributing/terraform/terraform-contribution-flow/#42-run-e2e-tests" target="_blank" rel="noopener"&gt;steps on how to do this here&lt;/A&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;For an end-to-end review of the contribution flow and how to setup your module for development using AVM scripts have a look at &lt;A href="https://azure.github.io/Azure-Verified-Modules/contributing/terraform/" target="_blank" rel="noopener"&gt;the Terraform Contribution Guide&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;Lessons learned&lt;/H3&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;The AVM team provides the initial module template and GitHub actions pipeline to develop a new module. Using those resources and attending their office hours meeting enabled us to move faster. When building a new Terraform module for Azure, following the procedure to implement an AVM module saves you a lot of time, ensuring quality and avoiding common mistakes.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;It adds a lot of value to join the AVM team community calls or look out for the changes mentioned in &lt;A title="AVM GitHub repo" href="https://github.com/Azure/Azure-Verified-Modules" target="_blank" rel="noopener"&gt;AVM GitHub repo&lt;/A&gt;, to get updates on the latest changes, and to ask any questions you may have.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;When writing the design document, before starting development, make sure you address all edge cases. For example, not all Azure regions have availability zones, and the module must work in all Azure regions. Dealing with the details before starting the implementation helps to find good solutions without having to make bigger changes in the implementation phase.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;How can you contribute back&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="8" data-aria-level="1"&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Have a look at the &lt;A title="AVM team blog" href="https://techcommunity.microsoft.com/t5/azure-tools-blog/bg-p/AzureToolsBlog" target="_blank" rel="noopener"&gt;AVM team blog&lt;/A&gt; for updates from the AVM team.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="9" data-aria-level="1"&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Help build a module.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="9" data-aria-level="1"&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;Share your learnings with the AVM team.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="9" data-aria-level="1"&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;A href="https://azure.github.io/Azure-Verified-Modules/resources/community/" target="_blank" rel="noopener"&gt;Join the regular external community calls&lt;/A&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;If you face any challenges, please raise an issue on the repo - &lt;/SPAN&gt;&lt;A href="https://github.com/Azure/terraform-azurerm-avm-ptn-aks-production" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/terraform-azurerm-avm-ptn-aks-production&lt;/SPAN&gt;&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: large;"&gt;&lt;SPAN data-contrast="auto"&gt;We would also like to thank&amp;nbsp;&lt;STRONG&gt;Zijie He&lt;/STRONG&gt;&amp;nbsp;and&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Jingwei Wang &lt;/STRONG&gt;for their huge contributions and collaboration whilst building this module.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 06 Nov 2024 12:12:21 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/how-to-deploy-a-production-ready-aks-cluster-with-terraform/m-p/4122013#M213</guid>
      <dc:creator>NellyKiboi</dc:creator>
      <dc:date>2024-11-06T12:12:21Z</dc:date>
    </item>
    <item>
      <title>04 Azure Machine Learning and Custom Models</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/04-azure-machine-learning-and-custom-models/m-p/4089273#M212</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Azure Machine Learning and Custom Models&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning (AML) is a powerful platform that caters to the needs of customers who already have their own models or are using popular open-source frameworks such as scikit-learn, PyTorch, and XGBoost. In such cases, customers are primarily focused on efficiently managing and tracking their models, along with centralized compute capabilities to support their data science scenarios. AML offers a comprehensive suite of features to meet these requirements, including compute resources, MLflow integration for model tracking, a Model Registry for organized management, a Feature Store for sharing feature engineering artifacts, and the ability to deploy custom models to online or batch endpoints. With AML, customers can streamline their model management processes and unleash the full potential of their machine learning workflows.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Machine Learning Compute Instance&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning offers a fully configured and managed development environment through compute instances. These instances not only serve as your dedicated development and testing environments but can also be utilized as training compute targets. With the ability to run multiple jobs in parallel and a convenient job queue, compute instances provide the flexibility and power you need to streamline your machine learning workflows.&lt;BR /&gt;&lt;BR /&gt;It's important to note that compute instances are designed for individual use and cannot be shared with other users in your workspace. A compute instance only needs to be created once for your workspace; you can then reuse it as a development workstation or as a compute target for training. Furthermore, you have the option to attach multiple compute instances to your workspace, providing even more flexibility in your development setup.&lt;BR /&gt;&lt;BR /&gt;When it comes to resource allocation, the dedicated cores per region per VM family quota and the total regional quota apply to compute instance creation. Stopping a compute instance does not release quota, so that you can easily restart it when needed. Additionally, please note that it's not possible to change the virtual machine size of a compute instance once it has been created.&lt;BR /&gt;&lt;BR /&gt;Compute instances offer a convenient and powerful development environment in the cloud. Beyond development, testing, and training, you also have access to a range of popular tools: from the compute instance, you can seamlessly access JupyterLab, Jupyter Notebook, and even VS Code, giving you flexibility and choice in your coding experience. Whether you prefer the interactive and collaborative nature of Jupyter environments or the robust features of VS Code, Azure Machine Learning compute instances empower you to work efficiently and optimize your productivity.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-compute-instance?view=azureml-api-2&amp;amp;tabs=python" target="_blank" rel="noopener"&gt;Manage a compute instance - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks?view=azureml-api-2" target="_blank" rel="noopener"&gt;Run Jupyter notebooks in your workspace - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Machine Learning Compute Cluster&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In addition to compute instances, Azure Machine Learning offers compute clusters, which provide a managed-compute infrastructure for creating single or multi-node compute environments. Unlike compute instances, compute clusters can be shared with other users in your workspace, promoting collaboration and resource optimization. The compute cluster dynamically scales up as jobs are submitted, ensuring efficient resource allocation. Moreover, compute clusters can be deployed within an Azure Virtual Network, offering enhanced security and control over your workloads. With the option for no public IP deployment and support for containerized environments, compute clusters simplify the execution of jobs while packaging your model dependencies within a Docker container. By leveraging Azure Machine Learning compute clusters, you can effortlessly scale your workloads, securely execute jobs, and streamline your machine learning workflows.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-create-attach-compute-cluster?view=azureml-api-2&amp;amp;tabs=python" target="_blank" rel="noopener"&gt;Create compute clusters - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Machine Learning Serverless Compute&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning introduces a new compute target type, serverless compute, which revolutionizes the training process. With serverless compute, machine learning professionals can effortlessly submit their training jobs and let Azure Machine Learning handle the rest. As a fully managed and on-demand compute option, serverless compute takes care of creating, scaling, and managing the compute infrastructure, freeing you from the complexities of compute setup.&lt;BR /&gt;&lt;BR /&gt;By leveraging serverless compute, machine learning professionals can focus on their core expertise of building machine learning models, without the need to worry about compute infrastructure. You can easily specify the resources required for each job, while Azure Machine Learning takes care of managing the compute infrastructure and provides managed network isolation, reducing your workload.&lt;BR /&gt;&lt;BR /&gt;Enterprises can optimize costs by specifying the optimal resources for each job, while IT admins can maintain control by setting cores quota at the subscription and workspace level, and applying Azure policies.&lt;BR /&gt;&lt;BR /&gt;Serverless compute is not limited to model training. It can be used for various tasks, including fine-tuning models in the model catalog, running jobs from Azure Machine Learning studio, SDK, and CLI, building environment images, and enabling responsible AI dashboard scenarios. Serverless jobs consume the same quota as Azure Machine Learning compute, providing flexibility in resource allocation. You can choose between standard (dedicated) or spot (low priority) VMs, and serverless jobs support both managed identity and user identity. The billing model for serverless compute aligns with Azure Machine Learning compute.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&amp;amp;tabs=python" target="_blank" rel="noopener"&gt;Model training on serverless compute - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Machine Learning Custom Model Training &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning empowers you to train custom models with ease, giving you the flexibility to choose between running them on your personal compute or utilizing an Azure Machine Learning compute instance or compute cluster. The process remains the same as if you were working in an isolated environment, but with Azure Machine Learning you have the added advantage of tracking with MLflow, running hyperparameter tuning, and training and testing on scalable AML compute.&lt;BR /&gt;&lt;BR /&gt;To guide you through the process of training scikit-learn, TensorFlow, Keras, and PyTorch models, we have curated a collection of helpful resources. These links provide step-by-step instructions and best practices to ensure your custom model training journey is smooth and successful. Whether you are a seasoned data scientist or a machine learning enthusiast, Azure Machine Learning simplifies the process, allowing you to focus on refining your models and achieving optimal performance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-scikit-learn?view=azureml-api-2" target="_blank" rel="noopener"&gt;Train scikit-learn machine learning models (v2) - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-tensorflow?view=azureml-api-2" target="_blank" rel="noopener"&gt;Train and deploy a TensorFlow model (SDK v2) - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-keras?view=azureml-api-2" target="_blank" rel="noopener"&gt;Train deep learning Keras models (SDK v2) - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-pytorch?view=azureml-api-2" target="_blank" rel="noopener"&gt;Train deep learning PyTorch models (SDK v2) - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?view=azureml-api-2" target="_blank" rel="noopener"&gt;Hyperparameter tuning a model (v2) - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-mlflow-models?view=azureml-api-2&amp;amp;tabs=azureml#deploying-mlflow-models-vs-custom-models" target="_blank" rel="noopener"&gt;Guidelines for deploying MLflow models - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Machine Learning Model Registry &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;After your model is trained, Azure Machine Learning seamlessly integrates with MLflow, providing users with a powerful tool for model management. By leveraging MLflow, you can easily support the entire model lifecycle, making it a convenient choice for those already familiar with the MLflow client.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-models-mlflow?view=azureml-api-2" target="_blank" rel="noopener"&gt;Manage models registries in Azure Machine Learning with MLflow - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Deploy a Custom Model &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;With Azure Machine Learning, you can seamlessly deploy your MLflow model to an online endpoint, enabling real-time inference. The beauty of this deployment approach is its no-code characteristic, eliminating the need to specify a scoring script or environment. When deploying your MLflow model to an online endpoint, Azure Machine Learning dynamically installs the necessary Python packages from the conda.yaml file during container runtime. Additionally, Azure Machine Learning provides a curated environment and MLflow base image, which includes essential components such as azureml-inference-server-http and mlflow-skinny. To facilitate inference, a scoring script is also included.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-mlflow-models-online-endpoints?view=azureml-api-2&amp;amp;tabs=cli" target="_blank" rel="noopener"&gt;Deploy MLflow models to real-time endpoints - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints.ipynb" target="_self"&gt;Deploy MLflow models to Online Endpoints&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/sqlshep/PredictiveMaintenance/tree/main/XGBoost" target="_self"&gt;Predictive Maintenance with MLFlow(XGBosot)&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/sqlshep/PredictiveMaintenance/tree/main/LSTM" target="_self"&gt;Predictive Maintenance with MLFlow(PyTorch)&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Principal author:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Shep Sheppard&amp;nbsp;&lt;/SPAN&gt;| Senior Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Other contributors:&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="p1"&gt;Yoav Dobrin Principal Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;LI class="p1"&gt;Jones Jebaraj | Senior Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;LI class="p1"&gt;Olga Molocenco-Ciureanu | Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2024 14:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/04-azure-machine-learning-and-custom-models/m-p/4089273#M212</guid>
      <dc:creator>shepsheppard</dc:creator>
      <dc:date>2024-05-01T14:00:00Z</dc:date>
    </item>
    <item>
      <title>03 Azure Machine Learning and OSS Model Fine tuning</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/03-azure-machine-learning-and-oss-model-fine-tuning/m-p/4089261#M211</link>
      <description>&lt;P&gt;Hyperparameter optimization, also known as hyperparameter tuning, is a fundamental challenge in the field of machine learning. It involves the selection of an optimal set of hyperparameters for a given learning algorithm. Hyperparameters are parameters that dictate the behavior of the learning process, while other parameters, such as node weights, are learned from the data.&lt;BR /&gt;&lt;BR /&gt;A machine learning model can often require different constraints, weights, or learning rates to effectively capture diverse data patterns. These adjustable measures, known as hyperparameters, must be carefully tuned to ensure that the model can successfully solve the machine learning problem at hand. Hyperparameter optimization seeks to find a combination of hyperparameters that yields an optimal model, minimizing a predefined loss function on independent data.&lt;BR /&gt;&lt;BR /&gt;To achieve this, an objective function is utilized, which takes a set of hyperparameters as input and returns the corresponding loss. The goal is to find the set of hyperparameters that maximizes the generalization performance of the model. Cross-validation is commonly employed to estimate this performance and aid in the selection of optimal hyperparameter values. By maximizing the generalization performance, hyperparameter optimization plays a crucial role in enhancing the overall effectiveness and accuracy of machine learning models.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this post, we will cover the open-source tools and Azure Machine Learning tools for hyperparameter tuning.&amp;nbsp; There are three main techniques: Grid Search, Random Search, and Bayesian Search, along with gradient-based optimization, which is more commonly used for neural networks.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Grid Search &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In the realm of hyperparameter optimization, the conventional approach has been to employ grid search or parameter sweep. This technique involves exhaustively exploring a predetermined subset of hyperparameters for a learning algorithm. To guide the grid search algorithm, a performance metric is selected, often determined through cross-validation on the training set or evaluation on a dedicated validation set.&lt;BR /&gt;&lt;BR /&gt;Grid search operates by systematically testing different combinations of hyperparameters within the defined subset. This method, however, can become computationally expensive, especially when dealing with a large number of hyperparameters or a wide range of possible values. Despite its limitations, grid search remains widely used due to its simplicity and interpretability.&lt;BR /&gt;&lt;BR /&gt;During the grid search process, various performance metrics are measured for each combination of hyperparameters. These metrics aid in assessing the model's effectiveness and allow for the identification of hyperparameter configurations that lead to optimal performance. By evaluating the model's performance on either the training set or a separate validation set, grid search facilitates the selection of the most appropriate hyperparameter values.&lt;BR /&gt;&lt;BR /&gt;While grid search has proven to be a valuable technique, alternative methods have emerged to address its shortcomings, such as the computationally efficient and automated approaches of Bayesian optimization and random search. These advanced methods provide more sophisticated ways to explore the hyperparameter space and discover optimal configurations, revolutionizing the field of hyperparameter optimization.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html" target="_blank"&gt;sklearn.model_selection.GridSearchCV — scikit-learn 1.4.2 documentation&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Random Search&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Random Search offers an alternative to the exhaustive enumeration of all possible combinations of hyperparameters by randomly selecting them. This approach can be applied not only to discrete settings but also to continuous and mixed spaces, providing greater flexibility. Random Search has been found to outperform Grid Search, particularly in scenarios where only a small number of hyperparameters significantly impact the final performance of the machine learning algorithm.&lt;BR /&gt;&lt;BR /&gt;In cases where the optimization problem exhibits a low intrinsic dimensionality, Random Search proves to be particularly effective. This refers to situations where the hyperparameters' interdependencies are limited, allowing for a more efficient exploration of the hyperparameter space. Moreover, Random Search lends itself to embarrassingly parallel implementation, meaning that it can be easily distributed across multiple computing resources for faster processing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;One of the advantages of Random Search is its ability to incorporate prior knowledge by specifying the distribution from which to sample hyperparameters. This enables domain experts to guide the search process based on their understanding of the problem at hand. Despite its simplicity, Random Search remains a significant baseline against which new hyperparameter optimization methods can be compared.&lt;BR /&gt;&lt;BR /&gt;While Random Search has been instrumental in advancing hyperparameter optimization, it is important to note that other sophisticated techniques, such as Bayesian optimization, have emerged as promising alternatives. These methods leverage probabilistic models to intelligently explore the hyperparameter space and efficiently find optimal configurations. The continuous development of new approaches continues to enhance the field of hyperparameter optimization, offering exciting opportunities for improving the performance and efficiency of machine learning models.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html" target="_blank"&gt;sklearn.model_selection.RandomizedSearchCV — scikit-learn 1.4.2 documentation&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Bayesian Search&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Bayesian optimization is a powerful method for globally optimizing noisy black-box functions. When applied to hyperparameter optimization, Bayesian optimization constructs a probabilistic model that captures the relationship between hyperparameter values and the objective function evaluated on a validation set. Through an iterative process, Bayesian optimization intelligently selects hyperparameter configurations based on the current model, evaluates their performance, and updates the model to gather valuable information about the function and, more importantly, the location of the optimum.&lt;BR /&gt;&lt;BR /&gt;The key idea behind Bayesian optimization is to strike a balance between exploration and exploitation. Exploration involves selecting hyperparameters that yield uncertain outcomes, while exploitation focuses on hyperparameters that are expected to be close to the optimum. By carefully navigating this trade-off, Bayesian optimization effectively explores the hyperparameter space, gradually narrowing down the search to regions with higher potential for optimal performance.&lt;BR /&gt;&lt;BR /&gt;In practice, Bayesian optimization has demonstrated superior performance compared to traditional methods such as grid search and random search. This advantage stems from its ability to reason about the quality of experiments before actually running them. By leveraging the probabilistic model, Bayesian optimization can make informed decisions about which hyperparameter configurations are most likely to lead to better results, thereby reducing the number of evaluations required.&lt;BR /&gt;&lt;BR /&gt;The efficiency and effectiveness of Bayesian optimization have been widely observed in various domains. Researchers and practitioners have embraced this approach due to its ability to achieve better outcomes with fewer evaluations, making it an invaluable tool for hyperparameter optimization.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://scikit-optimize.github.io/stable/modules/generated/skopt.BayesSearchCV.html" target="_blank"&gt;skopt.BayesSearchCV — scikit-optimize 0.8.1 documentation&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://en.wikipedia.org/wiki/Hyperparameter_optimization" target="_blank"&gt;Hyperparameter optimization - Wikipedia&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Having familiarized ourselves with the foundational aspects of hyperparameter tuning, a natural inquiry arises: Does Azure Machine Learning (AML) provide support for hyperparameter tuning? The answer to this question is undoubtedly affirmative, as AML offers comprehensive hyperparameter tuning capabilities. Detailed documentation on this topic can be accessed in the link below, providing users with valuable guidance and information.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?view=azureml-api-2" target="_blank"&gt;Hyperparameter tuning a model (v2) - Azure Machine Learning | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;AML, through its Python SDK V2, facilitates hyperparameter tuning by offering three distinct algorithms: Grid, Random, and Bayesian. These algorithms empower users to effectively explore the hyperparameter search space and optimize their machine learning models. To leverage the hyperparameter tuning capabilities in AML, the following essential steps can be followed:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Define the parameter search space for your trial: Specify the range and feasible values for each hyperparameter that will undergo tuning.&lt;/LI&gt;
&lt;LI&gt;Specify the sampling algorithm for your sweep job: Select the desired algorithm that will be employed to sample hyperparameter configurations during the tuning process.&lt;/LI&gt;
&lt;LI&gt;Specify the objective to optimize: Define the performance metric or objective function that will be utilized to evaluate and compare the various hyperparameter configurations.&lt;/LI&gt;
&lt;LI&gt;Specify an early termination policy for low-performing jobs: Establish criteria that will automatically terminate underperforming jobs during the hyperparameter tuning process.&lt;/LI&gt;
&lt;LI&gt;Define limits for the sweep job: Set the maximum number of iterations or allocate resources according to your requirements for the hyperparameter tuning experiment.&lt;/LI&gt;
&lt;LI&gt;Launch an experiment with the defined configuration: Initiate the hyperparameter tuning experiment by utilizing the specified settings and parameters.&lt;/LI&gt;
&lt;LI&gt;Visualize the training jobs: Monitor and analyze the progress and outcomes of the hyperparameter tuning experiment, including the performance of individual training jobs.&lt;/LI&gt;
&lt;LI&gt;Select the best configuration for your model: Upon completion of the hyperparameter tuning experiment, identify the hyperparameter configuration that yielded the most favorable performance, and incorporate it into your machine learning model.&lt;/LI&gt;
&lt;/OL&gt;
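&lt;P&gt;The steps above map onto a single sweep-job specification. As a rough sketch in AML CLI v2 YAML (the script name, environment, compute target, and search-space bounds below are placeholder assumptions; see the linked hyperparameter tuning documentation for the authoritative schema):&lt;/P&gt;

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json
type: sweep
trial:
  code: src                              # placeholder folder containing train.py
  command: >-
    python train.py
    --learning-rate ${{search_space.learning_rate}}
  environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster             # placeholder compute target
sampling_algorithm: bayesian             # step 2: grid and random also available
search_space:                            # step 1: define the search space
  learning_rate:
    type: uniform
    min_value: 0.001
    max_value: 0.1
objective:                               # step 3: metric to optimize
  primary_metric: accuracy
  goal: maximize
early_termination:                       # step 4: stop low performers early
  type: bandit
  slack_factor: 0.1
  evaluation_interval: 2
limits:                                  # step 5: budget for the sweep
  max_total_trials: 20
  max_concurrent_trials: 4
```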
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Principal author:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Shep Sheppard&amp;nbsp;&lt;/SPAN&gt;| Senior Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Other contributors:&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="p1"&gt;Yoav Dobrin Principal Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;LI class="p1"&gt;Jones Jebaraj | Senior Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;LI class="p1"&gt;Olga Molocenco-Ciureanu | Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 29 Apr 2024 20:44:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/03-azure-machine-learning-and-oss-model-fine-tuning/m-p/4089261#M211</guid>
      <dc:creator>shepsheppard</dc:creator>
      <dc:date>2024-04-29T20:44:22Z</dc:date>
    </item>
    <item>
      <title>Azure Orphan Resources Grafana Dashboard</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/azure-orphan-resources-grafana-dashboard/m-p/4120303#M210</link>
      <description>&lt;P&gt;In cloud computing, it is crucial to follow best practices when building a reliable, high-performing, and secure environment. However, it is equally important to implement a strategy aimed at reducing the total cost of ownership. In this context, this Grafana dashboard offers a centralized view of Azure orphan resources that can be safely removed. By identifying and removing these unnecessary resources, you can effectively decrease the overall cost associated with maintaining their Azure subscriptions and increase the operational efficiency. You can find the Grafana dashboard under this &lt;A href="https://github.com/Azure-Samples/azure-orphan-resources-grafana-dashboard" target="_self"&gt;GitHub repository&lt;/A&gt;.&lt;/P&gt;
&lt;DIV id="tinyMceEditorpaolosalvatori_9" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt; &lt;/P&gt;
&lt;P&gt; &lt;/P&gt;
&lt;P&gt;This dashboard is influenced by the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/dolevshor/azure-orphan-resources/tree/main" target="_blank" rel="noopener"&gt;Azure Orphaned Resources 2.0&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;project developed by my colleague Dolev Shor. It incorporates and integrates some of the queries he designed for his Azure workbook, which can be created and utilized within the Azure Portal. You can refer to the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/visualize/workbooks-overview" target="_blank" rel="nofollow noopener"&gt;Azure workbook documentation&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to learn more about creating and utilizing workbooks in the Azure Portal.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Prerequisites&lt;/H2&gt;
&lt;A id="user-content-prerequisites" class="anchor" href="https://github.com/Azure-Samples/azure-orphan-resources-grafana-dashboard#prerequisites" target="_blank" rel="noopener" aria-label="Permalink: Prerequisites"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;You can host the Grafana dashboard in Azure Managed Grafana, your own Grafana installation in an AKS cluster, or any Kubernetes cluster with access to the public internet.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Implementation&lt;/H2&gt;
&lt;A id="user-content-implementation" class="anchor" href="https://github.com/Azure-Samples/azure-orphan-resources-grafana-dashboard#implementation" target="_blank" rel="noopener" aria-label="Permalink: Implementation"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;The dashboard performs a series of queries using the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/" target="_blank" rel="nofollow noopener"&gt;Kusto Query Language&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/governance/resource-graph/overview" target="_blank" rel="nofollow noopener"&gt;Azure Resource Graph&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to identify unused, orphaned resources that can be safely removed from your Azure subscriptions without impacting the operability of your cloud-hosted workloads. Azure Resource Graph is an Azure service designed to extend Azure Resource Manager by providing efficient and performant resource exploration, with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment.&lt;/P&gt;
&lt;P&gt;For more information about Azure Resource Graph, refer to the following links:&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/governance/resource-graph/overview" target="_blank" rel="nofollow noopener"&gt;Azure Resource Graph Overview&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/governance/resource-graph/how-to/get-resource-changes" target="_blank" rel="nofollow noopener"&gt;Query Resource Changes&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
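&lt;P&gt;To give a flavor of the kind of Resource Graph query the dashboard runs, here is a simplified, hypothetical KQL example for unattached managed disks (the exclusion logic in the real dashboard queries is more elaborate):&lt;/P&gt;

```kusto
// Managed disks that no VM references are candidates for removal
resources
| where type =~ "microsoft.compute/disks"
| where isempty(managedBy)          // not attached to any VM
| project name, resourceGroup, subscriptionId, location, diskSizeGB = properties.diskSizeGB
```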
&lt;P&gt;Here is the list of the resources currently supported by the dashboard:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL dir="auto"&gt;
&lt;LI&gt;App Service Plans&lt;/LI&gt;
&lt;LI&gt;App Service Environments&lt;/LI&gt;
&lt;LI&gt;Availability Sets&lt;/LI&gt;
&lt;LI&gt;Managed Disks&lt;/LI&gt;
&lt;LI&gt;Load Balancers&lt;/LI&gt;
&lt;LI&gt;Route Tables&lt;/LI&gt;
&lt;LI&gt;Application Gateways&lt;/LI&gt;
&lt;LI&gt;Application Gateway WAF Policies&lt;/LI&gt;
&lt;LI&gt;Front Door WAF Policies&lt;/LI&gt;
&lt;LI&gt;Traffic Manager Profiles&lt;/LI&gt;
&lt;LI&gt;Virtual Networks&lt;/LI&gt;
&lt;LI&gt;Subnets&lt;/LI&gt;
&lt;LI&gt;Network Interfaces&lt;/LI&gt;
&lt;LI&gt;Virtual Network Gateways&lt;/LI&gt;
&lt;LI&gt;Network Security Groups&lt;/LI&gt;
&lt;LI&gt;NAT Gateways&lt;/LI&gt;
&lt;LI&gt;Public IP Addresses&lt;/LI&gt;
&lt;LI&gt;Public IP Prefixes&lt;/LI&gt;
&lt;LI&gt;IP Groups&lt;/LI&gt;
&lt;LI&gt;Private DNS Zones&lt;/LI&gt;
&lt;LI&gt;Private Endpoints&lt;/LI&gt;
&lt;LI&gt;Private Link Services&lt;/LI&gt;
&lt;LI&gt;SQL Elastic Pools&lt;/LI&gt;
&lt;LI&gt;Resource Groups&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please note that most of the resources mentioned above come with an associated cost, although some, such as Availability Sets, Route Tables, Subnets, IP Groups, and Resource Groups, are available free of charge.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Importing the dashboard into Azure Managed Grafana&lt;/H2&gt;
&lt;A id="user-content-importing-the-dashboard-into-azure-managed-grafana" class="anchor" href="https://github.com/Azure-Samples/azure-orphan-resources-grafana-dashboard#importing-the-dashboard-into-azure-managed-grafana" target="_blank" rel="noopener" aria-label="Permalink: Importing the dashboard into Azure Managed Grafana"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;To import the dashboard into&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/managed-grafana/overview" target="_blank" rel="nofollow noopener"&gt;Azure Managed Grafana&lt;/A&gt;, follow these steps:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL dir="auto"&gt;
&lt;LI&gt;Go to the Azure Portal and navigate to your Azure Managed Grafana resource.&lt;/LI&gt;
&lt;LI&gt;Click&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Identity&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;under&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Settings&lt;/CODE&gt;.&lt;/LI&gt;
&lt;LI&gt;Ensure that the system-assigned managed identity is enabled.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;DIV id="tinyMceEditorpaolosalvatori_12" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;Click on the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure role assignments&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;button.&lt;/LI&gt;
&lt;LI&gt;Assign the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/roles-permissions-security#monitoring-reader" target="_blank" rel="nofollow noopener"&gt;Monitoring Reader&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;role to the Grafana managed identity, scoped to your Azure subscription or Management Group.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;DIV id="tinyMceEditorpaolosalvatori_13" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;Click on the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Endpoint&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;URL on the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Overview&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;page of your Azure Managed Grafana resource.&lt;/LI&gt;
&lt;LI&gt;In the Grafana dashboard, go to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Connections&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and ensure that you have an&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Monitor&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;datasource. If not, create one and select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Managed Identity&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;as the authentication mechanism.&lt;/LI&gt;
&lt;LI&gt;Click on the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Load subscriptions&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;button to test the data source.&lt;/LI&gt;
&lt;LI&gt;Go to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Dashboards&lt;/CODE&gt;, click on&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;New&lt;/CODE&gt;, and then select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Import&lt;/CODE&gt;.&lt;/LI&gt;
&lt;LI&gt;Upload the dashboard JSON file or copy and paste the JSON code into the textbox, then click the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Load&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;button.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;DIV id="tinyMceEditorpaolosalvatori_14" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;Choose a category for the dashboard and click the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Import&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;button.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/azure-orphan-resources-grafana-dashboard/blob/main/images/managed-grafana-import-dashboard-02.png" target="_blank" rel="noopener"&gt;Upload Dashboard to Azure Managed Grafana&lt;/A&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;DIV class="markdown-heading" dir="auto"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="heading-element" dir="auto" tabindex="-1"&gt;Importing the Dashboard into a Bring Your Own (BYO) Grafana Installation&lt;/H2&gt;
&lt;A id="user-content-importing-the-dashboard-into-a-bring-your-own-byo-grafana-installation" class="anchor" href="https://github.com/Azure-Samples/azure-orphan-resources-grafana-dashboard#importing-the-dashboard-into-a-bring-your-own-byo-grafana-installation" target="_blank" rel="noopener" aria-label="Permalink: Importing the Dashboard into a Bring Your Own (BYO) Grafana Installation"&gt;&lt;/A&gt;&lt;/DIV&gt;
&lt;P&gt;Before importing the dashboard into your own Grafana installation, you need to create a service principal under your Microsoft Azure AD account and assign the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/roles-permissions-security#monitoring-reader" target="_blank" rel="nofollow noopener"&gt;Monitoring Reader&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;role to it. Once done, follow these steps:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL dir="auto"&gt;
&lt;LI&gt;In the Grafana dashboard, go to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Connections&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and ensure that you have an&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Monitor&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;datasource. If not, create one and specify the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;tenant id&lt;/CODE&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;client id&lt;/CODE&gt;, and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;client secret&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;of your service principal.
&lt;DIV id="tinyMceEditorpaolosalvatori_15" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;Click on the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Load subscriptions&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;button to test the data source.&lt;/LI&gt;
&lt;LI&gt;Go to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Dashboards&lt;/CODE&gt;, click on&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;New&lt;/CODE&gt;, and then select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Import&lt;/CODE&gt;.&lt;/LI&gt;
&lt;LI&gt;Upload the dashboard JSON file or copy and paste the JSON code into the textbox, then click the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Load&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;button.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;DIV id="tinyMceEditorpaolosalvatori_16" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;img /&gt;&lt;/LI&gt;
&lt;LI&gt;Choose a category for the dashboard and click the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Import&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;button.&lt;/LI&gt;
&lt;/OL&gt;</description>
      <pubDate>Mon, 22 Apr 2024 16:50:09 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/azure-orphan-resources-grafana-dashboard/m-p/4120303#M210</guid>
      <dc:creator>paolosalvatori</dc:creator>
      <dc:date>2024-04-22T16:50:09Z</dc:date>
    </item>
    <item>
      <title>02 Model and capability evaluation (pre-fab, OSS, fine-tuning, bespoke training)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/02-model-and-capability-evaluation-pre-fab-oss-fine-tuning/m-p/4089268#M209</link>
      <description>&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Azure Machine Learning&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;When you start using Azure Machine Learning (AML), you have many options to choose from. One option is Azure Machine Learning Designer, which is easy to use and doesn't require coding. Another option is Azure Machine Learning Automated Machine Learning, which uses Python. You can also use the AML Studio, which has a graphical user interface. If you prefer using open-source models like ScikitLearn or Tensorflow, you can easily integrate them into AML. AML also has MLFlow, which helps you track and monitor your models. If you want to deploy a custom model, AML provides tools and resources to make it easier. Azure Machine Learning gives you lots of choices to find the best approach for your needs and preferences.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Azure Machine Learning Designer&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Let's start with Azure Machine Learning designer, a useful tool for data science. It has a simple interface where you can easily drag and drop datasets from different sources like Azure Blob Storage, Azure Data Lake Storage, Azure SQL, or local files. You can also preview and visualize the data with just one click. It offers various built-in modules to preprocess data and do feature engineering. You can build and train machine learning models using advanced algorithms for computer vision, text analytics, recommendations, and anomaly detection. You have the option to use pre-built models or customize them with Python and R code. You can execute machine learning pipelines interactively, cross-validate models, and visualize performance. Troubleshooting and debugging are made easier with graphs, log previews, and outputs. Deploying models for real-time and batch inferencing is streamlined, and models and assets are securely stored in a central registry for tracking and lineage. With Azure Machine Learning designer, you have endless possibilities to achieve data-driven success.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;AML Designer Model Selection&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;When it comes to data science, one common question is, "Which machine learning algorithm should I use?" The answer depends on two important factors. First, you need to understand what you want to achieve with your data. This means identifying the business question you want to answer by analyzing historical data. Second, you need to evaluate the specific requirements of your data science scenario. This includes considering accuracy, training time, linearity, number of parameters, and number of features that your solution can handle. By considering these factors, you can make a well-informed decision about the best machine learning algorithm for your situation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="TextRun Highlight SCXW173275351 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW173275351 BCX8" data-ccp-parastyle="header"&gt;To help you figure out what you want to do with your data, you can use the Azure Machine Learning Algorithm Cheat Sheet. &lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW173275351 BCX8" data-ccp-parastyle="header"&gt;It's&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW173275351 BCX8" data-ccp-parastyle="header"&gt; a useful resource that helps you find the right machine learning model for your predictive analytics solution. The Azure Machine Learning Designer offers a wide range of algorithms, such as Multiclass Decision Forest, Recommendation systems, Neural Network Regression, Multiclass Neural Network, and K-Means Clustering. Each algorithm is designed to solve specific machine learning problems. You can find a comprehensive list of these algorithms and detailed documentation on their functionality and parameter optimization in the Machine Learning Designer algorithm and &lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW173275351 BCX8" data-ccp-parastyle="header"&gt;component&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW173275351 BCX8" data-ccp-parastyle="header"&gt; reference. 
&lt;/SPAN&gt; &lt;/SPAN&gt;&lt;A class="Hyperlink SCXW173275351 BCX8" href="https://learn.microsoft.com/en-us/azure/machine-learning/media/algorithm-cheat-sheet/machine-learning-algorithm-cheat-sheet.png?view=azureml-api-1#lightbox" target="_blank" rel="noreferrer noopener"&gt;&lt;SPAN class="TextRun Underlined SCXW173275351 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW173275351 BCX8" data-ccp-charstyle="Hyperlink"&gt;https://learn.microsoft.com/en-us/azure/machine-learning/media/algorithm-cheat-sheet/machine-learning-algorithm-cheat-sheet.png?view=azureml-api-1#lightbox&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN class="EOP SCXW173275351 BCX8" data-ccp-props="{&amp;quot;469777462&amp;quot;:[4680,9360],&amp;quot;469777927&amp;quot;:[0,0],&amp;quot;469777928&amp;quot;:[3,4]}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="font-weight: 400;" data-tablestyle="MsoNormalTable" data-tablelook="1184" aria-rowcount="20"&gt;
&lt;TBODY&gt;
&lt;TR aria-rowindex="1"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Algorithm&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Accuracy&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Training time&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Linearity&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Parameters&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Notes&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="2"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Classification family&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="3"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/two-class-logistic-regression?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Two-Class logistic regression&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Good&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Fast&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Yes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;4&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="4"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/two-class-decision-forest?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Two-class decision forest&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Excellent&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Moderate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;5&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Shows slower scoring times. Suggest not working with One-vs-All Multiclass.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="5"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/two-class-boosted-decision-tree?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Two-class boosted decision tree&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Excellent&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Moderate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;6&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Large memory footprint&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="6"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/two-class-neural-network?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Two-class neural network&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Good&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Moderate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;8&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="7"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/two-class-averaged-perceptron?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Two-class averaged perceptron&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Good&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Moderate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Yes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;4&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="8"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/two-class-support-vector-machine?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Two-class support vector machine&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Good&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Fast&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Yes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;5&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Good for large feature sets&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="9"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/multiclass-logistic-regression?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Multiclass logistic regression&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Good&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Fast&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Yes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;4&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="10"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/multiclass-decision-forest?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Multiclass decision forest&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Excellent&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Moderate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;5&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Shows slower scoring times&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="11"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/multiclass-boosted-decision-tree?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Multiclass boosted decision tree&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Excellent&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Moderate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;6&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Tends to improve accuracy with some small risk of less coverage&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="12"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/multiclass-neural-network?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Multiclass neural network&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Good&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Moderate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;8&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="13"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/one-vs-all-multiclass?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;One-vs-all multiclass&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;-&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;-&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;-&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;-&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;See properties of the two-class method selected&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="14"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Regression family&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="15"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/linear-regression?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Linear regression&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Good&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Fast&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Yes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;4&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="16"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/decision-forest-regression?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Decision forest regression&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Excellent&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Moderate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;5&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="17"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/boosted-decision-tree-regression?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Boosted decision tree regression&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Excellent&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Moderate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;6&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Large memory footprint&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="18"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/neural-network-regression?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Neural network regression&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Good&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Moderate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;8&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="19"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Clustering family&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR aria-rowindex="20"&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/k-means-clustering?WT.mc_id=docs-article-lazzeri&amp;amp;view=azureml-api-1" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;K-means clustering&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Excellent&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Moderate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Yes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;8&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;A clustering algorithm&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Azure Machine Learning Designer Evaluation&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;In this section, we will provide an overview of the metrics available for evaluating different types of models in the Evaluate Model framework. This includes classification models, regression models, and clustering models. By understanding the specific metrics for each model type, we can better assess their performance. Whether you need to evaluate the accuracy of a regression model, the precision and recall of a classification model, or the clustering quality of a clustering model, this section will help you effectively analyze and evaluate your model results.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Evaluation of Classification Models&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;When evaluating binary classification models, a set of crucial metrics are reported to assess their performance accurately. Accuracy, the first metric, gauges the effectiveness of a classification model by measuring the proportion of true results in relation to the total number of cases examined. Precision, on the other hand, quantifies the ratio of true positive results to all positive results, providing insights into the model's ability to correctly identify positive instances. Recall, the third metric, calculates the fraction of relevant instances that are accurately retrieved by the model. The F1 score, a weighted average of precision and recall, offers a comprehensive evaluation of the model's performance, with a perfect score of 1 indicating optimal accuracy. Additionally, the area under the curve (AUC) is measured by plotting true positives against false positives. This metric is particularly valuable as it allows for the comparison of models across different types, providing a single numerical value to assess performance. Notably, AUC is classification-threshold-invariant, meaning it assesses the predictive quality of the model regardless of the chosen classification threshold. By considering these metrics, one can gain a comprehensive understanding of the binary classification models under evaluation and make informed decisions based on their performance characteristics.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Evaluation of Regression Models&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;When assessing regression models, the metrics returned are specifically designed to estimate the amount of error present. A well-fitted model is characterized by a minimal difference between observed and predicted values. However, examining the residuals, which represent the difference between each predicted point and its corresponding actual value, provides valuable insights into potential bias within the model.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;The evaluation of regression models entails the consideration of several key metrics. The mean absolute error (MAE) measures the proximity of predictions to the actual outcomes, with a lower score indicating better accuracy. The root mean squared error (RMSE) condenses the overall error into a single value, disregarding the distinction between over-prediction and under-prediction by squaring the differences. Relative absolute error (RAE) is determined by dividing the mean difference between expected and actual values by the arithmetic mean. Similarly, the relative squared error (RSE) normalizes the total squared error of predicted values by dividing it by the total squared error of actual values.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Furthermore, the coefficient of determination, commonly referred to as R2, serves as an indicator of the model's predictive power, ranging from 0 to 1. A value of 0 signifies a random model that explains nothing, while a value of 1 signifies a perfect fit. However, caution must be exercised when interpreting R2 values, as low values can be entirely normal and high values can raise suspicions. 
By thoroughly considering these metrics, one can effectively evaluate the performance of regression models and make informed decisions based on their error estimation and predictive capability.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
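&lt;P&gt;A minimal NumPy sketch of these formulas on hypothetical sample values, including the mean-only naive model used to normalize RAE and RSE (the numbers are illustrative; Evaluate Model computes all of this for you):&lt;/P&gt;

```python
# Sketch: regression error metrics computed directly from their definitions.
import numpy as np

actual = np.array([3.0, 5.0, 2.5, 7.0])      # observed values (illustrative)
predicted = np.array([2.5, 5.0, 3.0, 8.0])   # model predictions (illustrative)

mae = np.mean(np.abs(actual - predicted))              # mean absolute error
rmse = np.sqrt(np.mean((actual - predicted) ** 2))     # root mean squared error
# RAE and RSE normalize against a naive model that always predicts the mean.
rae = np.sum(np.abs(actual - predicted)) / np.sum(np.abs(actual - actual.mean()))
rse = np.sum((actual - predicted) ** 2) / np.sum((actual - actual.mean()) ** 2)
r2 = 1.0 - rse                                         # coefficient of determination

print(f"MAE={mae:.3f} RMSE={rmse:.3f} RAE={rae:.3f} RSE={rse:.3f} R2={r2:.3f}")
```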
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Evaluation of Clustering Models&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;When it comes to clustering models, they exhibit notable distinctions from classification and regression models, which is why Evaluate Model provides a distinct set of statistics tailored specifically for clustering models.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;The statistics furnished for clustering models offer valuable insights into various aspects, including the allocation of data points to each cluster, the degree of separation between clusters, and the compactness of data points within each cluster.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;These statistics are computed by averaging over the entire dataset and are accompanied by additional rows that present cluster-specific statistics.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;The evaluation of clustering models entails the consideration of the following metrics:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;The "Average Distance to Other Center" column displays the average proximity of each point within a cluster to the centroids of all other clusters. This metric provides an indication of the overall closeness between points from one cluster and those from other clusters.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;The "Average Distance to Cluster Center" column quantifies the average distance between all points within a cluster and the centroid of that specific cluster. This metric serves as a measure of the compactness of points within each cluster.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;The "Number of Points" column showcases the count of data points assigned to each cluster, along with the total number of data points across all clusters. If the assigned data points are fewer than the total available data points, it signifies that certain points could not be allocated to any cluster.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;The "Maximal Distance to Cluster Center" column represents the maximum distance between each point and the centroid of its corresponding cluster. A higher value indicates a more widely dispersed cluster. It is advisable to review this statistic in conjunction with the "Average Distance to Cluster Center" to assess the spread of the cluster.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Additionally, at the conclusion of each section of results, a consolidated evaluation score called the "Combined Evaluation" score is provided. This score presents the average performance of the clusters created within the specific model.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;By analyzing these statistics, one can gain valuable insights into the effectiveness of clustering models, facilitating the assessment of cluster characteristics, inter-cluster separations, and the overall quality of the clustering process.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
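&lt;P&gt;The per-cluster statistics above can be reproduced for a K-means model with a few lines of scikit-learn and NumPy. This sketch uses synthetic, well-separated data; all names and data are illustrative, not the Evaluate Model implementation:&lt;/P&gt;

```python
# Sketch: number of points, average/maximal distance to the cluster center,
# and average distance to the other center, for a two-cluster K-means fit.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two well-separated blobs of 50 points each.
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Distance from every point to every centroid: shape (n_points, n_clusters).
dists = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=2)

for c in range(2):
    in_c = km.labels_ == c
    to_own = dists[in_c, c]        # distances to the assigned centroid
    to_other = dists[in_c, 1 - c]  # distances to the other centroid
    print(f"cluster {c}: points={in_c.sum()}, "
          f"avg dist to center={to_own.mean():.3f}, "
          f"max dist to center={to_own.max():.3f}, "
          f"avg dist to other center={to_other.mean():.3f}")
```

&lt;P&gt;A compact, well-separated cluster shows a small average distance to its own center relative to the average distance to the other centers, which is exactly the comparison the table of statistics is meant to support.&lt;/P&gt;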
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Azure Machine Learning AutoML&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Automated machine learning, also known as automated ML or AutoML, revolutionizes the development of machine learning models by automating the laborious and iterative tasks involved. This powerful capability empowers data scientists, analysts, and developers to construct ML models with remarkable scalability, efficiency, and productivity, all without compromising model quality. The implementation of automated ML within Azure Machine Learning is the result of a groundbreaking innovation stemming from Microsoft Research. With this cutting-edge technology, the process of building ML models is streamlined, enabling professionals to focus on higher-level tasks and leverage the full potential of machine learning to drive impactful outcomes.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;In Azure Machine Learning, the training process incorporates the creation of multiple pipelines in parallel, where each pipeline explores different algorithms and parameters. These iterations involve pairing ML algorithms with feature selections, resulting in models that produce training scores. The model's fitness to the data is determined by the score of the desired metric, with higher scores indicating better performance. The training process continues until the experiment's defined exit criteria are met.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;To conduct automated ML training experiments using Azure Machine Learning, the following steps can be followed:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Identify the specific ML problem that needs to be addressed, such as classification, forecasting, regression, computer vision, or NLP.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Choose between a code-first experience or a no-code studio web experience. For users who prefer a code-first approach, the Azure Machine Learning SDKv2 or the Azure Machine Learning CLIv2 can be utilized. A helpful starting point is the tutorial on training an object detection model with AutoML and Python. Users who prefer a limited/no-code experience can leverage the web interface available in Azure Machine Learning studio at &lt;/SPAN&gt;&lt;A href="https://ml.azure.com/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://ml.azure.com&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;. A tutorial on creating a classification model with automated ML in Azure Machine Learning is available to get started.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Specify the source of the labeled training data, which can be imported into Azure Machine Learning in various ways to suit your requirements.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Configure the parameters for automated machine learning, including the number of iterations for testing different models, hyperparameter settings, advanced preprocessing/featurization techniques, and the metrics to consider when determining the best model.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Submit the training job to initiate the automated ML process.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Review the results to gain insights into the performance and effectiveness of the trained models, allowing you to make informed decisions based on the experiment's outcomes.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;By following these steps, you can effectively design and execute automated ML training experiments using Azure Machine Learning, empowering you to address a wide range of ML challenges with ease and efficiency.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
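&lt;P&gt;The training loop described above can be sketched in miniature with scikit-learn: try several candidate algorithm/parameter pipelines, score each against the primary metric, and keep the best. This is a conceptual illustration only; the actual service runs many more trials, in parallel, on Azure compute via the SDK, CLI, or studio UI:&lt;/P&gt;

```python
# Conceptual sketch of the automated ML search: multiple candidate pipelines,
# one primary metric, best model wins. (Illustrative, not the Azure service.)
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)

candidates = {
    "logreg_C=1": LogisticRegression(C=1.0, max_iter=1000),
    "logreg_C=0.1": LogisticRegression(C=0.1, max_iter=1000),
    "forest_100": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Primary metric: mean cross-validated AUC; higher is better.
scores = {name: cross_val_score(est, X, y, cv=5, scoring="roc_auc").mean()
          for name, est in candidates.items()}
best = max(scores, key=scores.get)
print("best pipeline:", best, "AUC:", round(scores[best], 3))
```

&lt;P&gt;The "exit criteria" from step 4 correspond here to simply exhausting the candidate list; the real service also supports time and score thresholds for stopping the search.&lt;/P&gt;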
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;AutoML Classification&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Classification is a fundamental aspect of supervised learning, where models are trained on existing data and utilize that knowledge to make predictions on new data. Azure Machine Learning offers specialized featurizations designed specifically for classification tasks, such as deep neural network text featurizers tailored to enhance the accuracy of classification models. Exploring the available featurization options can provide valuable insights into optimizing the performance of classification algorithms. Additionally, AutoML in Azure Machine Learning supports a wide range of algorithms for classification tasks, offering flexibility and versatility in model development. Classification models have a wide array of applications, including fraud detection, handwriting recognition, and object detection, among others. To gain a practical understanding of classification and automated machine learning, you can refer to a Python notebook that provides an illustrative example and further exploration of these concepts. By delving into the world of classification and automated machine learning, you can unlock powerful techniques to make accurate predictions and drive impactful outcomes in diverse domains.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing.ipynb" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;AutoML Regression&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Regression tasks, similar to classification, are an essential component of supervised learning. Azure Machine Learning provides specialized featurization techniques tailored specifically for regression problems, offering a comprehensive set of options to enhance the performance of regression models. Familiarizing yourself with these featurization options can provide valuable insights into optimizing regression algorithms. Additionally, AutoML in Azure Machine Learning supports a diverse range of algorithms for regression tasks, ensuring flexibility and adaptability in model development. Unlike classification, where predicted values are categorical, regression models aim to predict numerical output values based on independent predictors. The primary objective of regression is to establish the relationship among these independent variables by estimating how one variable influences the others. For instance, a regression model can predict automobile prices based on features such as gas mileage, safety ratings, and more. By leveraging regression techniques in Azure Machine Learning, you can uncover valuable insights and make accurate predictions in various domains, enhancing decision-making processes and driving impactful outcomes.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-regression-task-hardware-performance/automl-regression-task-hardware-performance.ipynb" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-regression-task-hardware-performance/automl-regression-task-hardware-performance.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;AutoML Time-Series Forecasting&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Forecasting plays a crucial role in the operations of any business, whether it involves predicting revenue, inventory levels, sales, or customer demand. By harnessing the power of automated ML, businesses can leverage a combination of techniques and approaches to obtain high-quality, recommended time-series forecasts. The list of supported algorithms for automated ML can be found here, providing a diverse range of options to suit specific forecasting requirements.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;In the context of automated time-series experiments, a multivariate regression framework is employed. Historical time-series values are transformed into additional dimensions for the regressor, alongside other predictors. This approach offers a significant advantage over classical time series methods as it naturally incorporates multiple contextual variables and their interrelationships during the training process. Automated ML constructs a single model, often with internal branching, to accommodate all items and prediction horizons within the dataset. This approach allows for a more robust estimation of model parameters, enabling better generalization to unseen series.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Furthermore, advanced forecasting configurations encompass various features, such as holiday detection and featurization. Time-series and deep neural network (DNN) learners, including Auto-ARIMA, Prophet, and ForecastTCN, provide diverse options to cater to different forecasting needs. Grouping functionality extends support to many models, while rolling-origin cross-validation enables robust model evaluation. 
Configurable lags and rolling window aggregate features further enhance the forecasting capabilities of automated ML.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;By utilizing these advanced configurations and techniques, businesses can harness the power of automated ML to generate accurate and insightful time-series forecasts. This empowers decision-makers to make informed choices, optimize operations, and drive success in a wide range of industries.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
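&lt;P&gt;The multivariate-regression framing and rolling-origin validation described above can be sketched with pandas and scikit-learn. Column names and data here are illustrative assumptions; automated ML constructs lag and rolling-window features for you:&lt;/P&gt;

```python
# Sketch: past time-series values become lag and rolling-window features for
# a regressor, and rolling-origin splits keep validation strictly in the future.
import numpy as np
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit

ts = pd.DataFrame({"demand": np.arange(10, 20, dtype=float)})
ts["lag_1"] = ts["demand"].shift(1)                          # value one step back
ts["lag_2"] = ts["demand"].shift(2)                          # value two steps back
ts["roll_mean_3"] = ts["demand"].shift(1).rolling(3).mean()  # past-only 3-step mean
features = ts.dropna()                                       # rows with full history
print(features.head())

# Rolling-origin cross-validation: each fold trains on an expanding prefix
# and validates on observations that come strictly afterwards.
for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(features):
    assert train_idx.max() < test_idx.min()  # validation always in the future
```

&lt;P&gt;Shifting before taking the rolling mean is what keeps every feature "past-only", so the regressor never sees the value it is asked to predict.&lt;/P&gt;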
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced.ipynb" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;AutoML Computer Vision&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;The support for computer vision tasks in Azure Machine Learning offers a seamless and efficient approach to generating models trained on image data, catering to various scenarios such as image classification and object detection.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;By leveraging this capability, users can effortlessly integrate with the data labeling feature in Azure Machine Learning. This integration enables the utilization of labeled data to train and generate accurate image models. Additionally, the performance of these models can be optimized by specifying the desired model algorithm and fine-tuning the hyperparameters to achieve the best possible results.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Once the model generation process is complete, users have the flexibility to download the resulting model or deploy it as a web service within Azure Machine Learning. This allows for easy access and utilization of the generated models in practical applications.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;To ensure scalability and operational efficiency, Azure Machine Learning provides MLOps and ML Pipelines capabilities. These powerful features enable users to operationalize their computer vision models at scale, streamlining the deployment and management processes.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;The authoring of AutoML models for vision tasks is facilitated through the Azure Machine Learning Python SDK, providing a user-friendly and intuitive development experience. 
Furthermore, the Azure Machine Learning studio UI allows easy access to experimentation jobs, models, and outputs, providing a comprehensive and accessible interface for managing and analyzing computer vision projects.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;These tasks include multi-class image classification, multi-label image classification, object detection, and instance segmentation.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;In multi-class image classification, the goal is to classify an image into a single label from a predefined set of classes. For example, an image can be classified as a 'cat,' 'dog,' or 'duck' based on its content.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multiclass-task-fridge-items" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multiclass-task-fridge-items&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;On the other hand, multi-label image classification involves assigning multiple labels to an image from a given set of labels. This means an image can be labeled as both a 'cat' and a 'dog' simultaneously, based on its characteristics.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multilabel-task-fridge-items" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multilabel-task-fridge-items&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;For object detection tasks, the objective is to identify and locate objects within an image by drawing bounding boxes around them. For instance, an algorithm can be used to detect and locate all instances of 'dogs' and 'cats' within an image, outlining each object with a bounding box.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Lastly, instance segmentation tasks involve identifying objects within an image at the pixel level. This means drawing a polygon around each object present in the image, providing a more precise delineation of their boundaries.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-instance-segmentation-task-fridge-items" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-instance-segmentation-task-fridge-items&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;By harnessing the support for computer vision tasks in Azure Machine Learning, users can unlock the potential of image data and build powerful models for image classification and object detection. This empowers businesses to leverage visual information for enhanced decision-making, automation, and transformative experiences across various domains.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models?view=azureml-api-2&amp;amp;tabs=cli" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://learn.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models?view=azureml-api-2&amp;amp;tabs=cli&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;AutoML Natural Language Processing&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;The support for natural language processing (NLP) tasks in automated ML within Azure Machine Learning offers a seamless and efficient approach to generating models trained on text data. This capability caters to various scenarios, including text classification and named entity recognition.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;By leveraging automated ML, users can easily author and train NLP models using the Azure Machine Learning Python SDK. This user-friendly development experience enables the creation of powerful NLP models for a wide range of applications.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;The resulting experimentation jobs, models, and outputs can be conveniently accessed and managed through the Azure Machine Learning studio UI. This intuitive interface provides a comprehensive overview of NLP projects, allowing users to analyze and optimize their models effectively.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;The NLP capability within Azure Machine Learning encompasses several key features. It supports end-to-end deep neural network training with the latest pre-trained BERT models, ensuring state-of-the-art performance in NLP tasks. Additionally, seamless integration with Azure Machine Learning data labeling simplifies the process of using labeled data for generating NLP models. The NLP capability also offers multi-lingual support, with the ability to process text in 104 different languages. 
Finally, distributed training with Horovod enables efficient and scalable NLP model training.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;By leveraging the NLP support in Azure Machine Learning, businesses can unlock the power of text data and build sophisticated models for tasks such as text classification and named entity recognition. This empowers organizations to extract valuable insights from textual information, automate processes, and make informed decisions in a wide range of industries and domains.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-nlp-models?view=azureml-api-2&amp;amp;tabs=cli" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://learn.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-nlp-models?view=azureml-api-2&amp;amp;tabs=cli&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Open Source Models&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Once you have selected a model that suits your needs, Azure Machine Learning offers a suite of tools to enhance your model development and deployment process. This flexibility allows you to leverage popular frameworks such as &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-scikit-learn?view=azureml-api-2" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;SciKitLearn&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;, &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-tensorflow?view=azureml-api-2" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;TensorFlow&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;, &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-pytorch?view=azureml-api-2" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;PyTorch&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;, and &lt;/SPAN&gt;&lt;A href="https://xgboost.readthedocs.io/en/stable/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;XGBoost&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt; to create and train your own custom models.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Whether you are starting from scratch and training a machine learning model using scikit-learn, or you already have an existing model that you want to bring into the cloud, Azure Machine Learning provides the infrastructure to scale out your training jobs using elastic cloud compute resources. 
This ensures that you can efficiently handle large-scale training tasks and take full advantage of the cloud's scalability.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Furthermore, Azure Machine Learning enables you to build, deploy, version, and monitor production-grade models seamlessly. With robust tools and functionalities, you can ensure the smooth transition from model development to deployment, and effectively manage and monitor your models in a production environment.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;By leveraging Azure Machine Learning's support for custom models and its comprehensive set of tools, you can accelerate your model development process, improve scalability, and seamlessly deploy and manage your models in production. This empowers data scientists and developers to deliver cutting-edge machine learning solutions and drive impactful outcomes in various industries and domains.&amp;nbsp; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-scikit-learn?view=azureml-api-2" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-scikit-learn?view=azureml-api-2&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/sqlshep/PredictiveMaintenance/tree/main/XGBoost" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/sqlshep/PredictiveMaintenance/tree/main/XGBoost&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/sqlshep/PredictiveMaintenance/tree/main/LSTM" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/sqlshep/PredictiveMaintenance/tree/main/LSTM&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Open Source and MLFlow&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Managing the complete lifecycle of machine learning models can be a complex task. However, with MLflow, an open-source framework, this process becomes much more streamlined. MLflow offers a comprehensive solution for efficiently managing models across various platforms, ensuring a consistent set of tools regardless of where your experiments are running.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;One of the standout features of MLflow is its ability to train and serve models on different platforms. Whether you're conducting experiments on your local computer, a remote compute target, a virtual machine, or an Azure Machine Learning compute instance, MLflow allows you to utilize the same set of tools. This flexibility enables you to focus on your tasks without worrying about the underlying platform.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;For users of Azure Machine Learning workspaces, MLflow compatibility is a game-changer. It means that you can leverage Azure Machine Learning workspaces just as you would an MLflow server. This compatibility offers several advantages:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;No need for hosting MLflow server instances: With Azure Machine Learning workspaces, the hassle of setting up and managing MLflow server instances is eliminated. The workspace effortlessly communicates in the MLflow API language, making the entire process seamless.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Azure Machine Learning workspaces as tracking servers: Regardless of whether your MLflow code runs on Azure Machine Learning or not, you can configure MLflow to point to your Azure Machine Learning workspace for tracking purposes. This allows you to take advantage of Azure Machine Learning's robust tracking capabilities effortlessly.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Seamless integration with Azure Machine Learning: Thanks to MLflow compatibility, you can seamlessly run any training routine that utilizes MLflow on Azure Machine Learning without any modifications. This ensures a smooth integration of MLflow into your existing workflows on the Azure Machine Learning platform.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/concept-mlflow?view=azureml-api-2" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://learn.microsoft.com/en-us/azure/machine-learning/concept-mlflow?view=azureml-api-2&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Sample Notebooks using MLFlow&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Training and tracking an XGBoost classifier with MLflow &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;A href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Hyper-parameter optimization using HyperOpt and nested runs in MLflow &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;A href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_nested_runs.ipynb" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_nested_runs.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Logging models with MLflow&lt;/SPAN&gt;&lt;/STRONG&gt; &lt;A href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/logging_and_customizing_models.ipynb" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/logging_and_customizing_models.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Manage runs and experiments with MLFlow &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;A href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/runs-management/run_history.ipynb" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/runs-management/run_history.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Principal author:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Shep Sheppard&amp;nbsp;&lt;/SPAN&gt;| Senior Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Other contributors:&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="p1"&gt;Yoav Dobrin Principal Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;LI class="p1"&gt;Jones Jebaraj | Senior Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;LI class="p1"&gt;Olga Molocenco-Ciureanu | Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 16 Apr 2024 14:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/02-model-and-capability-evaluation-pre-fab-oss-fine-tuning/m-p/4089268#M209</guid>
      <dc:creator>shepsheppard</dc:creator>
      <dc:date>2024-04-16T14:00:00Z</dc:date>
    </item>
    <item>
      <title>01 Getting Started with Data in Azure Machine Learning</title>
      <link>https://techcommunity.microsoft.com/t5/azure-partners/01-getting-started-with-data-in-azure-machine-learning/m-p/4089233#M208</link>
      <description>&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;The Azure Machine Learning Datastore&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Azure Machine Learning (AML) has a concept of a &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-datastore" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;datastore&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; that allows you to easily reference an existing storage account via API &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;that offers a wide range of capabilities for interacting with various storage types, such as Blob, Files, and ADLS. Notably, this API is designed to facilitate the effortless discovery of valuable datastores within team operations, thereby enhancing operational efficiency.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;One salient feature of this API is its implementation of a secure approach to connection information management. Users can leverage credential-based access, whether through a service principle, SAS (shared access signatures), or key, to ensure the confidentiality and integrity of their data. By employing this methodology, the need to embed sensitive connection details within scripts is eliminated, mitigating potential security risks.&amp;nbsp; Datastores become very useful when you start to setup automation using AML Pipelines or start your journey into MLOps (Machine Learning Ops).&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Datastores give you the ability to access Azure Blob storage, ADLS Gen 2, Azure Files, and Microsoft Fabric OneLake. The following repo will get you started on creating your first AML Datastore.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Github Repo&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/datastores/datastore.ipynb" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/datastores/datastore.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Azure Machine Learning “Connections”&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;In certain scenarios, data may not be housed within Azure Blob Storage or OneLake, but rather in S3. In such cases, Azure Machine Learning (AML) offers a viable solution by enabling the creation of connections to data residing in Snowflake, Azure SQL DB, or even AWS S3. &lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;When data is stored in Snowflake, Azure SQL DB, or AWS S3, AML presents the option to create a connection, seamlessly bridging the gap between external data sources and AML's analytical capabilities. By establishing a connection, users can effortlessly access and analyze data from these disparate sources, without the burden of intricate configuration or manual intervention.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;AML prioritizes data security and ensures that credentials are securely stored. The Connection feature in AML securely stores credentials within the Workspace Key Vault. Once credentials are stored, direct interaction with the Key Vault becomes unnecessary, as AML handles authentication and authorization seamlessly in the background. This approach ensures the confidentiality and integrity of your credentials, bolstering the overall security posture of your data.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Github repository,&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/connections/connections.ipynb" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/connections/connections.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Azure Machine Learning Data Asset&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;An Azure Machine Learning &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-create-data-assets" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;data asset&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt; can be likened to web browser bookmarks or favorites. Rather than having to recall lengthy storage paths (URIs) that direct you to your frequently accessed data, you have the option to create a data asset and conveniently access it using a friendly name.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;When a data asset is created, it not only establishes a reference to the data source location but also retains a copy of its metadata. Since the data remains in its original location, there are no additional storage costs incurred, and the integrity of the data source is not compromised. Data assets can be created from various sources such as Azure Machine Learning datastores, Azure Storage, public URLs, or local files.&amp;nbsp; Azure Data Assets can be created using the Azure CLI, Python SDK or in the AML Studio.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Azure Machine Learning MLTable&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;With Azure Machine Learning, you can utilize a &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-mltable?view=azureml-api-2" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Table type (mltable)&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt; to create a blueprint that specifies the process of loading data files into memory as either a Pandas or Spark data frame.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;An MLTable file is a YAML-based file that serves as a blueprint for data loading. It allows you to define various aspects of the data loading process. Within the MLTable file, you have the flexibility to specify the storage location(s) of the data, which can be local, in the cloud, or on a public http(s) server. You can utilize globbing patterns over cloud storage to specify sets of filenames using wildcard characters (*). This provides a convenient way to handle multiple files within a specified location.&amp;nbsp; The MLTable file also allows you to define read transformations, such as specifying the file format type (delimited text, Parquet, Delta, json), delimiters, headers, and more. This ensures that the data is read correctly based on its format.&amp;nbsp; Lastly, you have the ability to define subsets of data to load. This includes filtering rows, keeping or dropping specific columns, and taking random samples. These options provide flexibility in loading only the necessary data for your specific needs.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;GitHub Repo&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/tree/main/sdk/python/using-mltable" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/tree/main/sdk/python/using-mltable&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Reading from a Delta Table&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Delta tables are a core component of the open-source Delta Lake data framework. Typically employed in data lakes, they facilitate data ingestion through streaming or large batch processes. Delta Lake is an open-source storage layer that improves the reliability of data lakes by adding a transactional layer atop cloud storage systems such as AWS S3, Azure Storage, and GCS. This enables ACID (atomicity, consistency, isolation, durability) transactions, data versioning, and rollback, so both batch and streaming data can be handled in a unified manner. Delta tables, built on this storage layer, offer a table abstraction that simplifies working with large volumes of structured data through SQL and the DataFrame API.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Azure Machine Learning MLTable also supports reading data in the Delta format and converting it to a Pandas DataFrame. GitHub repo:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mltable/delta-lake-example/delta-lake-example.ipynb" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mltable/delta-lake-example/delta-lake-example.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
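&lt;P&gt;As a sketch, an MLTable file pointing at a Delta table might look like the following. The storage URI is hypothetical, and the optional timestamp uses Delta time travel to read the table as of a given point in time; see the linked notebook for the equivalent steps from the Python SDK, ending in a Pandas DataFrame.&lt;/P&gt;

```yaml
type: mltable

# Root folder of the Delta table (URI is hypothetical)
paths:
  - folder: abfss://container@myaccount.dfs.core.windows.net/delta/sales

transformations:
  - read_delta_lake:
      # Optional: read the table state as of this timestamp (time travel)
      timestamp_as_of: '2024-01-01T00:00:00Z'
```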
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Azure Machine Learning – Managed Feature Store&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Managed feature store in Azure Machine Learning provides a centralized repository that enables data scientists and machine learning professionals to independently develop, productionize, and share features with other business units in your organization. Features serve as the input data for your model. Given a feature set specification, the system handles serving, securing, and monitoring the features, freeing you from the overhead of setting up and managing the underlying feature engineering pipelines.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The feature store lets you search for and reuse features created by your team, avoiding redundant work and delivering consistent predictions. New derived features, created with transformations, can be shared across multiple workspaces to address feature engineering requirements in an agile, dynamic way. The system operates and manages the feature engineering pipelines required for transformation and materialization, freeing your team from these operational concerns.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/announcing-managed-feature-store-in-azure-machine-learning/ba-p/3823043" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/announcing-managed-feature-store-in-azure-machine-learning/ba-p/3823043&lt;/SPAN&gt;&lt;/A&gt; &lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;I&gt;&lt;SPAN data-contrast="auto"&gt;Conclusion: This blog has covered several mechanisms for accessing data across a variety of sources. Each method offers trade-offs, such as convenience, security, and cost, that should be weighed against the requirements of your situation. In each case, a GitHub resource is provided as a practical example to help you learn about data management in the cloud.&lt;/SPAN&gt;&lt;/I&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Principal author:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Shep Sheppard&amp;nbsp;&lt;/SPAN&gt;| Senior Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Other contributors:&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="p1"&gt;Yoav Dobrin Principal Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;LI class="p1"&gt;Jones Jebaraj | Senior Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;LI class="p1"&gt;Olga Molocenco-Ciureanu | Customer Engineer, FastTrack for ISV and Startups&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 09 Apr 2024 14:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-partners/01-getting-started-with-data-in-azure-machine-learning/m-p/4089233#M208</guid>
      <dc:creator>shepsheppard</dc:creator>
      <dc:date>2024-04-09T14:00:00Z</dc:date>
    </item>
  </channel>
</rss>

