azure
Azure API Management: Your Auth Gateway For MCP Servers
The Model Context Protocol (MCP) is quickly becoming the standard for integrating Tools with Agents, and Azure API Management is at the forefront, ready to support this open-source protocol. You may have already encountered discussions about MCP, so let's clarify some key concepts:

Model Context Protocol (MCP) is a standardized way (a protocol) for AI models to interact with external tools, either reading data or performing actions, and to enrich context for any language model.
AI Agents/Assistants are autonomous LLM-powered applications with the ability to use tools to connect to external services required to accomplish tasks on behalf of users.
Tools are components made available to Agents, allowing them to interact with external systems, perform computation, and take actions to achieve specific goals.
Azure API Management: As a platform-as-a-service, API Management supports the complete API lifecycle, enabling organizations to create, publish, secure, and analyze APIs with built-in governance, security, analytics, and scalability.

New Cool Kid in Town - MCP
AI Agents are becoming widely adopted due to enhanced Large Language Model (LLM) capabilities. However, even the most advanced models face limitations due to their isolation from external data. Each new data source requires custom implementations to extract, prepare, and make data accessible for the model(s) - a lot of heavy lifting. Anthropic developed an open-source standard, the Model Context Protocol (MCP), to connect your agents to external data sources, whether local data sources (databases or computer files) or remote services (systems available over the internet, e.g. through APIs).

MCP Hosts: LLM applications, such as chat apps or AI assistants in your IDE (like GitHub Copilot in VS Code), that need to access external capabilities
MCP Clients: Protocol clients inside the host application that maintain 1:1 connections with servers
MCP Servers: Lightweight programs that each expose specific capabilities and provide context, tools, and prompts to clients
MCP Protocol: The transport layer in the middle

At its core, MCP follows a client-server architecture in which a host application can connect to multiple servers. Whenever your MCP host or client needs a tool, it connects to the MCP server, and the MCP server in turn connects to, for example, a database or an API. MCP hosts and servers communicate with each other through the MCP protocol. You can create your own custom MCP servers that connect to your own or your organization's data sources. For a quick start, please visit our GitHub repository to learn how to build a remote MCP server using Azure Functions without authentication: https://aka.ms/mcp-remote

Remote vs. Local MCP Servers
The MCP standard supports two modes of operation:
Remote MCP servers: MCP clients connect to MCP servers over the Internet, establishing a connection using HTTP and Server-Sent Events (SSE), and authorizing the MCP client access to resources on the user's account using OAuth.
Local MCP servers: MCP clients connect to MCP servers on the same machine, using stdio as the local transport method.
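To make the client-server flow concrete, here is a minimal sketch of an MCP client using the official Python SDK (the mcp package). It launches a local MCP server over stdio and lists the tools it exposes; the server command ("python my_server.py") is a placeholder for any MCP server you run locally.

```python
# Minimal MCP client sketch (assumes `pip install mcp`).
# The server command below is a placeholder for any local MCP server.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    server = StdioServerParameters(command="python", args=["my_server.py"])

    # Launch the server as a subprocess and speak MCP over stdio.
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Ask the server which tools it exposes.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


asyncio.run(main())
```

A remote MCP server would be reached the same way, except the transport is HTTP/SSE and the request carries the OAuth token obtained for the user, which is exactly where the gateway pattern below comes in.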
Azure API Management as the AI Auth Gateway
Now that we have learned that MCP servers can connect to remote services through an API, the question arises: how can we expose our remote MCP servers in a secure and scalable way? This is where Azure API Management comes in, as a way to securely and safely expose tools as MCP servers.

Azure API Management provides:
Security: AI agents often need to access sensitive data. API Management, acting as a remote MCP proxy, safeguards organizational data through authentication and authorization.
Scalability: As the number of LLM interactions and external tool integrations grows, API Management ensures the system can handle the load.

Security remains a critical piece of building MCP servers, as agents need to securely connect to protected endpoints (tools) to perform certain actions or read protected data. When building remote MCP servers, you need a way to allow users to log in (authentication) and to let them grant the MCP client access to resources on their account (authorization).

MCP - Current Authorization Challenges
State: 4/10/2025
Recent changes in MCP authorization have sparked significant debate within the community.
Key challenges with the authorization changes: The MCP server is now treated as both a resource server AND an authorization server. This dual role has fundamental implications for MCP server developers and runtime operations.
Our solution: To address these challenges, we recommend using Azure API Management as your authorization gateway for remote MCP servers.
For an enterprise-ready solution, please check out our azd up sample repo to learn how to build a remote MCP server using Azure API Management as your authentication gateway: https://aka.ms/mcp-remote-apim-auth

The Authorization Flow
The workflow involves three core components: the MCP client, the APIM gateway, and the MCP server, with Microsoft Entra managing authentication (AuthN) and authorization (AuthZ). Using the OAuth protocol, the client starts by calling the APIM gateway, which redirects the user to Entra for login and consent. Once authenticated, Entra provides an access token to the gateway, which then exchanges a code with the client to generate an MCP server token. This token allows the client to communicate securely with the server via the gateway, ensuring user validation and scope verification. Finally, the MCP server establishes a session key for ongoing communication through a dedicated message endpoint.
Diagram source: https://aka.ms/mcp-remote-apim-auth-diagram

Conclusion
Azure API Management (APIM) is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). In this blog, we've emphasized the simplicity of connecting AI agents to various data sources through MCP, streamlining previously complex implementations. Given the critical role of secure access to platforms and services for AI agents, APIM offers robust solutions for managing OAuth tokens and ensuring secure access to protected endpoints, making it an invaluable asset for enterprises despite the challenges of authentication.

API Management: An Enterprise Solution for Securing MCP Servers
Azure API Management is designed to help you securely expose your remote MCP servers. MCP servers are still very new, and as the technology evolves, API Management provides an enterprise-ready solution that will evolve with the latest technology. Stay tuned for further feature announcements soon!

Acknowledgments
This post and work was made possible thanks to the hard work and dedication of our incredible team.
Special thanks to Pranami Jhawar, Julia Kasper, Julia Muiruri, Annaji Sharma Ganti Jack Pa, Chaoyi Yuan and Alex Vieira for their invaluable contributions.

Additional Resources
MCP Client Server integration with APIM as AI gateway
Blog Post: https://aka.ms/remote-mcp-apim-auth-blog
Sequence Diagram: https://aka.ms/mcp-remote-apim-auth-diagram
APIM lab: https://aka.ms/ai-gateway-lab-mcp-client-auth
Python: https://aka.ms/mcp-remote-apim-auth
.NET: https://aka.ms/mcp-remote-apim-auth-dotnet
On-Behalf-Of Authorization: https://aka.ms/mcp-obo-sample
3rd Party APIs - Backend Auth via Credential Manager:
Blog Post: https://aka.ms/remote-mcp-apim-lab-blog
APIM lab: https://aka.ms/ai-gateway-lab-mcp
YouTube Video: https://aka.ms/ai-gateway-lab-demo

Americas & EMEA Fabric Engineering Connection
Excited to announce the upcoming Fabric Engineering Connection call for Microsoft partners! Join us on Wednesday, December 10, from 8-9 am PT (Americas & EMEA) and Thursday, December 11, from 1-2 am UTC (APAC) for an insightful session featuring Erin Stellato and Mark Brown.

This week's focus:
GitHub Copilot in SSMS with Fabric SQL
User-Data Function Integration with Cosmos DB in Fabric

Don't miss the opportunity to learn directly from the experts and discover the latest innovations in Microsoft Fabric. To participate, make sure you're a member of the Fabric Partner Community Teams Channel. Join here: https://lnkd.in/g_PRdfjt

Let's connect, learn, and shape the future of data together!
Unlocking AI-Driven Data Access: Azure Database for MySQL Support via the Azure MCP Server
Step into a new era of data-driven intelligence with the fusion of the Azure MCP Server and Azure Database for MySQL, where your MySQL data is no longer just stored, but instantly conversational, intelligent, and action-ready. By harnessing the open-standard Model Context Protocol (MCP), your AI agents can now query, analyze, and automate in natural language, accessing tables, surfacing insights, and acting on your MySQL-driven business logic as easily as chatting with a colleague. It's like giving your data a voice and your applications a brain, all within Azure's trusted cloud platform.

We are excited to announce that we have added support for Azure Database for MySQL in the Azure MCP Server. The Azure MCP Server leverages the Model Context Protocol (MCP) to allow AI agents to seamlessly interact with various Azure services and perform context-aware operations such as querying databases and managing cloud resources. Building on this foundation, the Azure MCP Server now offers a set of tools that AI agents and apps can invoke to interact with Azure Database for MySQL, enabling them to list and query databases, retrieve schema details of tables, and access server configurations and parameters. These capabilities are delivered through the same standardized interface used for other Azure services, making it easier to adopt the MCP standard for leveraging AI to work with your business data and operations across the Azure ecosystem.

Before we delve into these new tools and explore how to get started with them, let's take a moment to refresh our understanding of MCP and the Azure MCP Server: what they are, how they work, and why they matter.

MCP architecture and key components
The Model Context Protocol (MCP) is an emerging open protocol designed to integrate AI models with external data sources and services in a scalable, standardized, and secure manner. MCP dictates a client-server architecture with four key components: the MCP Host, the MCP Client, the MCP Server, and the external data sources, services, and APIs that provide the data context required to enhance AI models. To explain briefly, an MCP Host (an AI app or agent) includes an MCP Client component that connects to one or more MCP Servers. These servers are lightweight programs that securely interface with external data sources, services, and APIs and expose them to MCP clients in the form of standardized capabilities called tools, resources, and prompts.
Learn more: MCP Documentation

What is Azure MCP Server?
Azure offers a multitude of cloud services that help developers build robust applications and AI solutions to address business needs. The Azure MCP Server aims to expose these powerful services for agentic usage, allowing AI systems to perform operations that are context-aware of your Azure resources and the business data within them, while ensuring adherence to the Model Context Protocol. It supports a wide range of Azure services and tools, including Azure AI Search, Azure Cosmos DB, Azure Storage, Azure Monitor, Azure CLI, and Developer CLI extensions. This means that you can empower AI agents, apps, and tools to:
Explore your Azure resources, such as listing and retrieving details on your Azure subscriptions, resource groups, services, databases, and tables.
Search, query, and analyze your data and logs.
Execute Azure CLI and Azure Developer CLI commands directly, and more!
Learn more: Azure MCP Server GitHub Repository

Introducing new Azure MCP Server tools to interact with Azure Database for MySQL
The Azure MCP Server now includes the following tools that allow AI agents to interact with Azure Database for MySQL and the valuable business data residing in these servers, in accordance with the MCP standard:

azmcp_mysql_server_list - List all MySQL servers in a subscription and resource group. Example prompts: "List MySQL servers in resource group 'prod-rg'." "Show MySQL servers in region 'eastus'."
azmcp_mysql_server_config_get - Retrieve the configuration of a MySQL server. Example prompts: "What is the backup retention period for server 'my-mysql-server'?" "Show storage allocation for server 'my-mysql-server'."
azmcp_mysql_server_param_get - Retrieve a specific parameter of a MySQL server. Example prompts: "Is slow_query_log enabled on server my-mysql-server?" "Get innodb_buffer_pool_size for server my-mysql-server."
azmcp_mysql_server_param_set - Set a specific parameter of a MySQL server to a specific value. Example prompts: "Set max_connections to 500 on server my-mysql-server." "Set wait_timeout to 300 on server my-mysql-server."
azmcp_mysql_table_list - List all tables in a MySQL database. Example prompts: "List tables starting with 'tmp_' in database 'appdb'." "How many tables are in database 'analytics'?"
azmcp_mysql_table_schema_get - Get the schema of a specific table in a MySQL database. Example prompts: "Show indexes for table 'transactions' in database 'billing'." "What is the primary key for table 'users' in database 'auth'?"
azmcp_mysql_database_query - Execute a SELECT query on a MySQL database. The query must start with SELECT and cannot contain any destructive SQL operations, for security reasons. Example prompts: "How many orders were placed in the last 30 days in the salesdb.orders table?" "Show the number of new users signed up in the last week in appdb.users, grouped by day."

These interactions are secured using Microsoft Entra authentication, which enables seamless, identity-based access to Azure Database for MySQL, eliminating the need for password storage and enhancing overall security.

How are these new tools in the Azure MCP Server different from the standalone MCP Server for Azure Database for MySQL?
We have integrated the key capabilities of the Azure Database for MySQL MCP server into the Azure MCP Server, making it easier to connect your agentic apps not only to Azure Database for MySQL but also to other Azure services through one unified and secure interface.

How to get started
Installing and running the Azure MCP Server is quick and easy. Use GitHub Copilot in Visual Studio Code to gain meaningful insights from your business data in Azure Database for MySQL.

Prerequisites
Install Visual Studio Code.
Install the GitHub Copilot and GitHub Copilot Chat extensions.
An Azure Database for MySQL server with Microsoft Entra authentication enabled.
Ensure that the MCP Server is installed on a system with network connectivity and credentials to connect to Azure Database for MySQL.

Installation and Testing
Please use this guide for installation: Azure MCP Server Installation Guide
Try the following prompts with your Azure Database for MySQL: Azure Database for MySQL tools for Azure MCP Server
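For apps that talk to the Azure MCP Server directly rather than through GitHub Copilot, the tools above can be invoked with any MCP client. The sketch below uses the official mcp Python SDK and starts the Azure MCP Server with npx; the start command and the tool argument names are assumptions to verify against the current Azure MCP Server documentation (inspect the schema returned by list_tools() for the authoritative parameters).

```python
# Sketch: invoke the MySQL query tool on the Azure MCP Server over stdio.
# Assumes `npx` and the `mcp` Python package are installed and that you are
# signed in with an identity that can reach the MySQL flexible server.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Start command per the Azure MCP Server docs; adjust if yours differs.
    server = StdioServerParameters(
        command="npx", args=["-y", "@azure/mcp@latest", "server", "start"]
    )

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Argument names below are illustrative placeholders; check the
            # tool schema returned by list_tools() for the exact parameters.
            result = await session.call_tool(
                "azmcp_mysql_database_query",
                {
                    "subscription": "<subscription-id>",
                    "resource-group": "prod-rg",
                    "server": "my-mysql-server",
                    "database": "salesdb",
                    "query": "SELECT COUNT(*) FROM orders "
                             "WHERE order_date >= NOW() - INTERVAL 30 DAY",
                },
            )
            print(result.content)


asyncio.run(main())
```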
Try it out and share your feedback!
Start using the Azure MCP Server with the MySQL tools today and let our cloud services become your AI agent's most powerful ally. We're counting on your feedback - every comment, suggestion, or bug report helps us build better tools together. Stay tuned: more features and capabilities are on the horizon!

Feel free to comment below or write to us with your feedback and queries at AskAzureDBforMySQL@service.microsoft.com.

I'm stuck!
Logically, I'm not sure how/if I can do this. I want to monitor for Entra ID group additions. I can get this to work for a single entry using this:

```
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName == "Add member to group"
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName == "NameOfGroup" // <-- This returns the single entry
| extend User = tostring(TargetResources[0].userPrincipalName)
| summarize ['Count of Users Added']=dcount(User), ['List of Users Added']=make_set(User) by GroupName
| sort by GroupName asc
```

However, I have a list of 20 priv groups that I need to monitor. I can do this using:

```
let PrivGroups = dynamic(['name1','name2','name3']);
```

and then call that like this:

```
blahblah
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName has_any (PrivGroups)
```

But that's a bit dirty to update - I wanted to call a watchlist. I've tried defining it with:

```
let PrivGroup = (_GetWatchlist('TestList'));
```

and tried calling it like:

```
blahblah
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName has_any ('PrivGroup')
```

I've tried dropping the let and attempted to look up the watchlist directly:

```
| where GroupName has_any (_GetWatchlist('TestList'))
```

The query runs but doesn't return any results (obvs I know the result exists). How do I look up that extracted value against a watchlist? Any ideas or pointers why I'm wrong would be appreciated! Many thanks

Applying DevOps Principles on Lean Infrastructure. Lessons From Scaling to 102K Users.
Hi Azure Community,

I'm a Microsoft Certified DevOps Engineer, and I want to share an unusual journey. I have been applying DevOps principles on traditional VPS infrastructure to scale to 102,000 users with 99.2% uptime. Why am I posting this in an Azure community? Because I'm planning a migration to Azure in 2026, and I want to understand: what mistakes am I already making that will bite me during migration?

THE CURRENT SETUP
Platform: Social commerce (West Africa)
Users: 102,000 active
Monthly events: 2 million
Uptime: 99.2%
Infrastructure: Single VPS
Stack: PHP/Laravel, MySQL, Redis
Yes - one VPS. No cloud. No Kubernetes. No microservices.

WHY I HAVEN'T USED AZURE YET
Honest answer: budget constraints in an emerging-market startup ecosystem. At our current scale, fully managed Azure services would significantly increase monthly burn before product-market expansion. The funding we raised needs to last through growth milestones. The trade: I manually optimize what Azure would auto-scale. I debug what Application Insights would catch. I do by hand what Azure Functions would automate.

DEVOPS PRACTICES THAT KEPT US RUNNING
Even on single-server infrastructure, core DevOps principles still apply:

CI/CD Pipeline (GitHub Actions)
• 3-5 deployments weekly
• Zero-downtime deploys
• Automated rollback on health check failures
• Feature flags for gradual rollouts

Monitoring & Observability
• Custom monitoring (would love Application Insights)
• Real-time alerting
• Performance tracking and slow query detection
• Resource usage monitoring

Automation
• Automated backups
• Automated database optimization
• Automated image compression
• Automated security updates

Infrastructure as Code
• Configs in Git
• Deployment scripts
• Environment variables
• Documented procedures

Testing & Quality
• Automated test suite
• Pre-deployment health checks
• Staging environment
• Post-deployment verification

KEY OPTIMIZATIONS
Async Job Processing
• Upload endpoint: 8 seconds → 340ms
• 4x capacity increase

Database Optimization
• Feed loading: 6.4 seconds → 280ms
• Strategic caching
• Batch processing

Image Compression
• 3-8MB → 180KB (94% reduction)
• Critical for mobile users

Caching Strategy
• Redis for hot data
• Query result caching
• Smart invalidation

Progressive Enhancement
• Server-rendered pages
• 2-3 second loads on 4G

WHAT I'M WORRIED ABOUT FOR AZURE MIGRATION
This is where I need your help:

Architecture Decisions
• App Service vs Functions + managed services?
• MySQL vs Azure SQL?
• When does the cost/benefit flip for managed services?

Cost Management
• How do startups manage Azure costs during growth?
• Reserved instances vs pay-as-you-go?
• Which Azure services are worth the premium?

Migration Strategy
• Lift-and-shift first, or re-architect immediately?
• Zero-downtime migration with 102K active users?
• Validation approach before full cutover?

Monitoring & DevOps
• Application Insights - worth it from day one?
• Azure DevOps vs GitHub Actions for Azure deployments?
• Operational burden reduction with managed services?

Development Workflow
• Local development against Azure services?
• Cost-effective staging environments?
• Testing Azure features without constant bills?
MY PLANNED MIGRATION PATH
Phase 1: Hybrid (Q1 2026)
• Azure CDN for static assets
• Azure Blob Storage for images
• Application Insights trial
• Keep compute on VPS

Phase 2: Compute Migration (Q2 2026)
• App Service for API
• Azure Database for MySQL
• Azure Cache for Redis
• VPS for background jobs

Phase 3: Full Azure (Q3 2026)
• Azure Functions for processing
• Full managed services
• Retire VPS

QUESTIONS FOR THIS COMMUNITY
Question 1: Am I making migration harder by waiting? Should I have started with Azure at higher cost to avoid technical debt?
Question 2: What will break when I migrate? What works on a VPS but fails in the cloud? What assumptions won't hold?
Question 3: How do I validate before cutting over? Parallel infrastructure? Gradual traffic shift? Safe patterns?
Question 4: Cost optimization from day one? What to optimize immediately vs later? Common cost mistakes?
Question 5: DevOps practices that transfer? What stays the same? What needs rethinking for cloud-native?

THE BIGGER QUESTION
Have you migrated from self-hosted to Azure? What surprised you? I know my setup isn't best practice by Azure standards. But it's working, and I've learned optimization, monitoring, and DevOps fundamentals in practice. Will those lessons transfer? Or am I building habits that the cloud will expose as problematic? Looking forward to insights from folks who've made similar migrations.

---
About the Author: Microsoft Certified DevOps Engineer and Azure Developer. CTO at a social commerce platform scaling in West Africa. Preparing for a phased Azure migration in 2026.

P.S. I got the Azure certifications to prepare for this migration. Now I need real-world wisdom from people who've actually done it!
Deploying Windows Servers in an Azure Availability Set

This guide demonstrates deploying Windows Server virtual machines into an Azure Availability Set for Windows Server IIS workloads. An availability set logically groups virtual machines across fault domains and update domains within a single Azure data center. Fault domains provide physical hardware isolation (separate racks, power, and network switches), while update domains ensure Azure staggers platform maintenance, rebooting only one domain at a time with 30-minute recovery windows. VMs must be assigned to availability sets during creation; you cannot add existing VMs later.

Creating the First VM
1. Navigate to Azure Portal > Virtual Machines > Create
2. Create a new resource group (e.g., "Zava IIS")
3. Name the VM (e.g., "Zava IIS 1") and select a region (e.g., East US 2)
4. Under Availability Options, select "Availability set" > Create New
5. Name the availability set and accept the defaults (2 fault domains, 2 update domains)
6. Configure a local admin account (avoid using "Administrator")
7. Select "No inbound ports" for security
8. Enable Azure Hybrid Benefit if you have existing Windows Server licenses
9. Verify Premium SSD is selected under Disks (required for the 99.95% SLA)
10. Note the virtual network name for subsequent VMs
11. Under Management, disable automatic shutdown and hotpatch
12. Under Monitoring, disable boot diagnostics
13. Review and create the VM

Creating the Second VM
1. Return to Virtual Machines > Create
2. Use the same resource group
3. Name the second VM (e.g., "Zava IIS 2")
4. Select the existing availability set created in step 4 above
5. Match all settings from the first VM (admin account, no inbound ports, hybrid benefit, Premium SSD)
6. Ensure the VM connects to the same virtual network as the first VM
7. Disable auto shutdown, hotpatch, and boot diagnostics
8. Review and create

Ensure that both VMs are configured with Premium SSD so the availability set qualifies for the 99.95% availability SLA.
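If you prefer scripting these steps over the portal, the following is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-compute). Resource names are placeholders; creating the two VMs themselves would use the same client, referencing the availability set ID in each VM's configuration.

```python
# Sketch: create an availability set with the Azure SDK for Python.
# Assumes `pip install azure-identity azure-mgmt-compute` and that the
# signed-in identity can create resources in the target resource group.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"        # placeholder
resource_group = "Zava-IIS"                  # placeholder
location = "eastus2"

compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

availability_set = compute.availability_sets.create_or_update(
    resource_group,
    "zava-iis-avset",
    {
        "location": location,
        "platform_fault_domain_count": 2,    # matches the portal defaults
        "platform_update_domain_count": 2,
        "sku": {"name": "Aligned"},          # required when the VMs use managed disks
    },
)
print(availability_set.id)  # reference this ID when creating each VM
```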
In a future post, we'll cover how to configure Azure Application Gateway to load balance traffic across the computers in an availability set, as well as how to protect against DDoS and OWASP Top 10 attacks.

Learn more about Azure Availability Sets

End-to-End Observability for Azure Databricks: From Infrastructure to Internal Application Logging
Authors: Amudha Palani, Peter Lo (PeterLo), Rafia Aqil (Rafia_Aqil)

Observability in Azure Databricks is the ability to continuously monitor and troubleshoot the health, performance, and usage of data workloads by capturing metrics, logs, and traces. In a structured observability approach, we consider two broad categories of logging: Internal Databricks Logging (within the Databricks environment) and External Databricks Logging (leveraging Azure services). Each plays a distinct role in providing insights. By combining internal and external observability mechanisms, organizations can achieve a comprehensive view: internal logs enable detailed analysis of Spark jobs and data quality, while external logs ensure global visibility, auditing, and integration with broader monitoring dashboards and alerting systems.

The article is organized into two main sections:
Infrastructure Logging for Azure Databricks (external observability)
Internal Databricks Logging (in-platform observability)

Considerations
Addressing key questions upfront ensures your observability strategy is tailored to your organization's unique workloads, risk profile, and operational needs. By proactively evaluating what to monitor, where to store logs, and who needs access, you can avoid blind spots, streamline incident response, and align monitoring investments with business priorities.

What types of workloads are running in Databricks?
Why it matters: Different workloads (e.g., batch ETL, streaming pipelines, ML training, interactive notebooks) have distinct performance profiles and failure modes.
Business impact: Understanding workload types helps prioritize monitoring for mission-critical processes like real-time fraud detection or daily financial reporting.

What failure scenarios need to be monitored?
Examples: Job failures, cluster provisioning errors, quota limits, authentication issues.
Business impact: Early detection of failures reduces downtime, improves SLA adherence, and prevents data pipeline disruptions that could affect reporting or customer-facing analytics.

Where should logs be stored and analyzed?
Options: Centralized Log Analytics workspace, Azure Storage for archival, Event Hub for streaming analysis.
Business impact: Centralized logging enables unified dashboards, cross-team visibility, and faster incident response across data engineering, operations, and compliance teams.

Who needs access to logs and alerts?
Stakeholders: Data engineers, platform administrators, security analysts, compliance officers.
Business impact: Role-based access ensures that the right teams can act on insights while maintaining data governance and privacy controls.

Infrastructure Logging for Azure Databricks

Approach 1: Diagnostic Settings for Azure Databricks
Diagnostic settings in Azure Monitor allow you to capture detailed logs and metrics from your Azure Databricks workspace, supporting operational monitoring, troubleshooting, and compliance. By configuring diagnostic settings at the workspace level, administrators can route Databricks logs, including cluster events, job statuses, and audit logs, to destinations such as Log Analytics, Azure Storage, or Event Hub. This enables unified analysis, alerting, and long-term retention of critical operational data.

Configuration Overview
Enable diagnostic settings on the Databricks workspace to route logs to a Log Analytics workspace. These logs can also be combined with the other logs mentioned below for full Azure Databricks observability.
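Once diagnostic logs are flowing into Log Analytics, they can be queried programmatically as well as in the portal. A minimal sketch with the azure-monitor-query library is shown below; the workspace ID is a placeholder, and the table name DatabricksClusters is an assumption based on the standard tables created for Databricks diagnostic categories, so verify which tables exist in your workspace.

```python
# Sketch: query Databricks diagnostic logs from a Log Analytics workspace.
# Assumes `pip install azure-identity azure-monitor-query`.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Recent cluster events; the table name is an assumption to verify
# against the tables actually present in your workspace.
query = """
DatabricksClusters
| where TimeGenerated > ago(1d)
| take 20
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```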
Here is a guide to Azure Databricks diagnostic settings. Log reference: Configure diagnostic log delivery.

Implement a tagging strategy: organizations can gain granular visibility into resource consumption and align spending with business priorities.
Default tags: Automatically applied by Databricks to cloud-deployed resources.
Custom tags: User-defined tags that you can add to compute resources and serverless workloads.

Use Cases
Operational Monitoring: Detect job or resource bottlenecks.
Security & Compliance: Audit user actions and enforce governance policies.
Incident Response: Correlate Databricks logs with infrastructure events for faster troubleshooting.

Best Practices
Enable only relevant log categories to optimize cost and performance.
Use role-based access control (RBAC) to secure access to logs.

Approach 2: Azure Databricks Compute Log Delivery
Compute log delivery in Azure Databricks enables you to automatically collect and archive logs from Spark driver nodes, worker nodes, and cluster events for both all-purpose and job compute resources. When you create a cluster, you can specify a log delivery location, such as DBFS, Azure Storage, or a Unity Catalog volume, where logs are delivered every five minutes and archived hourly. All logs generated up until the compute resource is terminated are guaranteed to be delivered, supporting troubleshooting, auditing, and compliance.

To configure the log delivery location:
1. On the compute page, click the Advanced toggle.
2. Click the Logging tab.
3. Select a destination type: DBFS or Volumes (Preview).
4. Enter the log path.
To store the logs, Databricks creates a subfolder in your chosen log path named after the compute's cluster_id.

Approach 3: Azure Activity Logs
Whenever you create, update, or delete Databricks resources (such as provisioning a new workspace, scaling a cluster, or modifying network settings), these actions are captured in the Activity Log. This enables teams to track who made changes, when, and what impact those changes had on the environment. Each event in the Activity Log has a particular category, as described in the following document: Azure Activity Log event schema.
For Databricks, this is especially valuable for:
Auditing resource deployments and configuration changes
Investigating failed provisioning or quota errors
Monitoring compliance with organizational policies
Responding to incidents or unauthorized actions

Use Cases
Auditing infrastructure-level changes outside the Databricks workspace.
Monitoring provisioning delays or resource availability.

Best Practices
Use Activity Logs in conjunction with other logs for full-stack visibility.
Set up alerts for critical infrastructure events.
Review logs regularly to ensure compliance and operational health.

Approach 4: Azure Monitor VM Insights
Azure Databricks cluster nodes run on Azure virtual machines (VMs), and their infrastructure-level performance can be monitored using Azure Monitor VM Insights (formerly OMS). This approach provides visibility into resource utilization across individual cluster VMs, helping identify bottlenecks that may affect Spark job performance or overall workload efficiency.

Configuration Overview: To enable VM performance monitoring, enable VM Insights on the Databricks cluster VMs.

Monitored Metrics: Once enabled, VM Insights collects CPU usage, memory consumption, disk I/O, network throughput, and process-level statistics.
These metrics help assess whether Spark workloads are constrained by infrastructure limits, such as insufficient memory or high disk latency.

Considerations
This is a standard Azure VM monitoring technique and is not specific to Databricks.
Use role-based access control (RBAC) to secure access to performance data.

Approach 5: Virtual Network Flow Logs
For Azure Databricks workspaces deployed in a custom Azure virtual network (VNet-injected mode), enabling virtual network flow logs provides deep visibility into IP traffic flowing through the virtual network. These logs help monitor and optimize resources, and for large enterprises looking to detect intrusion, flow logs can help. Review common use cases here: VNet Flow Logs Common Use Cases, and how logging works here: Key properties of virtual network flow logs. Follow these steps to set up VNet flow logs: Create a flow log.

Configuration Overview
Virtual network flow logs are a feature of Azure Network Watcher. Optionally, logs can be analyzed using Traffic Analytics for deeper insights. These logs help identify:
Unexpected or unauthorized traffic
Bandwidth usage patterns
Effectiveness of NSG rules and network segmentation

Considerations
NSG flow logging is only available for VNet-injected deployment modes.
Ensure Network Watcher is enabled in the region where the Databricks workspace is deployed.
Use Traffic Analytics to visualize trends and detect anomalies in network flows.

Approach 6: Spark Monitoring Logging & Metrics
The spark-monitoring library is a Python toolkit designed to interact with the Spark History Server REST API. Its main purpose is to help users programmatically access, analyze, and visualize Spark application metrics and job details after execution. Here's what it offers:
Application Listing: Retrieve a list of all Spark applications available on the History Server, including metadata such as application ID, name, start/end time, and status.
Job and Stage Details: Access detailed information about jobs and stages within each application, including execution times, status, and resource usage.
Task Metrics: Extract metrics for individual tasks, such as duration, input/output size, and shuffle statistics, supporting performance analysis and bottleneck identification.

Considerations
The Spark Monitoring Library must be installed; see the Git repository here.
Metrics can be exported to external observability platforms for long-term retention and alerting.

Use Cases
Automated reporting of Spark job performance and resource usage
Batch analysis of completed Spark applications
Integration of Spark metrics into external dashboards or monitoring systems
Post-execution troubleshooting and optimization

Internal Databricks Logging

Approach 7: Databricks System Tables (Unity Catalog)
Databricks system tables are a recent addition to Azure Databricks observability, offering structured, SQL-accessible insights into workspace usage, performance, and cost. These tables reside in Unity Catalog and are organized into schemas such as system.billing, system.lakeflow, and system.compute. You can enable system tables through these steps: Enable system tables - Databricks.

Overview and Capabilities
When enabled by an administrator, system tables allow users to query operational metadata directly using SQL. Examples include:
system.billing.usage: Tracks compute usage (CPU core-hours, memory) per job.
system.compute.clusters: Captures cluster lifecycle events.
system.lakeflow.job_run: Provides job execution details.
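As a quick illustration of querying these tables from a notebook, here is a hedged sketch that aggregates recent usage from system.billing.usage with PySpark. The column names (usage_date, usage_quantity, sku_name) follow the current system table schema but should be verified with DESCRIBE before you rely on them.

```python
# Sketch: summarize recent DBU consumption by SKU from system tables.
# Run in a Databricks notebook attached to a Unity Catalog-enabled cluster
# or SQL warehouse; `spark` and `display` are notebook globals, and the
# column names are assumptions to verify with DESCRIBE system.billing.usage.
recent_usage = spark.sql(
    """
    SELECT
      sku_name,
      SUM(usage_quantity) AS dbus_consumed
    FROM system.billing.usage
    WHERE usage_date >= date_sub(current_date(), 30)
    GROUP BY sku_name
    ORDER BY dbus_consumed DESC
    """
)
display(recent_usage)  # or recent_usage.show() outside a notebook
```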
Use Cases
Cost Monitoring: Aggregate usage records to identify high-cost jobs or users. Import pre-built usage dashboards into your workspaces to monitor account- and workspace-level usage: Usage dashboards, and Create and monitor budgets.
Operational Efficiency: Track job durations, cluster concurrency, and resource utilization.
In-Platform BI: Build dashboards in Databricks SQL to visualize usage trends without relying on external billing tools.

Best Practices
Schedule regular queries to track cost trends, job performance, and resource usage.
Apply role-based access control to restrict sensitive usage data.
Integrate system table insights into Databricks SQL dashboards for real-time visibility.

Approach 8: Data Quality Monitoring
Data Quality Monitoring is a native Azure Databricks feature designed to track data quality and machine learning model performance over time. It enables automated monitoring of Delta tables and ML inference outputs, helping teams detect anomalies, data drift, and reliability issues directly within the Databricks environment. Follow these steps to enable Data Quality Monitoring.

Data Quality Monitoring supports three profile types:
Time Series: Monitors time-partitioned data, computing metrics per time window.
Inference: Tracks prediction drift and anomalies in model request/response logs.
Snapshot: Performs full-table scans to compute metrics across the entire dataset.
Following the steps for enabling Lakehouse Monitoring, at step 5 you can also enable data profiling to view data profiling dashboards.

Use Cases
Data Quality Monitoring: Track null values, column distributions, and schema changes.
Model Performance Monitoring: Detect concept drift, prediction anomalies, and accuracy degradation.
Operational Reliability: Ensure consistent data pipelines and ML inference behavior.

Approach 9: Databricks SQL Dashboards and Alerts
Databricks SQL dashboards and alerts provide in-platform observability for operational monitoring, enabling teams to visualize metrics and receive notifications based on SQL query results. This approach complements infrastructure-level monitoring by focusing on application-level conditions, data correctness, and workflow health. Users can build dashboards using Databricks SQL or SQL warehouses by querying:
System tables (e.g., job runs, billing usage)
Data Quality Monitoring metric tables
Custom operational datasets
You can create alerts through these steps: Databricks SQL alerts.

Alerting Features
Databricks SQL supports alerting on query results, allowing users to define conditions that trigger notifications via email or Slack (via webhook integration). Alerts can be configured for scenarios such as:
Job failure counts exceeding thresholds
Row count drops in critical tables
Cost/workload spikes or resource usage anomalies

Considerations
Alerts are query-driven and run on a schedule; ensure queries are optimized for performance.
Dashboards and alerts are workspace-specific and require appropriate permissions.

Best Practices
Use system tables and Data Quality Monitoring metrics as data sources for dashboards.
Schedule alerts to run at appropriate intervals (e.g., hourly for job failures).
Combine internal alerts with external monitoring for full-stack coverage.

Approach 10: Custom Tags for Workspace-Level Assets
Custom tags allow organizations to classify and organize Databricks resources (clusters, jobs, pools, notebooks) for better governance, cost tracking, and observability.
Tags are key-value pairs applied at the resource level and can be propagated to Azure for billing and monitoring.

Why Use Custom Tags?
Cost Attribution: Assign tags like Environment=Prod or Project=HealthcareAnalytics to track costs in Azure Cost Management.
Governance: Enforce policies based on tags (e.g., restrict high-cost clusters to Environment=Dev).
Observability: Filter logs and metrics by tags for dashboards and alerts.

Taggable Assets
Clusters: Apply tags during cluster creation via the Databricks UI or REST API.
Jobs: Include tags in job configurations for workload-level tracking.
Instance Pools: Tag pools to manage shared compute resources.
Notebooks & Workflows: Use tags in metadata for classification and reporting.

Best Practices
Define a standard tag taxonomy (e.g., Environment, Owner, CostCenter, Compliance).
Validate tags regularly to ensure consistency across workspaces.
Use tags in Log Analytics queries for cost and performance dashboards.

Get to know the core Foundry solutions
Foundry includes specialized services for vision, language, documents, and search, plus Microsoft Foundry for orchestration and governance. Here's what each does and why it matters:

Azure Vision
With Azure Vision, you can detect common objects in images, generate captions, descriptions, and tags based on image contents, and read text in images. Example: Automate visual inspections or extract text from scanned documents.

Azure Language
Azure Language helps organizations understand and work with text at scale. It can identify key information, gauge sentiment, and create summaries from large volumes of content. It also supports building conversational experiences and question-answering tools, making it easier to deliver fast, accurate responses to customers and employees. Example: Understand customer feedback or translate text into multiple languages.

Azure Document Intelligence
With Azure Document Intelligence, you can use pre-built or custom models to extract fields from complex documents such as invoices, receipts, and forms. Example: Automate invoice processing or contract review.

Azure Search
Azure Search helps you find the right information quickly by turning your content into a searchable index. It uses AI to understand and organize data, making it easier to retrieve relevant insights. This capability is often used to connect enterprise data with generative AI, ensuring responses are accurate and grounded in trusted information. Example: Help employees retrieve policies or product details without digging through files.

Microsoft Foundry
Acts as the orchestration and governance layer for generative AI and AI agents. It provides tools for model selection, safety, observability, and lifecycle management. Example: Coordinate workflows that combine multiple AI capabilities with compliance and monitoring.

Business leaders often ask: Which Foundry tool should I use? The answer depends on your workflow. For example:
Are you trying to automate document-heavy processes like invoice handling or contract review?
Do you need to improve customer engagement with multilingual support or sentiment analysis?
Or are you looking to orchestrate generative AI across multiple processes for marketing or operations?
Connecting these needs to the right Foundry solution ensures you invest in technology that delivers measurable results.
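To make one of these capabilities concrete, here is a minimal sketch of the sentiment-analysis scenario mentioned under Azure Language, using the azure-ai-textanalytics client library; the endpoint and key are placeholders for a Language resource in your own subscription.

```python
# Sketch: gauge sentiment of customer feedback with the Azure Language service.
# Assumes `pip install azure-ai-textanalytics` and an Azure Language resource.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                               # placeholder
)

feedback = [
    "The checkout process was quick and the support team was helpful.",
    "My order arrived late and the packaging was damaged.",
]

for doc in client.analyze_sentiment(feedback):
    if not doc.is_error:
        print(doc.sentiment, doc.confidence_scores)
```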
Guide for Architecting Azure-Databricks: Design to Deployment

Authors: Chris Walk (cwalk), Dan Johnson (danjohn1234), Eduardo dos Santos (eduardomdossantos), Ted Kim (tekim), Eric Kwashie (ekwashie), Chris Haynes (Chris_Haynes), Tayo Akigbogun (takigbogun), and Rafia Aqil (Rafia_Aqil)
Peer reviewed by: Mohamed Sharaf (mohamedsharaf)

Note: This article does not cover the Serverless Workspace option, which is currently in Private Preview. We plan to update this article once Serverless Workspaces are Generally Available. Also, while Terraform is the recommended method for production deployments due to its automation and repeatability, for simplicity in this article we demonstrate deployment through the Azure portal.

DESIGN: Architecting a Secure Azure Databricks Environment

Step 1: Plan Workspace and Subscription Organization, Analytics Architecture, and Compute
Planning your Azure Databricks environment can follow various arrangements depending on your organization's structure, governance model, and workload requirements. The following guidance outlines key considerations to help you design a well-architected foundation.

1.1 Align Workspaces with Business Units
A recommended best practice is to align each Azure Databricks workspace with a specific business unit. This approach, often referred to as the "Business Unit Subscription" design pattern, offers several operational and governance advantages.
Streamlined Access Control: Each unit manages its own workspace, simplifying permissions and reducing cross-team access risks. For example, Sales can securely access only their data and notebooks.
Cost Transparency: Mapping workspaces to business units enables accurate cost attribution and supports internal chargeback models. Each workspace can be tagged to a cost center for visibility and accountability. Even within the same workspace, costs can be controlled using system tables that provide detailed usage metrics and resource consumption insights.
Challenges to keep in mind: While per-BU workspaces have high impact, be mindful of workspace sprawl. If every small team spins up its own workspace, you might end up with dozens or hundreds of workspaces, which introduces management overhead. Databricks recommends a reasonable upper limit (on Azure, roughly 20-50 workspaces per account/subscription) because managing "collaboration, access, and security across hundreds of workspaces can become extremely difficult, even with good automation" [1]. Each workspace will need governance (user provisioning, monitoring, compliance checks), so there is a balance to strike.

1.2 Workspace Alignment and Shared Metastore Strategy
As you align workspaces with business units, it's essential to understand how Unity Catalog and the metastore fit into your architecture. Unity Catalog is Databricks' unified governance layer that centralizes access control, auditing, and data lineage across workspaces. Each Unity Catalog is backed by a metastore, which acts as the central metadata repository for tables, views, volumes, and other data assets. In Azure Databricks, you can have one metastore per region, and all workspaces within that region share it. This enables consistent governance and simplifies data sharing across teams. If your organization spans multiple regions, you'll need to plan for cross-region sharing, which Unity Catalog supports through Delta Sharing. By aligning workspaces with business units and connecting them to a shared metastore, you ensure that governance policies are enforced uniformly, while still allowing each team to manage its own data assets securely and independently.
1.3 Distribute Workspaces Across Subscriptions
When scaling Azure Databricks, consider not just the number of workspaces, but also how to distribute them across Azure subscriptions. Using multiple Azure subscriptions can serve both organizational needs and technical requirements:
Environment Segmentation (Dev/Test/Prod): A common pattern is to put production workspaces in a separate Azure subscription from development or test workspaces. This provides an extra layer of isolation. Microsoft highly recommends separating workspaces into prod and dev, in separate subscriptions. This way, you can apply stricter Azure policies or network rules to the prod subscription and keep the dev subscription a bit more open for experimentation without risking prod resources.
Honor Azure Resource Limits: Azure subscriptions come with certain capacity limits, and Azure Databricks workspaces have their own limits (since it is a multi-tenant PaaS). If you put all workspaces in one subscription, or all teams in one workspace, you might hit those limits. Most enterprises naturally end up with multiple subscriptions as they grow; planning this early avoids later migration headaches. If you currently have everything in one subscription, evaluate usage and consider splitting off heavy or production workloads into a new subscription to adhere to best practices.

1.4 Consider Completing an Azure Landing Zone Assessment
When evaluating and planning your next deployment, it's essential to ensure that your current landing zone aligns with Microsoft best practices. This helps establish a robust Databricks architecture and minimizes the risk of avoidable issues. Additionally, customers who are early in their cloud journey can benefit from cloud assessments, such as an Azure Landing Zone Review and a review of the "Prepare for Cloud Adoption" documentation, to build a strong foundation.

1.5 Planning Your Azure Databricks Workspace Architecture
Your workspace architecture should reflect the operational model of your organization and support the workloads you intend to run, from exploratory notebooks to production-grade ETL pipelines. To support your planning, Microsoft provides several reference architectures that illustrate well-architected patterns for Databricks deployments. These solution ideas can serve as starting points for designing maintainable environments:
Simplified Architecture: Modern Data Platform Architecture
ETL-Intensive Workload Reference Architecture: Building ETL Intensive Architecture
End-to-End Analytics Architecture: Create a Modern Analytics Architecture

1.6 Planning for the "Right" Compute
Choosing the right compute setup in Azure Databricks is crucial for optimizing performance and controlling costs, as billing is based on Databricks Units (DBUs) using a per-second pricing model.
Classic Compute: You can fine-tune your own compute by enabling auto-termination and autoscaling, using Photon acceleration, leveraging spot instances, selecting the right VM type and node count for your workload, and choosing SSDs for performance or HDDs for archival storage. Classic compute is preferred by mature internal teams and developers who need advanced control over clusters, such as custom VM selection, tuning, and specialized configurations. A minimal cluster-definition sketch follows this section.
Serverless Compute: Alternatively, managed serverless compute simplifies operations with built-in optimizations. It removes infrastructure management and offers instant scaling without cluster warm-up, making it ideal for agility and simplicity.
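Here is that sketch: a hedged example of defining a tuned classic cluster with the Databricks SDK for Python, showing the knobs called out above (autoscaling, auto-termination, Photon, spot instances, and custom tags). The node type, runtime version, and tag values are placeholders to adapt to your workspace.

```python
# Sketch: create a tuned classic cluster with the Databricks SDK for Python.
# Assumes `pip install databricks-sdk` and workspace authentication via
# environment variables or a Databricks CLI profile.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute

w = WorkspaceClient()

cluster = w.clusters.create(
    cluster_name="etl-nightly",                      # placeholder
    spark_version="15.4.x-scala2.12",                # placeholder LTS runtime
    node_type_id="Standard_D4ds_v5",                 # placeholder VM size
    autoscale=compute.AutoScale(min_workers=2, max_workers=8),
    autotermination_minutes=30,                      # stop idle clusters
    runtime_engine=compute.RuntimeEngine.PHOTON,     # Photon acceleration
    azure_attributes=compute.AzureAttributes(
        availability=compute.AzureAvailability.SPOT_WITH_FALLBACK_AZURE,
    ),
    custom_tags={"Environment": "Dev", "CostCenter": "analytics"},
).result()

print(cluster.cluster_id)
```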
Step 2: Plan the "Right" CIDR Range (Classic Compute)
Note: You can skip this step if you plan to use serverless compute for all your resources, as CIDR range planning is not required in serverless deployments.

When planning CIDR ranges for your Azure Databricks workspace, it's important to ensure your virtual network has enough IP address capacity to support cluster scaling. Why this matters: if you choose a small VNet address space and your analytics workloads grow, you might hit a ceiling where you simply cannot launch more clusters or scale out because there are no free IPs in the subnet. The subnet sizes, and by extension the VNet CIDR, determine how many nodes you can run. Databricks recommends using a CIDR block between /16 and /24 for the VNet, and up to /26 for the two required subnets: the container subnet and the host subnet. Here is a reference Microsoft provides. If your current workspace's VNet lacks sufficient IP space for active cluster nodes, you can request a CIDR range update through your Azure Databricks account team, as noted in the Microsoft documentation.

2.1 Considerations for the CIDR Range
Workload Type & Concurrency: Consider what kinds of workloads will run (ETL pipelines, machine learning notebooks, BI dashboards, etc.) and how many jobs or clusters may need to run in parallel. High concurrency (e.g., multiple ETL jobs or many interactive clusters) means more nodes running at the same time, requiring a larger pool of IP addresses.
Data Volume (Historical vs. Incremental): Are you doing a one-time historical data load or only processing new incremental data? A large backfill of terabytes of data may require spinning up a very large cluster (hundreds of nodes) to process it in a reasonable time. Ongoing smaller loads might get by with fewer nodes. Estimate how much data needs processing.
Transformation Complexity: The complexity of data transformations or machine learning workloads matters. Heavy transformations (joins, aggregations on big data) or complex model training can benefit from more workers. If your use cases include these, you may need larger clusters (more nodes) to meet performance SLAs, which in turn demands more IP addresses available in the subnet.
Data Sources and Integration: Consider how your Databricks environment will connect to data. If you have multiple data sources or sinks (e.g., ingesting from many event hubs, databases, or IoT streams), you might design multiple dedicated clusters or workflows, potentially all active at once. Also, if you use separate job clusters per job (Databricks Jobs), multiple clusters might launch concurrently. All these scenarios increase the concurrent node count.
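As a rough worked example of sizing, the sketch below estimates the maximum concurrent cluster nodes for a given subnet prefix. It assumes the documented behavior that each cluster node consumes one IP address in the host subnet and one in the container subnet, and that Azure reserves five addresses per subnet; treat the output as a planning estimate and confirm against the sizing table in the Microsoft documentation.

```python
# Sketch: estimate how many cluster nodes a pair of Databricks subnets can hold.
# Assumptions: each node uses one IP in the host subnet and one in the
# container subnet, and Azure reserves 5 addresses in every subnet.
AZURE_RESERVED_IPS = 5


def max_cluster_nodes(subnet_prefix: int) -> int:
    """Usable addresses in one subnet of the given prefix length (e.g. 26 for /26)."""
    total_ips = 2 ** (32 - subnet_prefix)
    return total_ips - AZURE_RESERVED_IPS


for prefix in (26, 24, 22):
    print(f"/{prefix} host + container subnets -> ~{max_cluster_nodes(prefix)} nodes")

# Example output:
#   /26 host + container subnets -> ~59 nodes
#   /24 host + container subnets -> ~251 nodes
#   /22 host + container subnets -> ~1019 nodes
```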
2.2 Configuring a Dedicated Network (VNet) per Workspace with Egress Control
By default, Azure Databricks deploys its classic compute resources into a Microsoft-managed virtual network (VNet) within your Azure subscription. While this simplifies setup, it limits control over network configuration. For enhanced security and flexibility, it is recommended to use VNet injection, which allows you to deploy the compute plane into your own customer-managed VNet. This approach enables secure integration with other Azure services using service endpoints or private endpoints, supports user-defined routes for accessing on-premises data sources, allows traffic inspection via network virtual appliances or firewalls, and provides the ability to configure custom DNS and enforce egress restrictions through network security group (NSG) rules.

Within this VNet (which must reside in the same region and subscription as the Azure Databricks workspace), two subnets are required for Azure Databricks: a container subnet (referred to as the private subnet) and a host subnet (referred to as the public subnet). To implement front-end Private Link, back-end Private Link, or both, your workspace VNet needs a third subnet that will contain the private endpoint (the Private Link subnet). It is recommended to also deploy an Azure Firewall for egress control.

Step 3: Plan Network Architecture for Securing Azure Databricks

3.1 Secure Cluster Connectivity
Secure cluster connectivity, also known as No Public IP (NPIP), is a foundational security feature for Azure Databricks deployments. When enabled, it ensures that compute resources within the customer-managed virtual network (VNet) do not have public IP addresses, and no inbound ports are exposed. Instead, each cluster initiates a secure outbound connection to the Databricks control plane using port 443 (HTTPS), through a dedicated relay. This tunnel is used exclusively for administrative tasks, separate from the web application and REST API traffic, significantly reducing the attack surface. For the most secure deployment, Microsoft and Databricks strongly recommend enabling secure cluster connectivity, especially in environments with strict compliance or regulatory requirements. When secure cluster connectivity is enabled, both workspace subnets become private, as cluster nodes do not have public IP addresses.

3.2 Egress with VNet Injection (NVA)
For Databricks traffic, you will need to assign a UDR (route table) to the Databricks workspace subnets with a next hop type of Network Virtual Appliance (NVA); this could be an Azure Firewall, NAT Gateway, or another routing device. For control plane traffic, Databricks recommends using Azure service tags, which are logical groupings of IP addresses for Azure services, and these should be routed with a next hop type of Internet. This is important because Azure IP ranges can change frequently as new resources are provisioned, and manually maintaining IP lists is not practical. Using service tags ensures that your routing rules automatically stay up to date. (A route-table sketch follows section 3.3.)

3.3 Front-End Connectivity with Azure Private Link (Standard Deployment)
To further enhance security, Azure Databricks supports Private Link for front-end connections. In a standard deployment, Private Link enables users to access the Databricks web application, REST API, and JDBC/ODBC endpoints over a private VNet interface, bypassing the public internet. For organizations with no public internet access from user networks, a browser authentication private endpoint is required. This endpoint supports SSO login callbacks from Microsoft Entra ID and is shared across all workspaces in a region using the same private DNS zone. It is typically hosted in a transit VNet that bridges on-premises networks and Azure.
Note: There are two deployment types: standard and simplified. To compare these deployment types, see Choose standard or simplified deployment.
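Here is that sketch: a hedged example of creating a route table with azure-mgmt-network that sends Databricks control-plane traffic (via the AzureDatabricks service tag) straight to the Internet and everything else to a firewall. Names and the firewall IP are placeholders, and the full route set should be taken from the current Azure Databricks egress documentation rather than from this example.

```python
# Sketch: route table for a VNet-injected workspace's subnets.
# Assumes `pip install azure-identity azure-mgmt-network`; associate the
# resulting route table with the host and container subnets afterwards.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

route_table = network.route_tables.begin_create_or_update(
    "databricks-rg",                     # placeholder resource group
    "adb-egress-routes",                 # placeholder route table name
    {
        "location": "eastus2",
        "routes": [
            {
                # Control-plane traffic uses the service tag so the prefix
                # list stays current automatically.
                "name": "adb-control-plane",
                "address_prefix": "AzureDatabricks",
                "next_hop_type": "Internet",
            },
            {
                # Everything else goes through the firewall / NVA.
                "name": "default-via-firewall",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.28.2.4",  # placeholder firewall IP
            },
        ],
    },
).result()

print(route_table.id)
```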
To secure outbound traffic from serverless workloads, administrators can configure serverless egress control using network policies that restrict connections by location, FQDN, or Azure resource type. Additionally, Network Connectivity Configurations (NCCs) allow centralized management of private endpoints and firewall rules. NCCs can be attached to multiple workspaces and are essential for enabling secure access to Azure services like Data Lake Storage from serverless SQL warehouses.

DEPLOYMENT: Step-by-Step Implementation using the Azure Portal

Step 1: Create an Azure Resource Group

For each new workspace, create a dedicated Resource Group to contain the Databricks workspace resource and associated resources. Ensure that all resources (workspace, subnets, and so on) are deployed in the same Region and Resource Group to optimize data movement performance and enhance security.

Step 2: Deploy a Workspace-Specific Virtual Network (VNet)

From your Resource Group, create a Virtual Network. Under the Security section, enable Azure Firewall. Deploying an Azure Firewall is recommended for egress control, ensuring that outbound traffic from your Databricks environment is securely managed. Define address spaces for your Virtual Network (review Step 2 from the Design section). As documented, you could create a VNet with these values:

IP range: First remove the default IP range, then add IP range 10.28.0.0/23.
Create subnet public-subnet with range 10.28.0.0/25.
Create subnet private-subnet with range 10.28.0.128/25.
Create subnet private-link with range 10.28.1.0/27.

Please note: your IP values can differ depending on your IPAM and available scopes. Review + Create your Virtual Network.

Step 3: Deploy the Azure Databricks Workspace

Now that networking is in place, create the Databricks workspace. Below are the detailed steps your organization should review during workspace creation:

In the Azure Portal, search for Azure Databricks and click Create. Choose the Subscription, Resource Group, and Region, select the Premium tier, enter a "Managed Resource Group" name, and click Next. The Managed Resource Group is created after your Databricks workspace is deployed and contains infrastructure resources for the workspace (e.g. VNets, DBFS).
Required: Enable "Secure Cluster Connectivity" (No Public IP for clusters) to ensure that Databricks clusters are deployed without public IP addresses (review Section 3.1).
Required: Enable the option to deploy into your Virtual Network (VNet injection), also known as "Bring Your Own VNet" (review Section 3.2). Select the Virtual Network created in Step 2 and enter the private and public subnet names.
Enable or disable "Deploy NAT Gateway" according to your workspace requirements.
Disable "Allow Public Network Access". Select "No Azure Databricks Rules" for Required NSG Rules.
Select "Click on add to create a private endpoint"; this opens a panel for private endpoint setup. Click "Add" and enter the Private Link details created in Step 2. Also ensure that Private DNS zone integration is set to "Yes" and that a new Private DNS Zone is created, indicated by (New) privatelink.azuredatabricks.net, unless an existing DNS zone for this purpose already exists.
(Optional) Under the Encryption tab, enable Infrastructure Encryption if you have a requirement such as FIPS 140-2. It comes at a cost: the extra encryption and decryption adds overhead, and your data is already encrypted by default.
(Optional) Enable the compliance security profile if you have a regulatory requirement such as HIPAA.
(Optional) Enable automatic cluster updates (for example, the first Sunday of every month).

Review + Create the workspace and wait for it to deploy.

Step 4: Create a Private Endpoint to Support SSO for Web Browser Access

Note: This step is required when front-end Private Link is enabled and client networks cannot access the public internet.

After creating your Azure Databricks workspace, if you try to launch it without the proper Private Link configuration, you will see an error like the image below. This happens because the workspace is configured to block public network access, and the necessary private endpoints (including the browser_authentication endpoint for SSO) are not yet in place.

Create the Web-Auth Workspace
Deploy a "dummy" workspace named WEB_AUTH_DO_NOT_DELETE_<region> in the same region as your production workspace. Its purpose is to host the browser_authentication private endpoint (one is required per region). Lock the workspace (Delete lock) to prevent accidental removal.
Follow Step 2 to create a Virtual Network (VNet).
Follow Step 3 to create a VNet-injected "dummy" workspace.

Create the Browser Authentication Private Endpoint
In the Azure Portal, open the dummy Databricks workspace, go to Networking, Private endpoint connections, and click + Private endpoint.
Resource step: Target sub-resource: browser_authentication.
Virtual Network step: VNet: the Transit/Hub VNet (the central network for Private Link). Subnet: the private endpoint subnet in that VNet (not the Databricks host subnet).
DNS step: Integrate with Private DNS zone: Yes. Zone: privatelink.azuredatabricks.net. Ensure the DNS zone is linked to the Transit VNet.
After creation: A-records for *.pl-auth.azuredatabricks.net are created automatically in the DNS zone.

Workspace Connectivity Testing
If you have VPN or ExpressRoute connectivity, Bastion is not required. If you don't have private connectivity and need to test from inside the VNet, Azure Bastion is a convenient option; for the purposes of this article, we will test workspace connectivity through Bastion.

Step 5: Create a Storage Account

From your Resource Group, click Create and select Storage account. On the configuration page:
Select the preferred storage type: Azure Blob Storage or Azure Data Lake Storage Gen2.
Choose Performance and Redundancy options based on your business requirements, then click Next to proceed.
Under the Advanced tab: Enable Hierarchical namespace under Data Lake Storage Gen2. This is critical for directory and file-level operations and Access Control Lists (ACLs).
Under the Networking tab: Set Public Network Access to Disabled.
Complete the creation process and then create container(s) inside the storage account.

Step 6: Create Private Endpoints for the Workspace Storage Account

Prerequisite: You need to create two private endpoints from the VNet used for VNet injection to your workspace storage account, one for each of the following target sub-resources: dfs and blob.
Navigate to your Storage Account, go to Networking, the Private endpoint connections tab, and click + Private endpoint.
In the Create Private Endpoint wizard:
Resource tab: Select your Storage Account and set Target sub-resource to dfs for the first endpoint.
Virtual Network tab: Choose the VNet you used for VNet injection and select the appropriate subnet.
Complete the creation process. The private endpoint will be auto-approved and visible under Private endpoint connections.
Repeat the process for the second private endpoint, this time setting Target sub-resource to blob. A quick DNS check to confirm both endpoints resolve privately is sketched below.
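Once both private endpoints exist, a quick way to confirm that DNS is wired correctly is to resolve the storage FQDNs from inside the VNet (for example, from a Bastion-connected VM as discussed in Step 4) and check that they return private addresses. The sketch below uses only the Python standard library; <storage_account_name> is a placeholder for your own account name.

```python
import ipaddress
import socket

storage_account = "<storage_account_name>"  # placeholder for your workspace storage account

for suffix in ("dfs", "blob"):
    fqdn = f"{storage_account}.{suffix}.core.windows.net"
    ip = socket.gethostbyname(fqdn)  # resolves via the VNet's DNS (private DNS zone if linked)
    private = ipaddress.ip_address(ip).is_private
    print(f"{fqdn} -> {ip} ({'private endpoint' if private else 'PUBLIC - check private DNS zone links'})")
```

If either name resolves to a public IP, verify that the privatelink.dfs.core.windows.net and privatelink.blob.core.windows.net private DNS zones are linked to the VNet you used for VNet injection.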
Step 7: Link Storage and Databricks Workspace: Create an Access Connector

In your Resource Group, create an Access Connector for Azure Databricks. No additional configuration is required during creation.
Assign a role to the Access Connector: Navigate to your Storage Account, Access Control (IAM), Add role assignment. Select Role: Storage Blob Data Contributor and Assign access to: Managed identity. Under Members, click Select members, find and select your newly created Access Connector for Azure Databricks, and save the role assignment.
Copy the Resource ID: Go to the Access Connector Overview page and copy the Resource ID for later use in the Databricks configuration.

Step 8: Link Storage and Databricks Workspace: Create an External Location in Unity Catalog

In your Databricks workspace, go to Unity Catalog, External Data and click the "Create external location" button.
Configure the external location: Select ADLS as the storage type and enter the ADLS storage URL in the following format: abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/ (replace the two placeholders <container_name> and <storage_account_name>).
Provide the Access Connector: Select "Create new storage credential" in the Storage credential field, then paste the Resource ID of the Access Connector for Azure Databricks (copied in Step 7) into the Access Connector ID field.
Validate the connection: Click Submit. You should see a "Successful" message confirming the connection. You can now create catalogs and link your secure storage.

Step 9: Configure Serverless Compute Networking

If your organization plans to use serverless SQL warehouses or serverless jobs compute, you must configure serverless networking.
Add a Network Connectivity Configuration (NCC): Go to the Databricks Account Console (https://accounts.azuredatabricks.net/), navigate to Cloud resources, click Add Network Connectivity Configuration, fill in the required fields, and create a new NCC.
Associate the NCC with your workspace: In the Account Console, go to Workspaces, select your workspace, click Update Workspace, and select the NCC you just created from the Network Connectivity Configuration dropdown.
Add a private endpoint rule: Under Cloud resources, select your NCC, open Private Endpoint Rules, and click Add Private Endpoint Rule. Provide the Resource ID of your Storage Account (found on the storage account by clicking "JSON View" at the top right) and set the Azure subresource type to dfs and blob.
Approve the pending connection: Go to your Storage Account, Networking, Private endpoint connections. You will see a pending connection from Databricks. Approve it, and the connection status in your Account Console will show as ESTABLISHED.

Step 10: Test Your Workspace

Launch a small test cluster and verify the following:
It can start (which means it can talk to the control plane).
It can read from and write to your storage; a minimal notebook check is sketched below. If you are not using Unity Catalog credentials, set Spark properties to configure Azure credentials for storage access.
Check that the private DNS records have been created.
(Optional) If on-premises data is needed, try connecting to an on-premises database (using the ExpressRoute path): Connect your Azure Databricks workspace to your on-premises network - Azure Databricks | Microsoft Learn.
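For the read/write check in Step 10, a minimal notebook sketch is shown below. It assumes the notebook is attached to the test cluster, that the external location and storage credential from Step 8 govern access (so no Spark credential properties are needed), and that <container_name> and <storage_account_name> are replaced with your own values; spark, dbutils, and display are the globals Databricks provides in notebooks.

```python
# Placeholder path under the external location created in Step 8.
base_path = "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/connectivity-check"

# Write a tiny Delta table, read it back, and list the path.
df = spark.createDataFrame([(1, "ok")], ["id", "status"])
df.write.mode("overwrite").format("delta").save(base_path)
print(spark.read.format("delta").load(base_path).count())  # expect 1

# Listing the path also confirms that the dfs private endpoint and DNS records work.
display(dbutils.fs.ls(base_path))
```

If you are not using Unity Catalog credentials, you would instead authenticate by setting the Spark properties for OAuth or an account key before running the read/write test.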
Step 11: Account Console, Planning Workspace Access Controls, and Getting Started

Once your Azure Databricks workspace is deployed, it's essential to configure access controls and begin onboarding users with the right permissions. From your account console (https://accounts.azuredatabricks.net/), you can centrally manage your environment: add users and groups, enable preview features, and view or configure all your workspaces. Azure Databricks supports fine-grained access management through Unity Catalog, cluster policies, and workspace-level roles. Start by defining who needs access to what—whether it's notebooks, tables, jobs, or clusters—and apply least-privilege principles to minimize risk.

DBFS limitation: DBFS is automatically created when the Databricks workspace is created and can be found in your Managed Resource Group. The DBFS root cannot be secured in the same way as your own storage accounts (see the reference image below). If there is a business need to avoid DBFS, you can disable DBFS access by following the instructions here: Disable access to DBFS root and mounts in your existing Azure Databricks workspace.

Use Unity Catalog to manage data access across catalogs, schemas, and tables, and consider implementing cluster policies to standardize compute configurations across teams. To help your teams get started, Microsoft provides a range of tutorials and best practice guides: Best practice articles - Azure Databricks | Microsoft Learn.

Step 12: Planning Data Migration

As you prepare to move data into your Azure Databricks environment, it's important to assess your migration strategy early. This includes identifying source systems, estimating data volumes, and determining the appropriate ingestion methods—whether batch, streaming, or hybrid. For organizations with complex migration needs or legacy systems, Microsoft offers specialized support through its internal Azure Cloud Accelerated Factory program. Reach out to your Microsoft account team to explore nomination for the program, which provides hands-on guidance, tooling, and best practices to accelerate and streamline your data migration journey.

Summary

Regular maintenance and governance are as important as the initial design. Continuously review the environment and update configurations as needed to address evolving requirements and threats. For example, tag all resources (workspaces, VNets, clusters, etc.) with clear identifiers (workspace name, environment, department) to track costs and ownership effectively. Additionally, enforce least privilege across the platform: ensure that only necessary users are given admin privileges, and use cluster-level access control to restrict who can create or start clusters. By following the above steps, an organization will have an Azure Databricks architecture that is securely isolated, well-governed, and scalable.

References:
[1] 5 Best Practices for Databricks Workspaces: AzureDatabricksBestPractices/toc.md at master · Azure ... - GitHub
Deploy a workspace using the Azure Portal

Additional Links:
Quick Introduction to Databricks: what is databricks | introduction - databricks for dummies
Connect Purview with Azure Databricks: Integrating Microsoft Purview with Azure Databricks
Secure Databricks Delta Share between Workspaces: Secure Databricks Delta Share for Serverless Compute
Azure Databricks Cost Optimization Guide: Databricks Cost Optimization: A Practical Guide
Integrate Azure Databricks with Microsoft Fabric: Integrating Azure Databricks with Microsoft Fabric
Databricks Solution Accelerators for Data & AI
Azure updates

Appendix

3.5 Understanding Data Transfer (ExpressRoute vs. Public Internet)

For data transfers, your organization must decide whether to use ExpressRoute or internet egress.
There are several considerations that can help you determine your choice:

3.5.1 Connectivity Model
• ExpressRoute: Provides a private, dedicated connection between your on-premises infrastructure and Microsoft Azure. It bypasses the public internet entirely and connects through a network service provider.
• Internet Egress: Refers to outbound data traffic from Azure to the public internet. This is the default path for most Azure services unless configured otherwise.

3.6 Planning for User-Defined Routes (UDRs)

When working with Databricks deployments—especially VNet-injected workspaces—setting up user-defined routes (UDRs) is a best practice that helps manage and secure network traffic more effectively. By using UDRs, teams can steer traffic between Databricks components and external services in a controlled way, which not only strengthens security but also supports compliance efforts.

3.6.1 UDRs and Hub-and-Spoke Topology
If your Databricks workspace is deployed into your own virtual network (VNet), you'll need to configure standard user-defined routes (UDRs) to manage traffic flow. In a typical hub-and-spoke architecture, UDRs are used to route all traffic from the spoke VNets to the hub VNet.

3.6.2 Hub and Spoke with a Virtual WAN Hub
If your Databricks workspace is deployed into your own virtual network (VNet) and is peered to a Virtual WAN (VWAN) hub as the primary connectivity hub into Azure, a user-defined route (UDR) is not required—provided that a private traffic routing policy or internet traffic routing policy is configured in the VWAN hub.

3.6.3 Use of NVAs and Service Tags
As described in Section 3.2, assign a UDR for Databricks traffic with a next hop type of Network Virtual Appliance (NVA)—this could be an Azure Firewall, NAT Gateway, or another routing device—and route control plane traffic using Azure service tags (logical groupings of IP addresses for Azure services) with a next hop type of Internet. Because Azure IP ranges change frequently as new resources are provisioned, and manually maintaining IP lists is not practical, service tags ensure that your routing rules stay up to date automatically.

3.6.4 Default Outbound Access Retirement (Non-Serverless Compute)
Microsoft is retiring default outbound internet access for new deployments starting September 30, 2025. Going forward, outbound connectivity will require an explicit configuration using an NVA, NAT Gateway, Load Balancer, or public IP address. Note that using a public IP address in the deployment is discouraged for security purposes, and it is recommended to deploy the workspace with a Secure Cluster Connectivity configuration. A minimal route-table sketch for the NVA pattern described above follows.
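To make the UDR guidance in 3.6.3 and 3.6.4 concrete, here is a minimal sketch (not a production template) of a route table with the two routes discussed above, assuming the azure-identity and azure-mgmt-network Python packages. The subscription ID, resource group, region, route table name, and firewall IP are placeholders; adjust the next-hop design to your own topology before using anything like this.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholders: replace with your subscription, resource group, region, and firewall IP.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

route_table = client.route_tables.begin_create_or_update(
    "<resource-group>",
    "rt-databricks-spoke",
    {
        "location": "<region>",
        "routes": [
            {
                # Send general outbound traffic to the NVA (e.g. Azure Firewall) for inspection.
                "name": "default-via-firewall",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "<firewall-private-ip>",
            },
            {
                # Keep Databricks control-plane traffic on the service tag, routed to Internet.
                "name": "databricks-control-plane",
                "address_prefix": "AzureDatabricks",
                "next_hop_type": "Internet",
            },
        ],
    },
).result()

print("Route table provisioned:", route_table.name)
```

After creation, associate the route table with both the host and container subnets; in a Virtual WAN hub topology (3.6.2) with routing policies configured, this UDR is typically not needed.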