Azure AI Foundry vs. Azure Databricks – A Unified Approach to Enterprise Intelligence
Key Insights into Azure AI Foundry and Azure Databricks

- Complementary Powerhouses: Azure AI Foundry is purpose-built for generative AI application and agent development, focusing on model orchestration and rapid prototyping, while Azure Databricks excels in large-scale data engineering, analytics, and traditional machine learning, forming the data intelligence backbone.
- Seamless Integration for End-to-End AI: A native connector allows AI agents developed in Foundry to access real-time, governed data from Databricks, enabling contextual, data-grounded AI solutions. This integration supports a comprehensive AI lifecycle from data preparation to intelligent application deployment.
- Specialized Roles for Optimal Performance: Enterprises leverage Databricks for its robust data processing, lakehouse architecture, and ML model training capabilities, then use AI Foundry to deploy sophisticated generative AI applications and agents and manage their lifecycle, ensuring responsible AI practices and scalability.

In the rapidly evolving landscape of artificial intelligence, organizations seek robust platforms that can not only handle vast amounts of data but also enable the creation and deployment of intelligent applications. Microsoft Azure offers two powerful, yet distinct, services in this domain: Azure AI Foundry and Azure Databricks. While both contribute to an organization's AI capabilities, they serve different primary functions and are designed to complement each other in building comprehensive, enterprise-grade AI solutions.

Decoding the Core Purpose: Foundry for Generative AI, Databricks for Data Intelligence

At its heart, the distinction between Azure AI Foundry and Azure Databricks lies in their core objectives and the types of workloads each is optimized for. Understanding these fundamental differences is crucial for strategic deployment and for maximizing their combined potential.
Azure AI Foundry: The Epicenter for Generative AI and Agents

Azure AI Foundry is Microsoft's unified platform specifically engineered for the development, deployment, and management of generative AI applications and AI agents. It consolidates capabilities from the former Azure AI Studio and Azure OpenAI Studio. Its primary focus is accelerating the entire generative AI lifecycle, from initial prototyping to large-scale production deployments.

Key characteristics of Azure AI Foundry:

- Generative AI Focus: Foundry streamlines the development of applications built on large language models (LLMs) and customized generative AI solutions, including chatbots and conversational AI. It emphasizes prompt engineering, Retrieval-Augmented Generation (RAG), and agent orchestration.
- Extensive Model Catalog: It provides access to a catalog of over 11,000 foundation models from various publishers, including OpenAI, Meta (Llama 4), Mistral, and others. These models can be deployed via managed compute or serverless API deployments, offering flexibility and choice.
- Agentic Development: A significant strength of Foundry is its support for building sophisticated AI agents, including tools for grounding agents with knowledge, tool calling, comprehensive evaluations, tracing, monitoring, and guardrails to ensure responsible AI practices. Foundry Local further extends this by allowing offline and on-device development.
- Unified Development Environment: It offers a single management grouping for agents, models, and tools, promoting efficient development and consistent governance across AI projects.
- Enterprise Readiness: Built-in capabilities such as Role-Based Access Control (RBAC), observability, content safety, and project isolation ensure that AI applications are secure, compliant, and scalable for enterprise use.

Figure 1: Conceptual Architecture of Azure AI Foundry illustrating its various components for AI development and deployment.
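The RAG pattern mentioned above can be illustrated without any SDK: retrieve the most relevant piece of enterprise context, then ground the prompt in it before calling a model. The following toy Python sketch uses naive word-count cosine similarity for retrieval; it calls no Azure APIs, and all names in it are illustrative only.

```python
import math
from collections import Counter

# Toy corpus standing in for governed enterprise documents.
documents = [
    "Azure Databricks uses a lakehouse architecture built on Delta Lake.",
    "Unity Catalog governs data and AI assets with lineage and access control.",
    "Azure AI Foundry offers a catalog of foundation models for agents.",
]

def vectorize(text):
    # Bag-of-words vector; punctuation stripped per word.
    words = [w.strip("?.,!").lower() for w in text.split()]
    return Counter(w for w in words if w)

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, docs):
    # Pick the single most similar document as grounding context.
    q = vectorize(question)
    return max(docs, key=lambda d: cosine(q, vectorize(d)))

def build_prompt(question, docs):
    context = retrieve(question, docs)
    return f"Answer using only this context:\n{context}\nQuestion: {question}"

prompt = build_prompt("What governs data and AI assets?", documents)
```

In a real Foundry application the retrieval step would be backed by a service such as Azure AI Search or the Databricks connector, and the assembled prompt would be sent to a deployed foundation model; only the overall shape carries over.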
Azure Databricks: The Powerhouse for Data Engineering, Analytics, and Machine Learning

Azure Databricks, on the other hand, is an Apache Spark-based data intelligence platform optimized for large-scale data engineering, analytics, and traditional machine learning workloads. It acts as a collaborative workspace where data scientists, data engineers, and ML engineers process, analyze, and transform massive datasets and build and deploy diverse ML models.

Key characteristics of Azure Databricks:

- Unified Data Analytics Platform: Central to Databricks is its lakehouse architecture, built on Delta Lake, which unifies data warehousing and data lakes. This provides a single platform for data engineering, SQL analytics, and machine learning.
- Big Data Processing: Excelling at distributed computing, Databricks is ideal for processing large datasets, performing ETL (Extract, Transform, Load) operations, and running real-time analytics at scale.
- Comprehensive ML and AI Workflows: It offers a specialized environment for the full ML lifecycle, including data preparation, feature engineering, model training (both classic and deep learning), and model serving. Tools like MLflow are integrated for tracking, evaluating, and monitoring ML models.
- Data Intelligence Features: Databricks includes AI-assistive features such as Databricks Assistant and Databricks AI/BI Genie, which let users interact with their data using natural language queries to derive insights.
- Unified Governance with Unity Catalog: Unity Catalog provides a centralized governance solution for all data and AI assets within the lakehouse, ensuring data security, lineage tracking, and access control.

Figure 2: The Databricks Data Intelligence Platform with its unified approach to data, analytics, and AI.
The Symbiotic Relationship: Integration and Complementary Use Cases

While distinct in their primary functions, Azure AI Foundry and Azure Databricks are explicitly designed to work together, forming a powerful, integrated ecosystem for end-to-end AI development and deployment. This synergy is key to building advanced, data-driven AI solutions in the enterprise.

Seamless Integration for Enhanced AI Capabilities

The integration between the two platforms is a cornerstone of Microsoft's AI strategy, enabling AI agents and generative applications to be grounded in high-quality, governed enterprise data.

Key integration points:

- Native Databricks Connector in AI Foundry: A significant development in 2025 is the public preview of a native connector that allows AI agents built in Azure AI Foundry to directly query real-time, governed data from Azure Databricks. Foundry agents can leverage Databricks AI/BI Genie to surface data insights and even trigger Databricks Jobs, providing highly contextual, domain-aware responses.
- Data Grounding for AI Agents: This integration enables AI agents to access structured and unstructured data processed and stored in Databricks, providing the necessary context and knowledge base for more accurate and relevant generative AI outputs. All interactions are auditable within Databricks, maintaining governance and security.
- Model Crossover and Availability: Foundation models, such as the Llama 4 family, are made available across both platforms. Databricks DBRX models can also appear in the Foundry model catalog, allowing flexibility in where models are trained, deployed, and consumed.
- Unified Identity and Governance: Both platforms use Microsoft Entra ID for authentication and access control, and Unity Catalog provides unified governance for data and AI assets managed by Databricks, which Foundry agents can then respect.
Here's a breakdown of how a typical flow might look: Mindmap 1: Illustrates the complementary roles and integration points between Azure Databricks and Azure AI Foundry within an end-to-end AI solution. When to Use Which (and When to Use Both) Choosing between Azure AI Foundry and Azure Databricks, or deciding when to combine them, depends on the specific requirements of your AI project: Choose Azure AI Foundry When You Need To: Build and deploy production-grade generative AI applications and multi-agent systems. Access, evaluate, and benchmark a wide array of foundation models from various providers. Develop AI agents with sophisticated capabilities like tool calling, RAG, and contextual understanding. Implement enterprise-grade guardrails, tracing, monitoring, and content safety for AI applications. Rapidly prototype and iterate on generative AI solutions, including chatbots and copilots. Integrate AI agents deeply with Microsoft 365 and Copilot Studio. Choose Azure Databricks When You Need To: Perform large-scale data engineering, ETL, and data warehousing on a unified lakehouse. Build and train traditional machine learning models (supervised, unsupervised learning, deep learning) at scale. Manage and govern all data and AI assets centrally with Unity Catalog, ensuring data quality and lineage. Conduct complex data analytics, business intelligence (BI), and real-time data processing. Leverage AI-assistive tools like Databricks AI/BI Genie for natural language interaction with data. Require high-performance compute and auto-scaling for data-intensive workloads. Use Both for Comprehensive AI Solutions: The most powerful approach for many enterprises is to leverage both platforms. Azure Databricks can serve as the robust data backbone, handling data ingestion, processing, governance, and traditional ML model training. 
Azure AI Foundry then sits atop this foundation, consuming the prepared and governed data to build, deploy, and manage intelligent generative AI agents and applications. This allows for:

- Domain-Aware AI: Foundry agents are grounded in enterprise-specific data from Databricks, leading to more accurate, relevant, and trustworthy AI responses.
- End-to-End AI Lifecycle: Databricks manages the "data intelligence" part and Foundry handles the "generative AI application" part, covering the entire spectrum from raw data to intelligent user experience.
- Optimized Resource Utilization: Each platform focuses on what it does best, leading to more efficient resource allocation and specialized toolsets for different stages of the AI journey.

Comparative Analysis: Features and Capabilities

To further illustrate their distinct yet complementary nature, let's examine a detailed comparison of their features, capabilities, and typical user bases.

Radar Chart 1: This chart visually compares Azure AI Foundry and Azure Databricks across several key dimensions, illustrating their specialized strengths. Azure AI Foundry excels in generative AI and agent orchestration, while Azure Databricks dominates in data engineering, unified data governance, and traditional ML workflows.

A Detailed Feature Comparison

| Feature Category | Azure AI Foundry | Azure Databricks |
|---|---|---|
| Primary Focus | Generative AI application and agent development, model orchestration | Large-scale data engineering, analytics, traditional ML, and AI workflows |
| Data Handling | Connects to diverse data sources (e.g., Databricks, Azure AI Search) for grounding AI agents; not a primary data storage/processing platform | Native data lakehouse architecture (Delta Lake), optimized for big data processing, storage, and real-time analytics |
| AI/ML Capabilities | Foundation models (LLMs), prompt engineering, RAG, agent orchestration, model evaluation, content safety, responsible AI tooling | Traditional ML (supervised/unsupervised), deep learning, feature engineering, MLflow for lifecycle management, Databricks AI/BI Genie |
| Development Style | Low-code agent building, prompt flows, unified SDK/API, templates | Code-first (Python, SQL, Scala, R), notebooks, IDE integrations |
| Model Access & Deployment | Extensive model catalog (11,000+ models), serverless API, managed compute deployments, model benchmarking | Training and serving custom ML models, including deep learning; models deployable through MLflow |
| Governance & Security | Azure-based security and compliance, RBAC, project isolation, content safety guardrails, tracing, evaluations | Unity Catalog for unified data and AI governance, lineage tracking, access control, Entra ID integration |
| Key Users | AI developers, business analysts, citizen developers, AI app builders | Data scientists, data engineers, ML engineers, data analysts |
| Integration Points | Native connector to Databricks AI/BI Genie, Azure AI Search, Microsoft 365, Copilot Studio, Power Platform | Microsoft Fabric, Power BI, Azure AI Foundry, Azure Purview, Azure Monitor, Azure Key Vault |

Table 1: A comparative overview of the distinct features and functionalities of Azure AI Foundry and Azure Databricks.

Concluding Thoughts

In essence, Azure AI Foundry and Azure Databricks are not competing platforms but essential components of a unified, comprehensive AI strategy within the Azure ecosystem. Azure Databricks provides the robust, scalable foundation for all data engineering, analytics, and traditional machine learning workloads, acting as the "data intelligence platform." Azure AI Foundry then leverages this foundation to specialize in the rapid development, deployment, and operationalization of generative AI applications and intelligent agents. Together, they enable enterprises to unlock the full potential of AI, transforming raw data into powerful, domain-aware, and governed intelligent solutions.
Frequently Asked Questions (FAQ)

What is the main difference between Azure AI Foundry and Azure Databricks?
Azure AI Foundry is specialized for building, deploying, and managing generative AI applications and AI agents, focusing on model orchestration and prompt engineering. Azure Databricks is a data intelligence platform for large-scale data engineering, analytics, and traditional machine learning, built on a lakehouse architecture.

Can Azure AI Foundry and Azure Databricks be used together?
Yes, they are designed to work synergistically. Azure AI Foundry can use a native connector to access real-time, governed data from Azure Databricks, allowing AI agents to be grounded in enterprise data for more accurate and contextual responses.

Which platform should I choose for training large machine learning models?
For training large-scale traditional machine learning and deep learning models, Azure Databricks is generally the preferred choice due to its robust capabilities for data processing, feature engineering, and ML lifecycle management (MLflow). Azure AI Foundry focuses more on the deployment and orchestration of pre-trained foundation models and generative AI applications.

Does Azure AI Foundry replace Azure Machine Learning or Databricks?
No, Azure AI Foundry complements these services. It provides a specialized environment for generative AI and agent development, often integrating with data and models managed by Azure Databricks or Azure Machine Learning for comprehensive AI solutions.

How do these platforms handle data governance?
Azure Databricks uses Unity Catalog for unified data and AI governance, providing centralized control over data access and lineage. Azure AI Foundry integrates with Azure-based security and compliance features, ensuring responsible AI practices and data privacy within its generative AI applications.

Join the Fabric Partner Community for this Week's Fabric Engineering Connection calls!
Are you a Microsoft partner interested in data and analytics? Be sure to join us for the next Fabric Engineering Connection calls! 🎉 The Americas & EMEA call will take place Wednesday, October 22, from 8-9 am PT and will feature presentations from Teddy Bercovitz and Gerd Saurer on the Fabric Extend Workload Developer Kit, followed by a presentation on Data Protection Capabilities from Yael Biss. The APAC call is Thursday, October 23, from 1-2 am UTC/Wednesday, October 22, from 5-6 pm PT. Tamer Farag, Trilok Rajesh, and Shreya Ghosh will be presenting on Modernizing Legacy Analytics & BI Platforms. This is your opportunity to learn more, ask questions, and provide feedback. To join the call, you must be a member of the Fabric Partner Community Teams channel. To join, complete the participation form at https://aka.ms/JoinFabricPartnerCommunity. We look forward to seeing you later this week!

Join the Fabric Partner Community for this Week's Fabric Engineering Connection calls!
Are you a Microsoft partner that is interested in data and analytics? Be sure to join us for the next Fabric Engineering Connection calls! 🎉 Sujata Narayana will be sharing a recap of Power BI announcements from FabCon Europe, followed by the latest updates on AI Functions from Virginia Roman. The Americas & EMEA call will take place Wednesday, October 15, from 8-9 am PT and the APAC call is Thursday, October 16, from 1-2 am UTC/Wednesday, October 15, from 5-6 pm PT. This is your opportunity to learn more, ask questions, and provide feedback. To join the call, you must be a member of the Fabric Partner Community Teams channel. To join, complete the participation form at https://aka.ms/JoinFabricPartnerCommunity. We look forward to seeing you later this week!

Diagnose performance issues in Spark jobs through Spark UI
This guide walks you through how to use the Spark UI to diagnose performance issues in Spark jobs. It covers an overview of the Spark UI, the jobs timeline, diagnosing failing jobs and executors, memory issues, and identifying the longest stage of a long-running job.

Overview of Spark UI

A Spark job is composed of multiple stages, and each stage may contain more than one task. Tasks are distributed across executors.

Navigating to Spark UI

Navigate to your cluster's page, then click Spark UI.

Jobs Timeline

The jobs timeline is a great starting point for understanding your pipeline or query. It gives you an overview of what was running, how long each step took, and whether there were any failures along the way.

Opening the Jobs Timeline

In the Spark UI, click on the Jobs tab, then click on Event Timeline (highlighted in red in the screenshot). The example timeline shows the driver and executor 0 being added.

Failing Jobs or Executors

Failed jobs and removed executors are both indicated by a red status in the event timeline. Common reasons for executors being removed include:

- Autoscaling: expected behavior, not an error. See Enable autoscaling in the Compute configuration reference (Azure Databricks | Microsoft Learn).
- Spot instance losses: the cloud provider reclaiming your VMs. Learn more about Spot instances in the Azure documentation.
- Executors running out of memory.

Diagnosing Failing Jobs

Click on the failing job to access its page, then scroll down to see the failed stage and the failure reason. If you get a generic error, click on the link in the description to see if you can get more information. For memory issues, scroll down the page to see why each task failed.

Diagnosing Failing Executors

First, check the event log for explanations of executor failures, such as messages indicating cluster resizing (autoscaling) or spot instance loss. Then navigate to the Executors tab in the Spark UI, where you can get the logs from the failed executors.

Scenario: Memory Issues

Memory issues are a common cause of problems and require thorough investigation. Check the quality of your code for efficiency, and make sure your data is organized correctly, as data quality can affect memory usage. See Spark memory issues (Azure Databricks | Microsoft Learn).

Scenario: Long-Running Jobs – Identifying the Longest Stage

To identify the longest stage of the job, scroll to the bottom of the job's page, locate the list of stages, and order them by duration. The stage I/O details give a high-level overview of the data involved: Input, Output, Shuffle Read, and Shuffle Write. Identifying the number of tasks in the long stage also helps pinpoint the issue. If the stage has more than one task, investigate further: click on the link in the stage's description to get more information about the longest stage.

Conclusion

Data skew can impact performance by causing an uneven distribution of data across tasks. Incorrect spellings in data can also affect processing, so ensure data accuracy for optimal performance. To learn more, see Skew and spill (Azure Databricks | Microsoft Learn).

Join the Fabric Partner Community for this Week's Fabric Engineering Connection calls!
Are you a Microsoft partner that is interested in data and analytics? Be sure to join us for the next Fabric Engineering Connection calls! 🎉 Tom Peplow will be discussing OneLake Diagnostics and Sarab Dua will be joining to cover recent releases and the roadmap for network security. Both promise to be presentations you won't want to miss! The Americas & EMEA call will take place Wednesday, October 8, from 8-9 am PT and the APAC call is Thursday, October 9, from 1-2 am UTC/Wednesday, October 8, from 5-6 pm PT. This is your opportunity to learn more, ask questions, and provide feedback. To join the call, you must be a member of the Fabric Partner Community Teams channel. To join, complete the participation form at https://aka.ms/JoinFabricPartnerCommunity. We look forward to seeing you later this week!

Reducing SQL Connection Latency for Apps Using Azure AAD Authentication
Challenge: connection latency and token overhead

Consider a cloud-native application deployed in Azure App Service or Azure Kubernetes Service (AKS) that needs to query an Azure SQL Database for real-time data. The application uses Azure Active Directory (AAD) for secure authentication, but every time it establishes a new connection to the database, it requests a new AAD token. In high-traffic environments where thousands of requests are processed per second, this repetitive token issuance introduces latency and performance degradation. The delay is particularly problematic for time-sensitive applications where every millisecond counts: each token request impacts response times and creates unnecessary resource consumption.

Solution: token caching and expiration management

To mitigate these delays, we can optimize the authentication process by caching the AAD token and reusing it for the duration of its validity (typically 1 to 24 hours). Instead of requesting a new token for every database connection, a token is fetched only when the existing one is near expiration. This approach eliminates the repeated authentication overhead and ensures that the application can maintain seamless connectivity to the database without the performance hit of generating a new token for each request. It also reduces the number of HTTP calls made to the Azure Active Directory service, resulting in better resource utilization and lower operational costs.

Concrete performance gains: optimized SQL client connection

As part of the mitigation, we provide a custom code implementation that uses SqlClient, a supported library, to optimize the connection time. The test was conducted against an S0 database from a single process using connection pooling: we opened a connection, executed SELECT 1, and closed the connection.
During a 96-hour test run of the connection pooling script with the AAD token cache in place, the following results were observed:

- 10 connections took 1 second, representing 0.866% of total connections.
- 1 connection took 4 seconds, representing 0.0866%.
- 1,144 connections took less than 1 second, representing 99.05% of total connections.
- All executions of SELECT 1 completed in 0 seconds.

These results demonstrate how caching and reusing AAD tokens effectively reduced connection overhead and improved performance. None of the connections exceeded 5 seconds in duration, while with the default behavior (a new token per connection), connections were reaching 30 seconds or more, depending on the environment complexity.

Step-by-step implementation

Here's a step-by-step guide to implementing this solution using C# and the Microsoft.Data.SqlClient package to optimize SQL database connections:

1. Obtain and cache a token. Instead of requesting a new AAD token with every connection, obtain a token once and cache it. The application authenticates with Azure Managed Identity, which eliminates the need to repeatedly authenticate with Azure Active Directory for every database connection. The token is fetched once and stored securely for reuse.
2. Renew the token only when it's near expiry. The application checks the token's expiration time before attempting to use it. If the token is still valid, it continues to be reused; if it is close to expiration, a new token is fetched.
3. Reuse a single token across multiple connections. The cached token can be used for multiple database connections during its lifetime. Rather than requesting a new token for each new connection, the application uses the same token across all connections until the token is about to expire.
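The three steps above reduce to a small amount of state plus one expiry comparison. As an illustration only, here is a minimal Python sketch of the same caching strategy; the class names are hypothetical and fake_fetch is a stand-in for a real credential call such as Managed Identity:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class CachedToken:
    value: str
    expires_at: datetime

class TokenCache:
    def __init__(self, fetch_token, buffer=timedelta(minutes=5)):
        self._fetch_token = fetch_token   # callable returning (token, expires_at)
        self._buffer = buffer             # refresh this long before expiry
        self._cached = None
        self.fetch_count = 0              # observability for this sketch

    def get(self, now=None):
        now = now or datetime.now(timezone.utc)
        # Reuse the cached token unless it is missing or near expiry.
        if self._cached is None or now + self._buffer >= self._cached.expires_at:
            value, expires_at = self._fetch_token()
            self._cached = CachedToken(value, expires_at)
            self.fetch_count += 1
        return self._cached.value

# Fake fetcher standing in for ManagedIdentityCredential: tokens valid 1 hour.
issued = []
def fake_fetch():
    expires = datetime.now(timezone.utc) + timedelta(hours=1)
    token = f"token-{len(issued)}"
    issued.append(token)
    return token, expires

cache = TokenCache(fake_fetch)
t1 = cache.get()
t2 = cache.get()  # reused from cache, no second fetch
```

The 5-minute buffer mirrors the expiry check in the full C# implementation that follows; widen it if token acquisition in your environment is slow.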
Code example: optimized SQL connection management

Here's an example of how you can implement token caching in a C# application using Microsoft.Data.SqlClient.

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using Azure.Identity;
using Microsoft.Data.SqlClient;

namespace SqlConnectionOptimization
{
    public class SqlConnectionManager
    {
        private string _accessToken;
        private DateTimeOffset _tokenExpiration;
        private readonly string _connectionString = "Server=tcp:servername.database.windows.net,1433;Initial Catalog=DBName;...";
        private readonly Stopwatch _stopwatch = new Stopwatch();

        public SqlConnectionManager()
        {
            _accessToken = string.Empty;
            _tokenExpiration = DateTimeOffset.UtcNow;
        }

        public void Run()
        {
            while (true)
            {
                // Refresh token if necessary
                if (IsTokenExpired())
                {
                    RefreshToken();
                }

                // Establish connection and perform operations
                using (var connection = CreateConnection())
                {
                    LogExecutionTime("Connected");
                    ExecuteQuery(connection);
                    LogExecutionTime("Query Executed");
                }

                // Simulate some idle time between operations
                Log("Waiting before next operation...");
                Thread.Sleep(1000);
            }
        }

        private bool IsTokenExpired()
        {
            // Refresh when missing, or within 5 minutes of expiry
            return string.IsNullOrEmpty(_accessToken) ||
                   DateTimeOffset.UtcNow.AddMinutes(5) >= _tokenExpiration;
        }

        private void RefreshToken()
        {
            _stopwatch.Start();
            try
            {
                var result = FetchAccessToken();
                _accessToken = result.Token;
                _tokenExpiration = result.Expiration;
                LogExecutionTime("Token Refreshed");
                Log($"Token expires at: {_tokenExpiration}");
            }
            catch (Exception ex)
            {
                Log($"Error fetching token: {ex.Message}");
            }
        }

        private (string Token, DateTimeOffset Expiration) FetchAccessToken()
        {
            var managedIdentityCredential = new ManagedIdentityCredential();
            var tokenRequestContext = new Azure.Core.TokenRequestContext(new[] { "https://database.windows.net/" });
            var accessToken = managedIdentityCredential.GetTokenAsync(tokenRequestContext).Result;
            return (accessToken.Token, accessToken.ExpiresOn.UtcDateTime);
        }

        private SqlConnection CreateConnection()
        {
            var connection = new SqlConnection(_connectionString) { AccessToken = _accessToken };
            int retries = 0;
            while (true)
            {
                try
                {
                    connection.Open();
                    return connection;
                }
                catch (Exception ex)
                {
                    retries++;
                    if (retries > 5)
                    {
                        Log($"Error connecting after multiple retries: {ex.Message}");
                        throw;
                    }
                    Log($"Connection attempt failed. Retrying in {retries} seconds...");
                    Thread.Sleep(retries * 1000);
                }
            }
        }

        private void ExecuteQuery(SqlConnection connection)
        {
            var query = "SELECT 1"; // Simple query, replace with real logic as needed
            int retries = 0;
            while (true)
            {
                try
                {
                    using (var command = new SqlCommand(query, connection))
                    {
                        command.CommandTimeout = 5; // Adjust timeout for more complex queries
                        command.ExecuteScalar();
                    }
                    return;
                }
                catch (Exception ex)
                {
                    retries++;
                    if (retries > 5)
                    {
                        Log($"Max retries reached for query execution: {ex.Message}");
                        throw;
                    }
                    Log($"Query execution failed. Retrying in {retries} seconds...");
                    Thread.Sleep(retries * 1000);
                }
            }
        }

        private void Log(string message)
        {
            Console.WriteLine($"{DateTime.Now:yyyy-MM-dd HH:mm:ss.fff}: {message}");
        }

        private void LogExecutionTime(string action)
        {
            _stopwatch.Stop();
            var elapsed = _stopwatch.Elapsed;
            Log($"{action} - Elapsed time: {elapsed:hh\\:mm\\:ss\\.fff}");
            _stopwatch.Reset();
        }

        public static void Main(string[] args)
        {
            var manager = new SqlConnectionManager();
            manager.Run();
        }
    }
}
```

Key points in the code

- Token Expiration Check: The IsTokenExpired() method checks whether the token has expired by comparing its expiry to the current time, with a 5-minute buffer that can be adjusted based on your needs.
- Managed Identity Authentication: The application uses Azure Managed Identity to authenticate and fetch the token, ensuring secure and scalable access to Azure SQL Database without requiring client secrets.
- Retry Logic: In the event of a connection or query execution failure, the system retries a set number of times with an increasing backoff, making it resilient to transient network or authentication issues.

Conclusion

By implementing a token caching and expiration management strategy, applications can dramatically improve the performance and scalability of their database interactions, especially in environments with high request volumes. Leveraging Azure Managed Identity for secure, reusable tokens reduces authentication latency and improves the overall efficiency of your SQL database connections. This approach can also be adapted to any service that uses Azure SQL Database and Azure Active Directory for authentication.

Next steps

- Benchmarking: Test the implementation in your environment to quantify the performance gains.
- Error Handling: Extend the retry logic and error handling to better handle transient failures, especially in production environments. Resources: Introducing Configurable Retry Logic in Microsoft.Data.SqlClient v3.0.0-Preview1; Configurable retry logic in SqlClient; Troubleshoot transient connection errors.
- Scaling: Consider how this strategy can be applied across multiple services in larger architectures, and consider reading and applying managed identity best practices. Resources: Managed identity best practice recommendations.

Join the Fabric Partner Community for this Week's Fabric Engineering Connection calls!
Are you a Microsoft partner that is interested in data and analytics? Be sure to join us for the next Fabric Engineering Connection calls! 🎉 Miguel Llopis and Mark Kromer will be providing a recap of the Data Factory announcements made during FabCon Europe, followed by Ambika J. presenting on Data Recovery Features in Fabric DW. The Americas & EMEA call will take place Wednesday, October 1, from 8-9 am PT and the APAC call is Thursday, October 2, from 1-2 am UTC/Wednesday, October 1, from 5-6 pm PT. This is your opportunity to learn more, ask questions, and provide feedback. To join the call, you must be a member of the Fabric Partner Community Teams channel. To join, complete the participation form at https://aka.ms/JoinFabricPartnerCommunity. We look forward to seeing you later this week!

Partner Know Before You Go to the 2025 European Microsoft Fabric Community Conference!
We can't wait to see you at FabCon Europe 2025, taking place 15-18 September at the Austria Center Vienna in Vienna, Austria! With more than 130 expert-led sessions over three days, plus workshops on 15 September, this is the largest Microsoft tech conference in Europe! Our team has been hard at work planning several partner-exclusive activities throughout the event, to help ensure the best experience possible for you, our valued partners. This Know Before You Go guide will provide all the details on how to participate in:

- Partner Day
- Partner Happy Hour
- 1:1 Meetings
- Partner Booth at Ask the Experts
- Partner AMA
- Partner Photo Scavenger Hunt
- Cvent Event App

If you have any questions, please feel free to reach out to our team at FabricPartnersTeam@microsoft.com or through the Cvent app. See you very soon! Complete details and the Photo Scavenger Hunt entry form are available at https://aka.ms/FabConEuropePartnerPhotoHunt.

Join the Fabric Partner Community for an AMA with Arun Ulag!
🚨 Mark your calendars now! 📅 New this year in the Fabric Partner Community: an AMA (Ask Me Anything) call series with members of the Fabric Leadership Team! 🥳 Arun Ulag, CVP of Azure Data, will kick off this new call series Thursday, September 25, from 8-9 am PT. You will not want to miss this opportunity to ask all your questions, including those related to the announcements made at #FabConEurope, provide your feedback, and more! 👏 To join these calls, you must be a member of the Fabric Partner Community Teams channel. Not yet part of the Fabric Partner Community? Join now by submitting the form at https://aka.ms/JoinFabricPartnerCommunity.