sqlserverazurevm
Announcement: Upcoming Changes to SQL Server on Linux Virtual Machine (VM) Provisioning in Azure
We’re making an important update to how customers provision SQL Server on Linux virtual machines (VMs) in Azure.

What’s Changing?
Starting soon, Linux-based SQL Server Virtual Machine (VM) images published by Microsoft will be removed from the Azure Marketplace. As a result, these SQL Server on Linux images will no longer be visible in the Azure SQL hub during VM provisioning, nor accessible via the Azure portal, CLI, or PowerShell scripts. This change is part of our broader effort to simplify and modernize the provisioning experience for SQL Server on Linux in Azure.

Why Are We Making This Change?
We’re transitioning away from image-based provisioning to a script-based model that offers greater flexibility, automation, and control. This approach will allow customers to:
- Choose their preferred supported Linux distribution (RHEL, SLES, or Ubuntu (Pro))
- Select the SQL Server version and edition
- Configure licensing options
- Customize deployment parameters through scripts and add VM extensions

This shift ensures a more consistent and extensible experience across all supported platforms.

When Will This Happen?
The deprecation of the Linux VM images will begin shortly and will be completed over the next couple of months. During this transition, customers may notice that the SQL Server on Linux Azure Marketplace image listings are no longer available.

What Should You Do?
Azure Virtual Machines that were deployed in the past using the SQL Server on Linux Azure Marketplace images will continue to work. If you’re planning to deploy new SQL Server on Linux Azure Virtual Machines, manual installation is recommended during this transition period: start by creating a Linux virtual machine using the Azure portal, CLI, or PowerShell, and once the VM is provisioned, follow the official SQL Server installation documentation to complete the setup.

VM Creation Guidance: Refer to this guide for step-by-step instructions on creating an Azure Linux-based virtual machine: https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal

Choosing a Linux Distribution: Select the distribution that best fits your requirements. For a list of endorsed Linux distributions on Azure, see: Linux distributions endorsed on Azure - Azure Virtual Machines | Microsoft Learn

Please note, SQL Server is officially supported only on the following Linux distributions. Based on the distribution you choose, refer to the corresponding documentation for SQL Server installation guidance:
- Red Hat Enterprise Linux (RHEL)
- SUSE Linux Enterprise Server (SLES)
- Ubuntu

For more details on supported distributions, refer to:
- SQL Server 2025 - Supported Linux distributions
- SQL Server 2022 - Supported Linux distributions

A new script-based provisioning experience is coming soon - stay tuned for announcements. We’ll continue to share updates through the Azure portal, documentation, and this blog.

Managed Identity support for Azure Key Vault in SQL Server running on Linux
We are happy to announce that you can now use Managed Identity to authenticate to Azure Key Vault from SQL Server running on an Azure VM (Linux), available from SQL Server 2022 CU18 onwards. This blog will walk you through the process of using a user-assigned managed identity to access Azure Key Vault and configure Transparent Data Encryption (TDE) for a SQL database.

Managed Identity:
Microsoft Entra ID, formerly Azure Active Directory, provides an automatically managed identity to authenticate to any Azure service that supports Microsoft Entra authentication, such as Azure Key Vault, without exposing credentials in code. Refer to Managed identities for Azure resources - Managed identities for Azure resources | Microsoft Learn for more details.

VM Setup and Prerequisites:
Before diving into the setup, it's essential to ensure that your Azure Linux VM has SQL Server installed and that the VM has identities assigned with the necessary key vault permissions.

1. Set up SQL Server running on an Azure Linux VM. Refer to SQL Server on RHEL VM in Azure: RHEL: Install SQL Server on Linux - SQL Server | Microsoft Learn, SQL Server on SLES VM in Azure: SUSE: Install SQL Server on Linux - SQL Server | Microsoft Learn, or SQL Server on Ubuntu VM in Azure: Ubuntu: Install SQL Server on Linux - SQL Server | Microsoft Learn for more details.
2. Create a user-assigned managed identity. Refer to https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal for more details.
3. Go to the Azure Linux VM resource in the Azure portal and select the Identity tab under the Security blade. Go to the User assigned tab in the right-side panel and select Add. Select the user-assigned managed identity and then select Add.
4. Create a key vault and keys. Refer to Integrate Key Vault with SQL Server on Windows VMs in Azure (Resource Manager) - SQL Server on Azure VMs | Microsoft Learn for more details.
5. Assign the Key Vault Crypto Service Encryption User role to the user-assigned managed identity so it can perform wrap and unwrap operations. Go to the key vault resource that you created and select the Access control (IAM) setting. Select Add > Add role assignment. Search for Key Vault Crypto Service Encryption User and select the role. Select Next. On the Members tab, choose the Managed identity option, click Select members, and then search for the user-assigned managed identity that you created earlier. Select the managed identity and then click the Select button.

Setting the primary identity on the Azure Linux VM
To set the managed identity as the primary identity for the Azure Linux VM, you can use the mssql-conf tool packaged with SQL Server. Here are the steps:

1. Set the client ID of the managed identity:
sudo /opt/mssql/bin/mssql-conf set network.aadmsiclientid <client id of the managed identity>

2. Set the tenant ID:
sudo /opt/mssql/bin/mssql-conf set network.aadprimarytenant <tenant id>

3. Restart SQL Server:
sudo systemctl restart mssql-server

Enable TDE using EKM and managed identity:
Refer to Managed Identity Support for Extensible Key Management (EKM) with Azure Key Vault (AKV) - SQL Server on Azure VMs | Microsoft Learn for the configuration steps on an Azure Windows VM. These steps remain the same for SQL Server running on an Azure Linux VM.

1. Enable EKM in SQL Server running on the Azure VM.
2. Create the credential and encrypt the database. When using the CREATE CREDENTIAL command in this context, you only need to provide 'Managed Identity' in the IDENTITY argument. Unlike earlier scenarios, you do not need to include a SECRET argument. This simplifies the process and enhances security by not requiring a secret to be passed. A sketch of this step follows below.
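As a rough illustration of step 2, here is a minimal T-SQL sketch, not the authoritative procedure. It assumes the cryptographic provider registered in step 1 is named AzureKeyVault_EKM, and the key vault URL, key name, login, and database names are placeholders; confirm the exact credential naming and the full sequence against the linked article.

-- Create a credential that authenticates to Azure Key Vault via the VM's managed identity.
-- No SECRET argument is required in this scenario.
USE master;

CREATE CREDENTIAL [https://contoso-keyvault.vault.azure.net/]   -- placeholder key vault URL
    WITH IDENTITY = 'Managed Identity'
    FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM;               -- provider registered in step 1

ALTER LOGIN [MySetupLogin]                                      -- placeholder login performing the setup
    ADD CREDENTIAL [https://contoso-keyvault.vault.azure.net/];

-- Import the existing key vault key as an asymmetric key in master.
CREATE ASYMMETRIC KEY TDE_AKV_Key
    FROM PROVIDER AzureKeyVault_EKM
    WITH PROVIDER_KEY_NAME = 'TDEKey',                          -- placeholder key name in the vault
         CREATION_DISPOSITION = OPEN_EXISTING;

-- Note: the full walkthrough also creates a login from this asymmetric key and attaches the
-- credential to it so the Database Engine can access the vault; see the linked article.

-- Create the database encryption key and turn on TDE.
USE MyUserDatabase;                                             -- placeholder database
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER ASYMMETRIC KEY TDE_AKV_Key;

ALTER DATABASE MyUserDatabase SET ENCRYPTION ON;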
Conclusion:
Using managed identity to access Azure Key Vault in SQL Server running on an Azure Linux VM boosts security, streamlines key management, and supports compliance. With data protection being paramount, Azure Key Vault's integration with managed identity offers a robust solution. Stay tuned for more insights on SQL Server on Linux!

Official Documentation:
- Managed Identity Support for Extensible Key Management (EKM) with Azure Key Vault (AKV) - SQL Server on Azure VMs | Microsoft Learn
- Extensible Key Management using Azure Key Vault - SQL Server
- Setup Steps for Extensible Key Management Using the Azure Key Vault
- Azure Key Vault Integration for SQL Server on Azure VMs

Smarter Parallelism: Degree of parallelism feedback in SQL Server 2025
🚀 Introduction
With SQL Server 2025, we have made degree of parallelism (DOP) feedback an on-by-default feature. Originally introduced in SQL Server 2022, DOP feedback is now a core part of the platform's self-tuning capabilities, helping workloads scale more efficiently without manual tuning. The feature works with database compatibility level 160 or higher.

⚙️ What Is DOP feedback?
DOP feedback is part of the Intelligent Query Processing (IQP) family of features. It dynamically adjusts the number of threads (the DOP) used by a query based on runtime performance metrics like CPU time and elapsed time. If a query that has generated a parallel plan consistently underperforms due to excessive parallelism, DOP feedback reduces the DOP for future executions without requiring recompilation. Currently, DOP feedback only recommends reductions to the degree of parallelism, on a per-query-plan basis. The Query Store must be enabled for every database where DOP feedback is used, and it must be in a "read write" state.

This feedback loop is:
- Persistent: Stored in Query Store. Persistence is not currently available for Query Store on readable secondaries. This is subject to change in the near future, and we'll provide an update on its status after that occurs.
- Adaptive: Adjusts a query's DOP, monitors those adjustments, and reverts any changes to a previous DOP if performance regresses. This part of the system relies on Query Store being enabled, because it uses the runtime statistics captured within the Query Store.
- Scoped: Controlled via the DOP_FEEDBACK database-scoped configuration, or at the individual query level with the DISABLE_DOP_FEEDBACK query hint.

🧪 How It Works
1. Initial Execution: SQL Server compiles and executes a query with a default or manually set DOP.
2. Monitoring: Runtime stats are collected and compared across executions.
3. Adjustment: If inefficiencies are detected, the DOP is lowered (to a minimum of 2).
4. Validation: If performance improves and is stable, the new DOP is persisted. If not, the DOP recommendation is reverted to the previously known good DOP setting, which is typically the original setting that the feature used as a baseline.

At the end of the validation period, any feedback that has been persisted, regardless of its state (stabilized, reverted, no recommendation, and so on), can be viewed by querying the sys.query_store_plan_feedback system catalog view:

SELECT
    qspf.feature_desc,
    qsq.query_id,
    qsp.plan_id,
    qspf.plan_feedback_id,
    qsqt.query_sql_text,
    qsp.query_plan,
    qspf.state_desc,
    qspf.feedback_data,
    qspf.create_time,
    qspf.last_updated_time
FROM sys.query_store_query AS qsq
INNER JOIN sys.query_store_plan AS qsp
    ON qsp.query_id = qsq.query_id
INNER JOIN sys.query_store_query_text AS qsqt
    ON qsqt.query_text_id = qsq.query_text_id
INNER JOIN sys.query_store_plan_feedback AS qspf
    ON qspf.plan_id = qsp.plan_id
WHERE qspf.feature_id = 3;

🆕 What's New in SQL Server 2025?
- Enabled by Default: No need to toggle the database-scoped configuration on; DOP feedback is active out of the box (a sketch of the opt-out options follows below).
- Improved Stability: Enhanced validation logic ensures fewer regressions.
- Better Integration: Works seamlessly with other IQP features like memory grant feedback, cardinality estimation feedback, and Parameter Sensitive Plan (PSP) optimization.
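Although the feature is on by default, the database-scoped configuration and query hint mentioned above give you explicit control. A minimal sketch, assuming a database already at compatibility level 160 or higher; the table name in the last query is hypothetical and only there for illustration:

-- Check the current setting of the database-scoped configuration (ON by default in SQL Server 2025).
SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = 'DOP_FEEDBACK';

-- Opt the whole database out of DOP feedback, or re-enable it.
ALTER DATABASE SCOPED CONFIGURATION SET DOP_FEEDBACK = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET DOP_FEEDBACK = ON;

-- Opt a single query out of DOP feedback with the query hint.
SELECT COUNT_BIG(*)
FROM dbo.SalesOrderDetail AS sod   -- hypothetical table
OPTION (USE HINT ('DISABLE_DOP_FEEDBACK'));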
📊 Visualizing the Feedback Loop

🧩 How can I see if DOP feedback is something that would be beneficial for me?
Without setting up an Extended Events session for deeper analysis, looking over some of the data in the Query Store can be useful in determining whether DOP feedback would find queries interesting enough for it to engage. At a minimum, DOP feedback is likely to be relevant if your SQL Server instance is operating with parallelism enabled and:
- has a MAXDOP value of 0 (not generally recommended) or a MAXDOP value greater than 2,
- you observe multiple queries with execution runtimes of 10 seconds or more and a degree of parallelism of 4 or greater,
- and those queries have an execution count of 15 or more according to the output from the query below.

SELECT TOP 20
    qsq.query_id,
    qsrs.plan_id,
    [replica_type] = CASE
        WHEN replica_group_id = '1' THEN 'PRIMARY'
        WHEN replica_group_id = '2' THEN 'SECONDARY'
        WHEN replica_group_id = '3' THEN 'GEO SECONDARY'
        WHEN replica_group_id = '4' THEN 'GEO HA SECONDARY'
        ELSE TRY_CONVERT(NVARCHAR(200), qsrs.replica_group_id)
    END,
    AVG(qsrs.avg_dop) AS dop,
    SUM(qsrs.count_executions) AS execution_count,
    AVG(qsrs.avg_duration) / 1000000.0 AS duration_in_seconds,
    MIN(qsrs.min_duration) / 1000000.0 AS min_duration_in_seconds
FROM sys.query_store_runtime_stats qsrs
INNER JOIN sys.query_store_plan qsp
    ON qsp.plan_id = qsrs.plan_id
INNER JOIN sys.query_store_query qsq
    ON qsq.query_id = qsp.query_id
GROUP BY qsrs.plan_id, qsq.query_id, qsrs.replica_group_id
ORDER BY dop DESC, execution_count DESC;

🧠 Behind the Scenes: How Feedback Is Evaluated
DOP feedback uses a rolling window of recent executions (typically 15) to evaluate:
- Average CPU time
- Standard deviation of CPU time
- Adjusted elapsed time*
- Stability of performance across executions

If the adjusted DOP consistently improves efficiency without regressing performance, it is persisted. Otherwise, the system reverts to the last known good configuration (also known as the system's default DOP). As an example, suppose a query started out with a DOP of 8 and DOP feedback determined that a DOP of 4 was optimal; if, over the rolling window and while the query is in the validation phase, the query's performance varies more than expected, DOP feedback will undo its change and set the query back to a DOP of 8.

🧠 Note: The adjusted elapsed time intentionally excludes wait statistics that are not relevant to parallelism efficiency. This includes ignoring buffer latch, buffer I/O, and network I/O waits, which are external to parallel query execution. This ensures that feedback decisions are based solely on CPU and execution efficiency, not external factors like I/O or network latency.

🧭 Best Practices
Enable Query Store: This is required for DOP feedback to function. A minimal sketch follows below.
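As a reference for that prerequisite, here is a minimal sketch of enabling the Query Store on a user database. The database name is a placeholder and the options shown are illustrative rather than prescriptive:

-- Enable the Query Store (required for DOP feedback) on a user database.
ALTER DATABASE [MyUserDatabase]                   -- placeholder database name
SET QUERY_STORE = ON
    (
        OPERATION_MODE = READ_WRITE,              -- DOP feedback needs a read-write Query Store
        QUERY_CAPTURE_MODE = AUTO
    );

-- Confirm the actual state of the Query Store for the current database.
SELECT actual_state_desc, readonly_reason
FROM sys.database_query_store_options;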
Monitor DOP feedback extended events: SQL Server provides a set of extended events to help you monitor and troubleshoot the DOP feedback lifecycle. Below is a sample script to create a session that captures the key events, followed by a breakdown of what each event means.

IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'dop_xevents')
    DROP EVENT SESSION [dop_xevents] ON SERVER;
GO

CREATE EVENT SESSION [dop_xevents] ON SERVER
    ADD EVENT sqlserver.dop_feedback_analysis_stopped,
    ADD EVENT sqlserver.dop_feedback_eligible_query,
    ADD EVENT sqlserver.dop_feedback_provided,
    ADD EVENT sqlserver.dop_feedback_reassessment_failed,
    ADD EVENT sqlserver.dop_feedback_reverted,
    ADD EVENT sqlserver.dop_feedback_stabilized
    -- ADD EVENT sqlserver.dop_feedback_validation
WITH (
    MAX_MEMORY = 4096 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 30 SECONDS,
    MAX_EVENT_SIZE = 0 KB,
    MEMORY_PARTITION_MODE = NONE,
    TRACK_CAUSALITY = OFF,
    STARTUP_STATE = OFF
);

⚠️ Note: The extended event that has been commented out (dop_feedback_validation) is part of the debug channel. Enabling it may introduce additional overhead and should be used with caution in production environments.

📋 DOP Feedback Extended Events Reference
- dop_feedback_eligible_query: Fired when a query plan becomes eligible for DOP feedback. Captures initial runtime stats like CPU time and adjusted elapsed time.
- dop_feedback_analysis_stopped: Indicates that SQL Server has stopped analyzing a query for DOP feedback. Reasons include high variance in the stats or that the optimal DOP has already been achieved.
- dop_feedback_provided: Fired when SQL Server provides a new DOP recommendation for a query. Includes baseline and feedback stats.
- dop_feedback_reassessment_failed: Indicates that a previously persisted feedback DOP was reassessed and found to be invalid, restarting the feedback cycle.
- dop_feedback_reverted: Fired when feedback is rolled back due to performance regression. Includes baseline and feedback stats.
- dop_feedback_stabilized: Indicates that feedback has been validated and stabilized. After stabilization, additional adjustments to the feedback can be made when the system reassesses the feedback on a periodic basis.

🔍 Understanding the feedback_data JSON in DOP feedback
In the "How It Works" section of this article, we provided a sample script that shows some of the data persisted in the sys.query_store_plan_feedback catalog view. When DOP feedback stabilizes, SQL Server stores a JSON payload in the feedback_data column of that view, and interpreting that data can sometimes be challenging. Structurally, the feedback_data field contains a JSON object with two main sections, LastGoodFeedback and BaselineStats. As an example:

{
  "LastGoodFeedback": {
    "dop": "2",
    "avg_cpu_time_ms": "12401",
    "avg_adj_elapsed_time_ms": "12056",
    "std_cpu_time_ms": "380",
    "std_adj_elapsed_time_ms": "342"
  },
  "BaselineStats": {
    "dop": "4",
    "avg_cpu_time_ms": "17843",
    "avg_adj_elapsed_time_ms": "13468",
    "std_cpu_time_ms": "333",
    "std_adj_elapsed_time_ms": "328"
  }
}

LastGoodFeedback fields:
- dop: The DOP value that was validated and stabilized for future executions.
- avg_cpu_time_ms: Average CPU time (in milliseconds) for executions using the feedback DOP.
- avg_adj_elapsed_time_ms: Adjusted elapsed time (in milliseconds), excluding irrelevant waits.
- std_cpu_time_ms: Standard deviation of CPU time across executions.
- std_adj_elapsed_time_ms: Standard deviation of adjusted elapsed time.

BaselineStats fields:
- dop: The original DOP used before feedback was applied.
- avg_cpu_time_ms: Average CPU time for the baseline executions.
- avg_adj_elapsed_time_ms: Adjusted elapsed time for the baseline executions.
- std_cpu_time_ms: Standard deviation of CPU time for the baseline.
- std_adj_elapsed_time_ms: Standard deviation of adjusted elapsed time for the baseline.
One way to extract this data is to use the JSON_VALUE function:

SELECT
    qspf.plan_id,
    qs.query_id,
    qt.query_sql_text,
    qsp.query_plan_hash,
    qspf.feature_desc,
    -- LastGoodFeedback metrics
    JSON_VALUE(qspf.feedback_data, '$.LastGoodFeedback.dop') AS last_good_dop,
    JSON_VALUE(qspf.feedback_data, '$.LastGoodFeedback.avg_cpu_time_ms') AS last_good_avg_cpu_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.LastGoodFeedback.avg_adj_elapsed_time_ms') AS last_good_avg_adj_elapsed_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.LastGoodFeedback.std_cpu_time_ms') AS last_good_std_cpu_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.LastGoodFeedback.std_adj_elapsed_time_ms') AS last_good_std_adj_elapsed_time_ms,
    -- BaselineStats metrics
    JSON_VALUE(qspf.feedback_data, '$.BaselineStats.dop') AS baseline_dop,
    JSON_VALUE(qspf.feedback_data, '$.BaselineStats.avg_cpu_time_ms') AS baseline_avg_cpu_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.BaselineStats.avg_adj_elapsed_time_ms') AS baseline_avg_adj_elapsed_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.BaselineStats.std_cpu_time_ms') AS baseline_std_cpu_time_ms,
    JSON_VALUE(qspf.feedback_data, '$.BaselineStats.std_adj_elapsed_time_ms') AS baseline_std_adj_elapsed_time_ms
FROM sys.query_store_plan_feedback AS qspf
JOIN sys.query_store_plan AS qsp
    ON qspf.plan_id = qsp.plan_id
JOIN sys.query_store_query AS qs
    ON qsp.query_id = qs.query_id
JOIN sys.query_store_query_text AS qt
    ON qs.query_text_id = qt.query_text_id
WHERE qspf.feature_desc = 'DOP Feedback'
    AND ISJSON(qspf.feedback_data) = 1;

🧪 Why This Matters
This JSON structure is critical for:
- Debugging regressions: You can compare baseline and feedback statistics to understand whether a change in DOP helped or hurt a set of queries.
- Telemetry and tuning: Tools can parse this JSON payload to surface insights in performance dashboards.
- Transparency: It gives anyone responsible for the database visibility into how SQL Server is adapting to their workload.

📚 Learn More
- Intelligent Query Processing: degree of parallelism feedback
- Degree of parallelism (DOP) feedback
- Intelligent query processing in SQL databases
- Microsoft SQL Server

What's new in SQL Server 2025 CTP 2.1: Building momentum from public preview
During Microsoft Build, we announced the public preview of SQL Server 2025 (https://aka.ms/sqlserver2025), marking a significant advancement in our efforts to deliver an AI-ready enterprise database platform with superior security, performance, and availability. We are pleased to announce Community Technology Preview (CTP) 2.1. This update builds on the momentum from #MSBuild and brings new features and enhancements designed to help customers unlock more value from their data, simplify operations, and strengthen security.

Efficient Vector Data & Indexing
- Addressed a few limitations of the vector data type and functions for streamlined usage.
- Significantly improved vector index build performance.
- Transmit vectors efficiently in binary format to reduce payload size and enhance AI workload performance, using the updated TDS protocol and updated drivers.
- Added the sys.vector_indexes catalog view for querying vector indexes.
- Creating a vector index no longer locks the table with a SCH-M lock, allowing full read access during indexing.
- Embedding with Auto-Retry: A new built-in mechanism automatically retries the embedding call if it fails due to temporary HTTP errors (like timeouts or service unavailability).

Secure by default
SQL Server 2025 modernizes and secures internal communications across all components. In this release we extended TDS 8.0 and Transport Layer Security (TLS) 1.3 support to SQL Writer, the PolyBase service, and SQL CEIP (our telemetry service). See What's new in SQL Server 2025 security.

Tempdb enhancements
Tempdb space resource governance now supports percent-based limits. Resource governance can now be defined using percentage-based thresholds of the maximum tempdb space, making it easier to scale policies across different hardware configurations.

Immutable Storage for Azure Blob Backups
Backups to Azure Blob Storage now support immutable storage, which prevents tampering or deletion for a defined retention period, making it ideal for compliance and audit scenarios.

Max_message_size_kb Parameter Update
The sys.sp_create_event_group_stream stored procedure now includes an updated Max_message_size_kb parameter, allowing better control over event stream message sizes.

Automatic Plan Correction (APC) Behavioral Change
SQL Server now automatically detects plan regressions and applies FORCE_LAST_GOOD_PLAN to correct them. The regression detection model previously enabled by Trace Flag 12618 is now on by default, making automatic tuning more proactive and effective without manual intervention (a short T-SQL sketch of the related settings appears at the end of this post).

SQL Server enabled by Azure Arc - Overview
This release introduces native support for Azure Arc integration in SQL Server 2025 Preview. Azure Arc is a Microsoft service that allows you to manage on-premises, multi-cloud, and edge environments through the Azure control plane.

Consolidation of reporting services
Beginning with SQL Server 2025, Microsoft will integrate all on-premises reporting services into Power BI Report Server (PBIRS). There will be no further releases of SQL Server Reporting Services (SSRS); PBIRS will serve as the default on-premises reporting solution for SQL Server. For more information, see the Reporting Services consolidation FAQ.

Discontinued services
Purview access policies (DevOps policies and data owner policies) are discontinued in this version of SQL Server. As an alternative to the policy actions provided by Purview policies, use fixed server roles. Refer to our documentation for details on the specific server roles to use.
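As a quick reference for the automatic plan correction behavior described above, here is a minimal sketch of checking and controlling the setting on a database. The feature relies on the Query Store, and the statements below run against whichever database you execute them in:

-- Check the current automatic tuning state for the current database.
SELECT name, desired_state_desc, actual_state_desc, reason_desc
FROM sys.database_automatic_tuning_options
WHERE name = 'FORCE_LAST_GOOD_PLAN';

-- Explicitly enable (or set to OFF) FORCE_LAST_GOOD_PLAN for the current database.
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Review detected plan regressions and the actions recommended or taken.
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;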
Get started
SQL Server 2025 is a major upgrade that unites databases and AI across on-premises and cloud. It supports existing apps and T-SQL with minimal changes, enabling organizations to scale, integrate with modern data platforms, and unlock new insights, all while building on SQL Server's trusted foundation. Ready to try it out? Get started today: aka.ms/getsqlserver2025.

Learn more
- Microsoft Build 2025: SQL Server 2025: The Database Developer Reimagined
- Docs: aka.ms/Build/sql2025docs
- Announcement blog: aka.ms/sqlserver2025
- SQL Server 2025 deep dive
- SQL Server tech community blog
- SQL Server homepage: https://www.microsoft.com/en-us/sql-server
- MSSQL Extension for Visual Studio Code with GitHub Copilot: https://aka.ms/vscode-mssql-copilot

Enabling Azure Key Vault for SQL Server on Linux
Enhancing Security with EKM using Azure Key Vault in SQL Server on Linux:
We're excited to announce that Extensible Key Management (EKM) using Azure Key Vault in SQL Server on Linux is now generally available from SQL Server 2022 CU12 onwards. It allows you to manage encryption keys outside of SQL Server using Azure Key Vault. In this blog post, we'll explore how to leverage Azure Key Vault as an EKM provider for SQL Server on Linux.

Azure Key Vault: The Bridge to Enhanced Security
Azure Key Vault is a cloud-based service that securely stores keys, secrets, and certificates. By integrating Azure Key Vault with SQL Server, you can benefit from its scalability, high performance, and high availability. Refer to Set up Transparent Data Encryption (TDE) Extensible Key Management with Azure Key Vault - SQL Server | Microsoft Learn for more details.

Setting Up EKM with Azure Key Vault
Here's a streamlined version of the setup process for EKM with Azure Key Vault on SQL Server for Linux:
1. Initialize a Microsoft Entra service principal.
2. Establish an Azure Key Vault.
3. Set up SQL Server for EKM and register the SQL Server Connector.
4. Finalize the SQL Server configuration.

The full guide for setting up AKV with SQL Server on Linux is available here: Set up Transparent Data Encryption (TDE) Extensible Key Management with Azure Key Vault - SQL Server | Microsoft Learn. For SQL Server on Linux, omit steps 3 and 4 of that guide and proceed directly to step 5. I've included screenshots below for quick reference covering the SQL Server configuration to use AKV. Run the commands below to enable EKM in SQL Server and register the SQL Server Connector as the EKM provider.

Please note: SQL Server requires manual rotation of the TDE certificate or asymmetric key, as it doesn't rotate them automatically. Regular key rotation is essential for maintaining security and effective key management.
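The commands from the original screenshots aren't reproduced here. As a rough, non-authoritative sketch of the kind of T-SQL involved, with the provider name, library path, key vault name, login, and credential secret format all treated as placeholders (use the exact values and paths from the linked setup guide):

-- Enable the EKM provider option at the instance level.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'EKM provider enabled', 1;
RECONFIGURE;

-- Register the SQL Server Connector for Azure Key Vault as a cryptographic provider.
-- The library path below is a placeholder; use the location documented for your platform.
CREATE CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM
FROM FILE = '/opt/mssql/lib/ekm/azurekeyvault.so';

-- Create a credential for the key vault and map it to the login that will manage TDE keys.
CREATE CREDENTIAL Akv_Credential
    WITH IDENTITY = 'ContosoKeyVault',                            -- placeholder key vault name
         SECRET = '<client id without hyphens><client secret>'    -- per the linked guide's format
    FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM;

ALTER LOGIN [MyAdminLogin] ADD CREDENTIAL Akv_Credential;         -- placeholder login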
Conclusion
Using Azure Key Vault for EKM with SQL Server on Linux boosts security, streamlines key management, and supports compliance. With data protection being paramount, Azure Key Vault's integration offers a robust solution. Stay tuned for more insights on SQL Server on Linux!

Official Documentation:
- Extensible Key Management using Azure Key Vault - SQL Server
- Setup Steps for Extensible Key Management Using the Azure Key Vault
- Azure Key Vault Integration for SQL Server on Azure VMs

Upcoming changes for SQL Server Management Studio (SSMS) - Part 2
This is the second post in a series of three about SQL Server Management Studio and the upcoming changes to the SSMS 20 connection dialog. This post also announces the SSMS 20 Preview 1 build, which is available to download.

SQL Server on Azure VMs: I/O analysis (preview)
Analyzing I/O problems just got easier for SQL Server on Azure VMs.
It is not easy to understand what's going on when you run into an I/O-related performance problem on an Azure virtual machine. It is a common but complex problem. You need to figure out what's happening at both the host level and within your SQL Server instance, and correlating host metrics with SQL Server workloads can often be a challenge. We developed a new experience that helps you do exactly that.