Stream data in near real time from SQL to Azure Event Hubs - Public preview
If near-real-time integration is something you are looking to implement and you want a simpler way to get data out of SQL, keep reading. SQL is making it easier to integrate, and Change Event Streaming is a feature continuing this trend.

Modern applications and analytics platforms increasingly rely on event-driven architectures and real-time data pipelines. As businesses speed up, real-time decisioning becomes especially important. Traditionally, capturing changes from a relational database requires complex ETL jobs, periodic polling, or third-party tools. These approaches often consume significant cycles on the data source, introduce operational overhead, and pose scalability challenges, especially if you need one data source to feed multiple destinations.

In this context, we are happy to release the Change Event Streaming ("CES") feature into public preview for Azure SQL Database. This feature enables you to stream row-level changes - inserts, updates, and deletes - from your database directly to Azure Event Hubs in near real time.

Change Event Streaming addresses the above challenges by:

Reducing latency: Changes are streamed (pushed by SQL) as they happen. This is in contrast with traditional CDC (change data capture) or CT (change tracking) based approaches, where an external component needs to poll SQL at regular intervals. Traditional approaches let you increase polling frequency, but it is difficult to find a sweet spot between minimal latency and minimal overhead from overly frequent polls.

Simplifying architecture: No need for Change Data Capture (CDC), Change Tracking, custom polling, or external connectors - SQL streams directly to the configured destination. This means a simpler security profile (fewer authentication points), fewer failure points, easier monitoring, and a lower skill bar to deploy and run the service. There are no cleanup jobs to worry about: SQL keeps track of which changes are successfully received by the destination, handles the retry logic, and releases the log truncation point. Finally, with CES you have fewer components to procure and get approved for production use.

Decoupling: The integration is done at the database level. This eliminates the problem of dual writes - changes are streamed at transaction boundaries, once your source of truth (the database) has saved them. You do not need to modify your app workloads to get the data streamed - you tap right into the data layer. This is useful if your apps are dated and lack real-time integration capabilities; with some third-party apps you may not even have an option other than database-level integration, and CES makes that simpler. The publishing database also does not concern itself with the final destination for the data: stream the data once to the common message bus, and it can be consumed by multiple downstream systems, irrespective of their number or capacity - the number of consumers does not affect the publishing load on the SQL side. Serving consumers is handled by the message bus, Azure Event Hubs, which is purpose-built for high-throughput data transfers.

[Figure: conceptual data flow from SQL Server to Azure Event Hubs, with arrows from Event Hubs to a number of different final destinations.]

Key Scenarios for CES

Event-driven microservices: Microservices need to exchange data, typically through a common message bus. With CES, you get automated data publishing from each of the microservices.
This allows you to trigger business processes immediately when data changes.

Real-time analytics: Stream operational data into platforms like Fabric Real-Time Intelligence or Azure Stream Analytics for quick insights.

Breaking down monoliths: Typical monolithic systems with complex schemas sitting on top of a single database can be broken down one piece at a time: create a new component (typically a microservice), set up streaming from the relevant tables on the monolith database, and have the new component tap into the stream. You can then test-run the component, validate its results against the original monolith, and cut over once you are confident the new component is stable.

Cache and search index updates: Keep distributed caches and search indexes in sync without custom triggers.

Data lake ingestion: Capture changes continuously into storage for incremental processing.

Data availability: This is not a scenario per se, but the amount of data you can tap into for business process mining, or intelligence in general, goes up whenever you plug another database into the message bus. For example, you plug your eCommerce system into the message bus to integrate with shipping providers, and the same data stream is immediately available for any other system to tap into.

How It Works

CES uses transaction log-based capture to stream changes with minimal impact on your workload. Events are published in a structured JSON format following the CloudEvents standard, including operation type, primary key, and before/after values. You can configure CES to target Azure Event Hubs via the AMQP or Kafka protocols. For details on configuration, message format, and FAQs, see the official documentation: Feature Overview and CES: Frequently Asked Questions.

Get Started

Public preview: CES is available today in public preview for Azure SQL Database and as a preview feature in SQL Server 2025. [Update 20-Mar-2026] Change Event Streaming is now in public preview for Azure SQL Managed Instance. Read more here.

Private preview: CES is also available as a private preview for Azure SQL Managed Instance and Fabric SQL database. You can request to join the private preview by signing up here: https://aka.ms/sql-ces-signup

We encourage you to try the feature out and start building real-time integrations on top of your existing data. We welcome your feedback - please share your experience through the Azure Feedback portal or support channels. The comments below on this blog post are also monitored if you want to engage with us. Finally, the CES team can be reached via email: sqlcesfeedback [at] microsoft [dot] com.

Useful resources: Free Azure SQL Database. Free Azure SQL Managed Instance.

How does GitHub Copilot in SSMS 22 handle database context collection before generating a response?
Hello,

I am trying to better understand the internal workflow of GitHub Copilot in SSMS 22, especially for database-specific questions. From the product descriptions, it seems that Copilot can use the context of the currently connected database, such as schema, tables, columns, and possibly other metadata, when answering questions or generating T-SQL. However, I could not find clear official documentation about the actual sequence of operations.

My main questions are:

Before generating a response, does Copilot first collect database context/metadata from the active connection and then send that context to the LLM as grounding information? Or does it first use the LLM to interpret the user's request, decide what information is needed, and then retrieve database metadata before generating the final answer?

In some explanations, I have seen the phrase "Core SQL Copilot Infrastructure", but I cannot find any official documentation for that term. Is this an official component name? If so, what does it specifically refer to in the SSMS Copilot architecture?

When Copilot answers schema-related or data-related questions, what information is retrieved automatically from the connected database, and is any SQL executed as part of that process?

Is there any official architectural documentation that explains: context collection, prompt grounding, LLM invocation order, and whether query execution can occur before the final response is generated?

I am asking because I want to understand the feature from both an architecture and data governance/security perspective. Any clarification from the product team or documentation links would be greatly appreciated. Thank you.

Expanding Azure Arc SQL Migration with a New Target: SQL Server on Azure Virtual Machines
Modernizing a SQL Server estate is rarely a single-step effort. It typically involves multiple phases, from discovery and assessment to migration and optimization, often spanning on-premises, hybrid, and cloud environments. SQL Server enabled by Azure Arc simplifies this process by bringing all migration steps into a single, cohesive experience in the Azure portal. With the March 2026 release, this integrated experience is extended by adding SQL Server on Azure Virtual Machines as a new migration target in Azure Arc. Arc-enabled SQL Server instances can now be migrated not only to Azure SQL Managed Instance, but also to SQL Server running on Azure infrastructure, using the same unified workflow.

Expanding Choice Without Adding Complexity

By introducing SQL Server on Azure Virtual Machines as a migration target, Azure Arc now supports a broader range of migration strategies while preserving a single operational model. It becomes possible to choose between Azure SQL Managed Instance and SQL Server on Azure VMs without fragmenting migration tooling or processes. The result is a flexible, scalable, and consistent migration experience that supports hybrid environments, reduces operational overhead, and enables modernization at a controlled and predictable pace.

One Integrated Migration Journey

A core value of SQL Server migration in Azure Arc is that the entire migration lifecycle is managed from one place. Once a SQL Server instance is enabled by Azure Arc, readiness can be assessed, a migration target selected, a migration method chosen, progress monitored, and cutover completed directly in the Azure portal. This approach removes the need for disconnected tools or custom orchestration. The only prerequisite remains unchanged: the source SQL Server needs to be enabled by Azure Arc. From there, migration is fully integrated into the Azure Arc SQL experience.

A Consistent Experience Across Migration Targets

The migration experience for SQL Server on Azure Virtual Machines follows the same model already available for Azure SQL Managed Instance migrations in Azure Arc. The same guided workflow, migration dashboard, and monitoring capabilities are used regardless of the selected target. This consistency is intentional. It allows teams to choose the destination that best fits their technical, operational, or regulatory requirements without having to learn a new migration process. Whether migrating to a fully managed PaaS service or to SQL Server on Azure infrastructure, the experience remains predictable and familiar.

Backup Log Shipping Migration to SQL Server in an Azure VM

Migration to SQL Server on Azure Virtual Machines is based on backup and restore, specifically using a log shipping mechanism. This is a well-established approach for online migrations that minimizes downtime while maintaining control over the cutover window. In this model, database backups need to be uploaded from the source SQL Server to Azure Blob Storage. The migration engine restores the initial full backup followed by ongoing transaction log and differential backups. Azure Blob Storage acts as the intermediary staging location between the source and the target. The Azure Blob Storage account and the target SQL Server running on an Azure Virtual Machine must be co-located in the same Azure region. This regional alignment is required to ensure efficient data transfer, reliable restore operations, and predictable migration performance.
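The backups that feed this process are ordinary native backups written to the staging container. As a rough, hedged illustration only (the storage account, container, credential, and database names below are placeholders, and the migration experience may orchestrate or validate the uploads differently), the source-side commands could look like standard BACKUP TO URL statements:

-- Hypothetical staging container; a credential for it must exist first
-- (a SAS credential is shown here; other credential types may apply depending on setup).
CREATE CREDENTIAL [https://mystagingaccount.blob.core.windows.net/migration]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET   = '<SAS token>';

-- Initial full backup uploaded to the staging container.
BACKUP DATABASE [SalesDB]
    TO URL = 'https://mystagingaccount.blob.core.windows.net/migration/SalesDB_full.bak'
    WITH COMPRESSION, CHECKSUM;

-- Ongoing log backups; the migration engine detects and restores these on the
-- target SQL Server in the Azure VM until cutover is initiated.
BACKUP LOG [SalesDB]
    TO URL = 'https://mystagingaccount.blob.core.windows.net/migration/SalesDB_log_001.trn'
    WITH COMPRESSION, CHECKSUM;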
Within the Azure Arc migration experience, a simple and guided UX is used to select the Azure Blob Storage container that holds the backup files. Both the selected storage account and the Azure VM hosting SQL Server must reside in the same Azure region. Once the migration job is started, Azure Arc automatically restores the backup files to SQL Server on the Azure VM. As new log backups are uploaded to Blob Storage, they are continuously detected and applied to the target database, keeping it closely synchronized with the source.

Controlled Cutover on Your Terms

This automated restore process continues until the final cutover is initiated. When the cutover command is issued, Azure Arc applies the final backup to the target SQL Server on the Azure Virtual Machine and completes the migration. The target database is then brought online, and applications can be redirected to the new environment. This controlled cutover model allows downtime to be planned precisely, rather than being dictated by long-running restore operations.

Getting Started

To get started, Arc-enable your SQL Server. Then, in the Azure portal, navigate to your Arc-enabled SQL Server and select Database migration under the Migration menu on the left. For more information, see the SQL Server migration in Azure Arc documentation.

Unable to install SQL Server 2022 Express (installer glitch + SSMS error)
Hi,

I recently purchased a new Lenovo laptop, and I am trying to install Microsoft SQL Server 2022 Express along with SSMS. SSMS installed successfully, but SQL Server installation fails, and sometimes the installer UI glitches or does not load properly. Because of this, I am getting connection errors in SSMS like "server not found" and "error 40". I am not very familiar with technical troubleshooting. Can someone guide me step-by-step in a simple way to install SQL Server correctly? Thank you.

Zero Trust for data: Make Microsoft Entra authentication for SQL your policy baseline
A policy-driven path from enabled to enforced.

Why this matters now

Security and compliance programs were once built on the assumption that internal networks were inherently safer. Cloud adoption, remote work, and supply-chain compromise have steadily invalidated that model. U.S. federal guidance has now formalized this shift: Executive Order 14028 calls for modernizing cybersecurity and accelerating Zero Trust adoption, and OMB Memorandum M-22-09 sets a federal Zero Trust strategy with specific objectives and timelines.

Meanwhile, attacker economics are changing. Automation and AI make reconnaissance, phishing, and credential abuse cheaper and faster. That concentrates risk on identity - the control plane that sits in front of systems, applications, and data. In Zero Trust, the question is no longer "is the network trusted," but "is this request verified, governed by policy, and least-privilege?"

Why database authentication is a first-order Zero Trust control

Databases are universally treated as crown-jewel infrastructure. Yet many data estates still rely on legacy patterns: password-based SQL authentication, long-lived secrets embedded in apps, and shared administrative accounts that persist because migration feels risky. This is exactly the kind of implicit trust Zero Trust architectures aim to remove. NIST SP 800-207 defines Zero Trust as eliminating implicit trust based solely on network location or ownership and focusing controls on protecting resources. In that model, every new database connection is not "plumbing" - it is an access decision about sensitive data. If the authentication mechanism sits outside the enterprise identity plane, governance becomes fragmented and policy enforcement becomes inconsistent.

What changes when SQL uses Microsoft Entra authentication

Microsoft Entra authentication enables users and applications to connect to SQL using enterprise identities instead of usernames and passwords. Across Azure SQL and SQL Server enabled by Azure Arc, Entra-based authentication helps align database access with the same identity controls organizations use elsewhere.

The security and compliance outcomes that leaders care about:

Reduce password and secret risk: move away from static passwords and embedded credentials.

Centralize governance: bring database access under the same identity policies, access reviews, and lifecycle controls used across the enterprise.

Improve auditability: tie access to enterprise identities and create a consistent control surface for reporting.

Enable policy enforcement at scale: move from "configured" controls to "enforced" controls through governance and tooling.

This is why Entra authentication is a high-ROI modernization step: it collapses multiple security and operational objectives into one effort (identity modernization) rather than a set of ongoing compensating programs (password rotation programs, bespoke exceptions, and perpetual secret-hygiene projects).

Why AI makes this a high-priority decision

AI accelerates both reconnaissance and credential abuse, which concentrates risk on identity. As a result, policy makers increasingly treat phishing-resistant authentication and centralized identity enforcement as foundational - not optional.
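Before getting into the rollout path, it helps to see what Entra-based database principals look like in practice. The following is a minimal sketch for an Azure SQL database; the account and application names are placeholders, and exact statements and role assignments vary by product and by your own permission model:

-- Run while connected as an Entra administrator of the database.
-- A human user identified by their Entra account (no password stored in SQL):
CREATE USER [dba.alice@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [dba.alice@contoso.com];

-- An application identified by its managed identity or service principal display name,
-- so no secret is embedded in the connection string or the database:
CREATE USER [order-processing-app] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datawriter ADD MEMBER [order-processing-app];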
A practical path: from enabled to enforced

Successful security programs define a clear end state, a measurable glide path, and an enforcement model. A pragmatic approach to modernizing SQL access typically includes:

Discover active usage: Identify which logins and users are actively connecting and which are no longer required.

Establish Entra as the identity authority: Enable Entra authentication on SQL logical servers, starting in mixed mode to reduce disruption.

Recreate principals using Entra identities: Replace SQL authentication logins and users with Entra users, groups, service principals, and managed identities.

Modernize application connectivity: Update drivers and connection patterns to use Entra-based authentication and managed identities.

Validate, then enforce: Confirm the absence of password-based SQL authentication traffic, then move to Entra-only where available and enforce via policy.

By adopting this sequencing, organizations can mitigate risks at an early stage and postpone enforcement until validation is complete. For a comprehensive migration strategy, refer to Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide.

Choosing which projects to fund - and which ones to stop

When making investment decisions, priority should go to database identity projects that demonstrate clear risk reduction and lasting security benefits:

Microsoft Entra authentication as the default for new SQL workloads, with a defined migration path for existing workloads.

Managed identities for application-to-database connectivity, to eliminate stored secrets.

Centralized governance for privileged database access using enterprise identity controls.

At the same time, organizations should explicitly de-prioritize investments that perpetuate password risk: password rotation projects that preserve SQL authentication, bespoke scripts maintaining shared logins, and exception processes that do not scale.

Security and scale are not competing goals

Security is often seen as something that slows down innovation, but database identity offers unique benefits. When enterprise identity is used for access control, onboarding new applications and users shifts from handing out credentials to governing policy. Compliance reporting also becomes uniform rather than bespoke, making it easier to grow consistently on a single control framework. Modern database authentication is not solely about mitigating risk - it establishes a scalable operational framework for secure data access.

A scorecard designed for leadership readiness

To elevate the conversation from implementation to governance, use outcome-based metrics:

Coverage: Percentage of SQL workloads with Entra authentication enabled.

Enforcement: Percentage operating in Entra-only mode after validation.

Secret reduction: Applications still relying on stored database passwords.

Privilege hygiene: Admin access governed through enterprise identity controls.

Audit evidence: Ability to produce identity-backed access reports on demand.

These map directly to Zero Trust maturity expectations and provide a defensible definition of "done."

Closing

Zero Trust is an operating posture, not a single control. For most organizations, the fastest way to make that posture measurable is to standardize database access on the same identity plane used everywhere else. If you are looking for a single investment that improves security, reduces audit friction, and supports responsible AI adoption, modernizing SQL access with Microsoft Entra authentication - and driving it from enabled to enforced - is one of the most durable choices you can make.
References

US Government sets forth Zero Trust architecture strategy and requirements (Microsoft Security Blog)
Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide (Microsoft Tech Community)
OMB Memorandum M-22-09: Federal Zero Trust Strategy (White House)
NIST SP 800-207: Zero Trust Architecture
CISA: Zero Trust
Enforce Microsoft Entra-only authentication for Azure SQL Database and Azure SQL Managed Instance

Why Developers and DBAs love SQL's Dynamic Data Masking (Series - Part 1)
Dynamic Data Masking (DDM) is one of those SQL features (available in SQL Server, Azure SQL Database, Azure SQL Managed Instance, and SQL database in Microsoft Fabric) that both developers and DBAs can rally behind. Why? Because it delivers a simple, built-in way to protect sensitive data - like phone numbers, emails, or IDs - without rewriting application logic or duplicating security rules across layers. With just a single line of T-SQL, you can configure masking directly at the column level, ensuring that non-privileged users see only obfuscated values while privileged users retain full access. This not only streamlines development but also supports compliance with data privacy regulations such as GDPR and HIPAA by minimizing exposure to personally identifiable information (PII).

In this first post of our DDM series, we'll walk through a real-world scenario using the default masking function to show how easy it is to implement and how much development effort it can save.

Scenario: Hiding customer phone numbers from support queries

Imagine you have a support application where agents can look up customer profiles. They need to know whether a phone number exists for the customer but shouldn't see the actual digits, for privacy. In a traditional approach, a developer might implement custom logic in the app (or a SQL view) to replace phone numbers with placeholders like "XXXX" for non-privileged users. This adds complexity and duplicates logic across the app.

With DDM's default masking, the database can handle this automatically. By applying a mask to the phone number column, any query by a non-privileged user returns a generic masked value (e.g. "XXXX") instead of the real number. The support agent gets the information they need (that a number is on file) without seeing the actual phone number, and the developer writes zero masking code in the app. This not only simplifies the application codebase but also ensures consistent data protection across all query access paths. As Microsoft's documentation puts it, DDM lets you control how much sensitive data to reveal "with minimal effect on the application layer" - exactly what our scenario achieves.

Using the 'Default' Mask in T-SQL

The 'default' masking function is the simplest mask: it fully replaces the actual value with a fixed default based on data type. For text data, that default is XXXX. Let's apply this to our phone number example. The following T-SQL snippet works in Azure SQL Database, Azure SQL Managed Instance, and SQL Server:

-- Step 1: Create the table with a default mask on the Phone column
CREATE TABLE SupportCustomers (
    CustomerID INT PRIMARY KEY,
    Name NVARCHAR(100),
    Phone NVARCHAR(15) MASKED WITH (FUNCTION = 'default()') -- Apply default masking
);
GO

-- Step 2: Insert sample data
INSERT INTO SupportCustomers (CustomerID, Name, Phone)
VALUES (1, 'Alice Johnson', '222-555-1234');
GO

-- Step 3: Create a non-privileged user (no login for simplicity)
CREATE USER SupportAgent WITHOUT LOGIN;
GO

-- Step 4: Grant SELECT permission on the table to the user
GRANT SELECT ON SupportCustomers TO SupportAgent;
GO

-- Step 5: Execute a SELECT as the non-privileged user
EXECUTE AS USER = 'SupportAgent';
SELECT Name, Phone FROM SupportCustomers WHERE CustomerID = 1;

Alternatively, you can use the Azure portal to configure masking, as shown in the following screenshot:

[Screenshot: configuring Dynamic Data Masking on the Phone column in the Azure portal.]

Expected result: The query above returns Alice's name and a masked phone number. Instead of seeing 222-555-1234, the Phone column shows XXXX.
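If you later need a trusted principal to see the real values, the UNMASK permission controls that. A minimal sketch continuing the example above (database-level UNMASK is shown; finer-grained UNMASK scopes exist in newer versions):

-- Return to the original (privileged) security context.
REVERT;
GO

-- Allow the support agent to see unmasked data (database-wide UNMASK shown here).
GRANT UNMASK TO SupportAgent;
GO

EXECUTE AS USER = 'SupportAgent';
SELECT Name, Phone FROM SupportCustomers WHERE CustomerID = 1; -- now returns 222-555-1234
REVERT;
GO

-- Take the permission away again when it is no longer needed.
REVOKE UNMASK FROM SupportAgent;
GO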
Alice's actual number remains safely stored in the database, but it's dynamically obscured for the support agent's query. Meanwhile, privileged users - such as an administrator or db_owner, who have CONTROL permission on the database, or a user granted the UNMASK permission - see the real phone number when running the same query.

How this helps developers

By pushing the masking logic down to the database, developers and DBAs avoid writing repetitive masking code in every app or report that touches this data. In our scenario, without DDM you might implement a check in the application like: if user_role == "Support", then show "XXXX" for the phone number, else show the full phone number. With DDM, such conditional code isn't needed - the database takes care of it. This means:

Less application code to write and maintain for masking.

Consistent masking everywhere (whether data is accessed via an app, a report, or an ad-hoc query).

Quick changes to masking rules in one place if requirements change, without hunting through application code.

From a security standpoint, DDM reduces the risk of accidental data exposure and helps in compliance scenarios where personal data must be protected in lower environments or for certain roles, while drastically reducing developer effort. In the next posts of this series, we'll explore other masking functions (such as Email, Partial, and Random) with different scenarios. By the end, you'll see how each built-in mask can be applied to make data security and compliance more developer-friendly!

Reference Links:
Dynamic Data Masking - SQL Server | Microsoft Learn
Dynamic Data Masking - Azure SQL Database & Azure SQL Managed Instance & Azure Synapse Analytics | Microsoft Learn

Managed Identity on SQL Server On-Prem: The End of Stored Secrets
The Problem with Credentials in SQL Server

For an on-premises SQL Server to access Azure services, you traditionally need to store secrets.

Common Scenarios Requiring Credentials

Backup to URL (Azure Blob): storage account key or SAS token
Extensible Key Management (Azure Key Vault): service principal + secret
Calling Azure OpenAI from T-SQL: API key
PolyBase to Azure Data Lake: service principal or key

Associated Risks

Manual rotation: Secrets expire. You need to plan and execute rotation, and not forget to update all references.

Secure storage: Where do you store these secrets? In SQL Server via CREATE CREDENTIAL? In a config file? Each option has its risks.

Attack surface: A compromised secret gives access to the associated Azure resources. The more secrets you have, the larger the attack surface.

Complex auditing: Who has access to these secrets? When were they used? Tracking is difficult.

The Solution: Azure Arc + Managed Identity

SQL Server 2025 connected to Azure Arc can get a managed identity. This identity:

Is managed by Microsoft Entra ID.
Has no secret to store or rotate.
Can receive RBAC permissions on Azure resources.
Is centrally audited in Entra ID.

How It Works

SQL Server 2025 on-premises, with the Azure Arc agent installed on the server, gets a managed identity that is automatically created in Entra ID. RBAC assignments on Azure resources then give that identity secret-free access to Blob Storage, Key Vault, and other services.

Step-by-Step Configuration

Step 1: Enable Azure Arc on the Server and Register SQL Server in Azure Arc

Follow the procedure described in this article to onboard your server in Azure Arc: Connect Your SQL Server to Azure Arc. Remember that you can also evaluate Azure Arc on an Azure VM (test use only): How to evaluate Azure Arc-enabled servers with an Azure virtual machine.

Step 2: Retrieve the Managed Identity

The managed identity can be enabled and retrieved from Azure Arc | SQL Servers > "SQL Server instance" > Settings > Microsoft Entra ID. Note: the managed identity is server-wide (not at the instance level).

Step 3: Assign RBAC Roles

Granting access to a storage account for backups:

$sqlServerId = (az resource show --resource-group "MyRG" --name "ServerName" --resource-type "Microsoft.HybridCompute/machines" --query identity.principalId -o tsv)

az role assignment create --role "Storage Blob Data Contributor" `
    --assignee-object-id $sqlServerId `
    --scope "/subscriptions/xxx/resourceGroups/MyRG/providers/Microsoft.Storage/storageAccounts/mybackupaccount"

Example: Backup to URL Without a Stored Secret

Before (with SAS token):

-- Create a credential with a SAS token (expires, must be rotated)
CREATE CREDENTIAL [https://mybackup.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2022-11-02&ss=b&srt=sco&sp=rwdlacup...'
BACKUP DATABASE [MyDB]
TO URL = 'https://mybackup.blob.core.windows.net/backups/MyDB.bak'
WITH COMPRESSION

After (with Managed Identity):

-- No secret anymore
CREATE CREDENTIAL [https://mybackup.blob.core.windows.net/backups]
WITH IDENTITY = 'Managed Identity'

BACKUP DATABASE [MyDB]
TO URL = 'https://mybackup.blob.core.windows.net/backups/MyDB.bak'
WITH COMPRESSION

Extensible Key Management with Key Vault

EKM configuration with managed identity:

CREATE CREDENTIAL [MyAKV.vault.azure.net]
WITH IDENTITY = 'Managed Identity'
FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov;

How Copilot Can Help

Infrastructure configuration:
Walk me through setting up Azure Arc for SQL Server 2025 to use Managed Identity for backups to Azure Blob Storage
@mssql Generate the PowerShell commands to register my SQL Server with Azure Arc and configure RBAC for Key Vault access

Identify existing credentials to migrate:
List all credentials in my SQL Server that use SHARED ACCESS SIGNATURE or contain secrets, so I can plan migration to Managed Identity

Migration scripts:
I have backup jobs using SAS token credentials. Generate a migration script to convert them to use Managed Identity

Troubleshooting:
My backup WITH MANAGED_IDENTITY fails with "Authorization failed". What are the steps to diagnose RBAC permission issues?
@mssql The Azure Arc agent shows "Disconnected" status. How do I troubleshoot connectivity and re-register the server?

Audit and compliance:
Generate a report showing all Azure resources my SQL Server's Managed Identity has access to, with their RBAC role assignments

Prerequisites and Limitations

Prerequisites:
Azure Arc agent installed and connected
SQL Server 2025, running on Windows
Azure Extension for SQL Server

Current limitations:
Failover cluster instances are not supported.
Disabling the managed identity is not recommended.
Only system-assigned managed identities are supported.
The FIDO2 method is not currently supported.
Azure public cloud access is required.

Documentation

Managed identity overview
Set Up Managed Identity and Microsoft Entra Authentication for SQL Server Enabled by Azure Arc
Set up Transparent Data Encryption (TDE)
Extensible Key Management with Azure Key Vault
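Before migrating, it can also help to inventory which credentials on an instance still carry embedded secrets - the same inventory the Copilot prompt above asks for. A minimal sketch using the sys.credentials catalog view (the filter is just an example; review each credential's purpose before dropping or recreating it):

-- Server-scoped credentials that are not yet based on a managed identity.
SELECT name,
       credential_identity,
       create_date,
       modify_date
FROM sys.credentials
WHERE credential_identity <> 'Managed Identity'
ORDER BY modify_date DESC;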
Improving Azure SQL Database reliability with accelerated database recovery in tempdb

We are pleased to announce that in Azure SQL Database, accelerated database recovery is now enabled in the tempdb database to bring instant transaction rollback and aggressive log truncation for transactions in tempdb. The same improvement is coming to SQL Server and Azure SQL Managed Instance.

Azure Data Studio Retirement
We're announcing the upcoming retirement of Azure Data Studio (ADS) on February 6, 2025, as we focus on delivering a modern, streamlined SQL development experience. ADS will remain supported until February 28, 2026, giving developers ample time to transition. This decision aligns with our commitment to simplifying SQL development by consolidating efforts on Visual Studio Code (VS Code) with the MSSQL extension, a powerful and versatile tool designed for modern developers.

Why Retire Azure Data Studio?

Azure Data Studio has been an essential tool for SQL developers, but evolving developer needs and the rise of more versatile platforms like VS Code have made it the right time to transition. Here's why:

Focus on innovation: VS Code, widely adopted across the developer community, provides a robust platform for delivering advanced features like cutting-edge schema management and improved query execution.

Streamlined tools: Consolidating SQL development on VS Code eliminates duplication, reduces engineering maintenance overhead, and accelerates feature delivery, ensuring developers have access to the latest innovations.

Why Transition to Visual Studio Code?

VS Code is the #1 developer tool, trusted by millions worldwide. It is a modern, versatile platform that meets the evolving demands of SQL and application developers. By transitioning, you gain access to cutting-edge tools, seamless workflows, and an expansive ecosystem designed to enhance productivity and innovation. We're committed to meeting developers where they are, providing a modern SQL development experience within VS Code. Here's how:

Modern development environment: VS Code is a lightweight, extensible, and community-supported code editor trusted by millions of developers. It provides regular updates, an active extension marketplace, and a seamless cross-platform experience for Windows, macOS, and Linux.

Comprehensive SQL features: With the MSSQL extension in VS Code, you can:
Execute queries faster with filtering, sorting, and export options for JSON, Excel, and CSV.
Manage schemas visually with Table Designer, Object Explorer, and support for keys, indexes, and constraints.
Connect to SQL Server, Azure SQL (all offerings), and SQL database in Fabric using an improved Connection Dialog.
Streamline development with scripting, object modifications, and a unified SQL experience.
Optimize performance with an enhanced Query Results Pane and execution plans.
Integrate with DevOps and CI/CD pipelines using SQL Database Projects.
Stay tuned for upcoming features - we're continuously building new experiences based on feedback from the community. Make sure to follow the MSSQL repository on GitHub to stay updated and contribute to the project!

Streamlined workflow: VS Code supports cloud-native development, real-time collaboration, and thousands of extensions to enhance your workflows.

Transitioning to Visual Studio Code: What You Need to Know

We understand that transitioning tools can raise concerns, but moving from Azure Data Studio (ADS) to Visual Studio Code (VS Code) with the MSSQL extension is designed to be straightforward and hassle-free. Here's why you can feel confident about this transition:

No loss of functionality: If you use ADS to connect to Azure SQL databases, SQL Server, or SQL database in Fabric, you'll find that the MSSQL extension supports these scenarios seamlessly. Your database projects, queries, and scripts created in ADS are fully compatible with VS Code and can be opened without additional migration steps.
Familiar features, enhanced experience: VS Code provides advanced tools like improved query execution, modern schema management, and CI/CD integration. Additionally, alternative tools and extensions are available to replace ADS capabilities like SQL Server Agent and Schema Compare.

Cross-platform and extensible: Like ADS, VS Code runs on Windows, macOS, and Linux, ensuring a consistent experience across operating systems. Its extensibility allows you to adapt it to your workflow with thousands of extensions.

If you have further questions or need detailed guidance, visit the ADS Retirement page. The page includes step-by-step instructions, recommended alternatives, and additional resources.

Continued Support

With the Azure Data Studio retirement, we're committed to supporting you during this transition:

Documentation: Find detailed guides, tutorials, and FAQs on the ADS Retirement page.
Community support: Engage with the active Visual Studio Code community for tips and solutions. You can also explore forums like Stack Overflow.
GitHub issues: If you encounter any issues, submit a request or report bugs on the MSSQL extension's GitHub repository.
Microsoft Support: For critical issues, reach out to Microsoft Support directly through your account.

Transitioning to VS Code opens the door to a more modern and versatile SQL development experience. We encourage you to explore the new possibilities and start your journey today!

Conclusion

Azure Data Studio has served the SQL community well, but the Azure Data Studio retirement marks an opportunity to embrace the modern capabilities of Visual Studio Code. Transitioning now ensures you're equipped with cutting-edge tools and a future-ready platform to enhance your SQL development experience. For a detailed guide on ADS retirement, visit aka.ms/ads-retirement. To get started with the MSSQL extension, check out the official documentation. We're excited to see what you build with VS Code!

SQL Server has encountered I/O requests taking longer than 15 seconds
Most of us have seen these errors in the SQL Server error log pointing to an I/O subsystem problem:

2015/06/15 12:43:01 spid8s SQL Server has encountered 1016 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [N:\ABCDATA8\ABCDATA8.ndf] in database [ABC] (5). The OS file handle is 0x000000000000172C. The offset of the latest long I/O is: 0x0000073234

For a customer case this message had to be analyzed in more detail, because the storage team at the customer insisted on getting more information, as they did not see these high numbers on the storage system at that time. A deeper investigation revealed these findings:

The counter is collected and reported for each file separately.
The message is printed by the checkpoint, the background process that kicks in every 5 seconds.
It is printed at a minimum interval of 300 seconds (5 minutes) per file, meaning at most one message per 5 minutes per file.
It checks the number of ticks (milliseconds) each I/O request has been active, and if this exceeds 15,000 (15 seconds), the request is counted.
It does this for all parallel and asynchronous I/O requests (per file).

The catch is that these are not distinct I/O requests being counted, but the number of times an I/O request was seen as outstanding during the reporting time. Imagine you have a single very long I/O request in flight. The reporting kicks in after 5 seconds (driven by the checkpoint), and after 3 cycles (15 seconds) the I/O request is counted for the first time, but no message is printed yet because the five-minute minimum per file has not been reached. The very same I/O request is counted again at each reporting cycle (every 5 seconds) until the 300-second threshold is reached and the message is printed (and the counter is reset). Until then, this one I/O request has been counted 57 times (300 seconds / 5-second reporting interval = 60 occurrences, minus 3 "free" cycles for the first 15 seconds).

So 1,000 occurrences means that during the 60 reporting cycles of the 5-minute interval, I/O requests were observed 1,000 times taking more than 15 seconds. That might be only a handful of very long-running I/O requests (at minimum, 1000 / 57 = 17.5 requests, each outstanding for the entire window) or many shorter requests (for example, 1,000 requests of 15-19 seconds each). The message is misleading because it talks about occurrences rather than I/O requests, but it still points to a storage problem.

Note well, however: when you see even one such message, it means you had at least one event where an I/O took more than 15 SECONDS to complete. This is VERY SLOW. It indicates a problematic I/O event. There are not necessarily as many I/O requests hanging as stated in the message, but there are at least some.

I hope this sheds some light on it.
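As a practical follow-up, if you want to corroborate these messages from inside SQL Server when discussing them with a storage team, two well-known DMVs give a per-file and in-flight view of I/O latency. A minimal sketch; the join, columns shown, and ordering are just examples:

-- Cumulative per-file I/O statistics since startup (rough average latency per read/write).
SELECT DB_NAME(vfs.database_id)                              AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_latency_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_latency_ms DESC;

-- I/O requests that are pending at the moment the query runs.
SELECT io_handle, io_type, io_pending, io_pending_ms_ticks
FROM sys.dm_io_pending_io_requests
ORDER BY io_pending_ms_ticks DESC;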