Everything you need to know about TDE key management for database restore
Transparent data encryption (TDE) in Azure SQL with customer-managed key (CMK) supports Bring Your Own Key (BYOK) for data protection at rest and facilitates separation of duties in key and data management. With customer-managed TDE, the user manages the lifecycle of keys (creation, upload, rotation, deletion), usage permissions, and auditing of key operations. The key used for encrypting the Database Encryption Key (DEK), known as the TDE protector, is an asymmetric key managed by the customer and stored in Azure Key Vault (AKV).

Once a database is encrypted with TDE using a Key Vault key, new backups are encrypted with the same TDE protector. Changing the TDE protector does not update old backups to use the new protector. To restore a backup encrypted with a Key Vault TDE protector, ensure that the key material is accessible to the target server. The TDE feature was designed with the requirement that both the current and previous TDE protectors are necessary for successful restores. It is therefore recommended to retain all previous versions of the TDE protector in the key vault to enable the restore of database backups. This blog post explains in detail which keys must be available for a database restore, and why they are necessary.

Encryption of the transaction log file

To understand which keys are required for a point-in-time restore, it helps to first explain how transaction log encryption operates. The SQL Server Database Engine divides each physical log file into several virtual log files (VLFs), each with its own header. Encrypting the entire log file in one single sweep is not possible, so each VLF is encrypted individually and the encryptor information is stored in the VLF header. When the log manager needs to read a particular VLF for recovery, it uses the encryptor information in the VLF header to locate the encryptor and decrypt the VLF.
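That header-based lookup can be illustrated with a small, hypothetical model (plain Python for illustration only, not the actual log manager code): each VLF carries its encryptor thumbprint in the header, and decryption resolves the key through that thumbprint.

```python
# Hypothetical model: each VLF is encrypted on its own, and the encryptor
# thumbprint stored in the VLF header tells the log manager which key
# decrypts that VLF during recovery. Names here are illustrative.

key_store = {"DEK_1": "key-material-1", "DEK_2": "key-material-2"}

vlfs = [
    {"header": {"thumbprint": None}, "payload": "plain:tx1"},      # written before TDE
    {"header": {"thumbprint": "DEK_1"}, "payload": "cipher:tx2"},  # encrypted under DEK_1
    {"header": {"thumbprint": "DEK_2"}, "payload": "cipher:tx3"},  # after a key rotation
]

def encryptor_for(vlf):
    """Locate the key for a VLF from the thumbprint in its header."""
    thumb = vlf["header"]["thumbprint"]
    return None if thumb is None else key_store[thumb]

print([encryptor_for(v) for v in vlfs])  # [None, 'key-material-1', 'key-material-2']
```

This is why a restore can need more than one key: each VLF resolves its own encryptor independently.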
Unencrypted transaction log

Consider the following sequence of blocks as the logical log file, where each block represents a virtual log file (VLF). Initially, we are in VLF1, and the current Log Sequence Number (LSN) is within VLF1.

Transparent data encryption enabled

When TDE is enabled on the database, the current VLF is filled with non-operational log records, and a new VLF (VLF2) is created. Each VLF has one header containing the encryptor information, so whenever the encryptor information changes, the log rolls over to the next VLF boundary. The subsequent VLF will contain the new DEK (DEK_1) and the thumbprint of the encryptor of the DEK in the header. Any additions to the log file will be added to VLF2 and will be encrypted. When VLF2 reaches capacity, a new VLF (VLF3) will be generated. Since encryption is enabled, the new VLF will contain the DEK and its information in its header, and it will also be encrypted.

Key rotation

When a new DEK is generated or its encryptor changes, the log rolls over to the next VLF boundary. The new VLF (VLF4) will contain the updated DEK and encryptor information. For example, if a new DEK (DEK_2) is generated via key rotation in the Azure portal, VLF3 will fill with non-operational commands before VLF4 is created and encrypted by the new DEK.

A database can use multiple keys at a single time

Currently, for server- and database-level CMK, after a key rotation some of the active VLFs may still be encrypted with an old key. Because key rotations are allowed before these VLFs are flushed, the database can end up using multiple keys simultaneously. To ensure that at any point in time the database is using only the current encryption protector (primary generation p) and the old encryption protector (generation p-1), we use the following approach: block a protector change operation when any active VLF uses an old thumbprint different from the current encryption protector.
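That gating rule can be sketched as a tiny decision function. This is an illustrative model only; the names are hypothetical and this is not the service's actual code.

```python
def try_change_protector(active_vlf_thumbprints, current_protector, new_protector):
    """Sketch of the protector-change gate.

    Blocks the change while any active VLF still uses a thumbprint other
    than the current encryption protector; otherwise takes a log backup to
    flush inactive VLFs, then rotates. Returns the list of actions taken.
    """
    if any(t != current_protector for t in active_vlf_thumbprints):
        return ["blocked"]
    # No active VLF uses an old thumbprint: flush, then rotate.
    return ["log_backup", f"rotate:{new_protector}"]

# Active VLFs still use Thumbprint A while the protector is B -> blocked.
print(try_change_protector(["A", "B"], "B", "C"))  # ['blocked']
# All active VLFs use the current protector B -> flush, then rotate to C.
print(try_change_protector(["B", "B"], "B", "C"))  # ['log_backup', 'rotate:C']
```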
When a customer attempts a protector change, or the key is being auto-rotated, we verify whether any "active" VLFs are using the old thumbprint. If so, the protector change is blocked. If no active VLFs are using the old thumbprint, we take a log backup to flush the inactive VLFs, then rotate the protector and wait for it to fully rotate. This approach ensures that the database uses at most two keys at any given time.

Example

- Time t0: the database is created without encryption.
- Time t1: the database is protected by Thumbprint A.
- Time t2: the database protector is rotated to Thumbprint B.
- Time t3: the customer requests a protector change to Thumbprint C. We check the active VLFs; they are using Thumbprint A, so we block the change. This ensures that the database is currently using only Thumbprint A and Thumbprint B.
- Time t4: the customer requests a protector change to Thumbprint C. We (a) check the active VLFs and find that none of them are using Thumbprint A, (b) solicit a log backup, which flushes the inactive VLFs using Thumbprint A, and then (c) rotate the encryption protector, succeeding the operation only when both (b) and (c) are fully complete. This ensures that after time t4, the database is using only Thumbprint B and Thumbprint C.

Point-in-time restore

Based on the information above, it is evident that multiple keys are necessary for a restore if key rotations have taken place. To restore a backup encrypted with a TDE protector from Azure Key Vault, ensure that the key material is accessible to the target server. Therefore, we recommend keeping the old versions of the TDE protector in Azure Key Vault, so database backups can be restored. To identify missing keys, run the Get-AzSqlServerKeyVaultKey cmdlet for the target server, or Get-AzSqlInstanceKeyVaultKey for the target managed instance, to return the list of available keys. To ensure all backups can be restored, make sure the target server for the restore has access to all of the keys needed.
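Putting this together with the point-in-time restore rule detailed below (the protector active at the restore point, the one active eight days earlier, and the two before that), the required key set can be sketched as follows. This is a hypothetical helper for illustration only, not a shipped cmdlet or API.

```python
from datetime import date, timedelta

def required_thumbprints(rotations, restore_point):
    """Return the protector thumbprints needed for a point-in-time restore.

    `rotations` is a list of (effective_date, thumbprint) pairs in ascending
    order. Needed: the protector active at the restore point (p), the one
    active 8 days earlier (covering the weekly full backup), plus the two
    protectors preceding that, since active VLFs may still use an older key.
    """
    def active_index(day):
        idx = 0
        for i, (effective, _) in enumerate(rotations):
            if effective <= day:
                idx = i
        return idx

    hi = active_index(restore_point)                       # p
    lo = active_index(restore_point - timedelta(days=8))   # protector at tx-8
    start = max(lo - 2, 0)                                 # two earlier generations
    return [t for _, t in rotations[start:hi + 1]]

# Timeline matching the example in this post:
rotations = [
    (date(2025, 7, 1), "A"),
    (date(2025, 7, 20), "B"),
    (date(2025, 8, 5), "C"),
    (date(2025, 8, 15), "D"),
]
print(required_thumbprints(rotations, date(2025, 8, 20)))  # ['A', 'B', 'C', 'D']
```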
These keys don't need to be marked as TDE protector. Backed-up log files remain encrypted with the original TDE protector, even if it was rotated and the database is now using a new TDE protector. At restore time, both keys are needed to restore the database. If the log file is using a TDE protector stored in Azure Key Vault, this key is needed at restore time, even if the database has since been changed to use service-managed TDE.

Point-in-time restore example

When a customer wants to restore data to a specific point in time (tx), they need the current encryption protector (p) and the old encryption protector (p-1) from the period [tx-8 days] to [tx]. The reason for using tx-8 is that there is a full backup every 7 days, so we expect to have a complete backup within the last 8 days. Because VLFs may remain active with an earlier key, the system also uses the two latest thumbprints from before that window (p-2 and p-3). Consider the following timeline: the PITR request is made for 8/20/2025 (tx), at which point Thumbprint D (p) is active. To ensure we have a full backup, we subtract 8 days, bringing us to 8/12/2025 (tx-8), when Thumbprint C (p-1) is active. Since VLFs might still be active with the previous key, we also need Thumbprint B (p-2) and Thumbprint A (p-3). The required thumbprints for this point-in-time restore are A, B, C, and D.

Conclusion

To restore a backup encrypted with a Key Vault TDE protector, it is essential to ensure that the key material is accessible to the target server. It is recommended to retain all old versions of the TDE protector in the key vault to facilitate the restore of database backups.

Introducing the Azure SQL hub: A simpler, guided entry into Azure SQL
Choosing the right Azure SQL service can be challenging. To make this easier, we built the Azure SQL hub, a new home for everything related to Azure SQL in the Azure portal. Whether you're new to Azure SQL or an experienced user, the hub helps you find the right service quickly and decide with confidence, without disrupting your existing workflows.

- For existing users: your current workflows remain unchanged. The only visible update is a streamlined navigation pane where you access Azure SQL resources.
- For new users: start from the Azure SQL hub home page. Get personalized recommendations by answering a few quick questions or chatting with Azure portal Copilot. Or compare services side by side and explore key resources, all without leaving the portal.

You can open the hub from the portal navigation; searching for "azure sql" in the main search box or the marketplace is also an efficient way to get to the Azure SQL hub.

- Answer a few questions to get our recommendation, and use Copilot to refine your requirements.
- Get a detailed side-by-side comparison without leaving the hub.
- Still deciding? Explore a selection of Azure SQL services for free. This option takes you straight to the resource creation page with a pre-applied free offer.

Try the Azure SQL hub today in the Azure portal, and share your feedback in the comments!

Preparing for the Deprecation of TLS 1.0 and 1.1 in Azure Databases
Microsoft announced the retirement of TLS 1.0 and TLS 1.1 for Azure services, including Azure SQL Database, Azure SQL Managed Instance, Cosmos DB, and Azure Database for MySQL, by August 31, 2025. Customers are required to upgrade to a more secure minimum TLS protocol version of TLS 1.2 for client-server connections to safeguard data in transit and meet the latest security standards.

The retirement of TLS 1.0 and 1.1 for Azure databases was originally scheduled for August 2024. To support customers in completing their upgrades, the deadline was extended to August 31, 2025. Starting August 31, 2025, we will force-upgrade servers with a minimum TLS version of 1.0 or 1.1 to TLS 1.2; connections using TLS 1.0 or 1.1 will be disallowed, and connectivity will fail. To avoid potential service interruptions, we strongly recommend customers complete their migration to TLS 1.2 before August 31, 2025.

Why TLS 1.0 and 1.1 Are Being Deprecated

TLS (Transport Layer Security) protocols are vital in ensuring encrypted data transmission between clients and servers. However, TLS 1.0 and 1.1, introduced in 1999 and 2006 respectively, are now outdated and vulnerable to modern attack vectors. By retiring these versions, Microsoft is taking a proactive approach to enhance the security landscape for Azure services such as Azure databases.

Security Benefits of Upgrading to TLS 1.2

- Enhanced encryption algorithms: TLS 1.2 provides stronger cryptographic protocols, reducing the risk of exploitation.
- Protection against known vulnerabilities: deprecated versions are susceptible to attacks such as BEAST and POODLE, which TLS 1.2 addresses.
- Compliance with industry standards: many regulations, including GDPR, PCI DSS, and HIPAA, mandate the use of secure, modern TLS versions.
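On the client side, most drivers let you pin a protocol floor as well. As a generic illustration (using Python's standard ssl module, independent of any particular database driver), a client context can be configured to refuse TLS 1.0 and 1.1:

```python
import ssl

# Build a client-side TLS context that will not negotiate below TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Connections made through this context fail the handshake if the peer
# only offers TLS 1.0 or 1.1.
print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

Checking and fixing clients like this before the server-side enforcement date avoids surprise connection failures.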
How to Verify and Update TLS Settings for Azure Database Services

For instructions on how to verify that your Azure database is configured with a minimum TLS version of 1.2, or to upgrade the minimum TLS setting to 1.2, follow the respective guide below for your database service.

Azure SQL Database and Azure SQL Managed Instance

The Azure SQL Database and SQL Managed Instance minimum TLS version setting allows customers to choose which version of TLS their database uses.

Azure SQL Database

To identify clients that are connecting to your Azure SQL DB using TLS 1.0 and 1.1, SQL audit logs must be enabled. With auditing enabled, you can view client connections: Connectivity settings - Azure SQL Database and SQL database in Fabric | Microsoft Learn. To configure the minimum TLS version for your Azure SQL DB using the Azure portal, Azure PowerShell, or Azure CLI: Connectivity settings - Azure SQL Database and SQL database in Fabric | Microsoft Learn.

Azure SQL Managed Instance

To identify clients that are connecting to your Azure SQL MI using TLS 1.0 and 1.1, auditing must be enabled. With auditing enabled, you can consume audit logs using Azure Storage, Event Hubs, or Azure Monitor Logs to view client connections: Configure auditing - Azure SQL Managed Instance | Microsoft Learn. To configure the minimum TLS version for your Azure SQL MI using Azure PowerShell or Azure CLI: Configure minimal TLS version - managed instance - Azure SQL Managed Instance | Microsoft Learn.

Azure Cosmos Database

The minimum service-wide accepted TLS version for Azure Cosmos Database is TLS 1.2, but this selection can be changed on a per-account basis.
To verify the minimum TLS version via the minimalTlsVersion property on your Cosmos DB account: Self-serve minimum TLS version enforcement in Azure Cosmos DB - Azure Cosmos DB | Microsoft Learn. To configure the minimum TLS version for your Cosmos DB account using the Azure portal, Azure PowerShell, Azure CLI, or ARM template: Self-serve minimum TLS version enforcement in Azure Cosmos DB - Azure Cosmos DB | Microsoft Learn.

Azure Database for MySQL

Azure Database for MySQL supports encrypted connections using TLS 1.2 by default, and all incoming connections with TLS 1.0 and TLS 1.1 are denied by default, though users are allowed to change this setting. To verify the minimum TLS version configured via your Azure DB for MySQL server's tls_version server parameter using the MySQL command-line interface: Encrypted Connectivity Using TLS/SSL - Azure Database for MySQL - Flexible Server | Microsoft Learn. To configure the minimum TLS version for your Azure DB for MySQL server: Configure Server Parameters - Azure Portal - Azure Database for MySQL - Flexible Server | Microsoft Learn.

If your database is currently configured with a minimum TLS setting of TLS 1.2, no action is required.

Conclusion

The deprecation of TLS 1.0 and 1.1 marks a significant milestone in enhancing the security of Azure databases. By transitioning to TLS 1.2, users can ensure highly secure encrypted data transmission, compliance with industry standards, and robust protection against evolving cyber threats. Upgrade to TLS 1.2 now to prepare for this change and maintain secure, compliant database connectivity settings.

Replication lag metric for Azure SQL DB is now in public preview
Azure SQL Database offers business continuity capabilities to recover quickly from regional disasters. Features such as active geo-replication and failover groups provide continuous replication of the data in your primary database to a secondary database in a different Azure region. In the event of a regional disruption, these features allow you to perform quick disaster recovery to your secondary database to meet your business's recovery time objective (RTO) and recovery point objective (RPO). RTO for Azure SQL Database is typically less than 60 seconds, but RPO depends on the amount of data changes before the disruptive event that have not been replicated. Consequently, monitoring the replication lag between the primary and secondary databases is critical in ensuring your RPO goals are met.

Until now, the main way to measure the replication lag between the primary and secondary databases was with the replication_lag_sec column of the dynamic management view (DMV) sys.dm_geo_replication_link_status on your primary database. With the introduction of the Replication lag metric, you can now monitor lag with respect to RPO in near real time in the Azure portal, in addition to using the DMV. Replication lag is a new Azure Monitor metric that is emitted at a one-minute frequency and stored for up to 93 days. You can visualize the metric in Azure Monitor and set up alerts too.

The replication lag metric measures the time span in seconds from the point of transaction commit on the primary database to acknowledgement by the secondary database that the transaction log update has been persisted. The metric is applicable for databases in the DTU or vCore purchasing model and in all service tiers (Basic, Standard, Premium, General Purpose, Business Critical, and Hyperscale). Both singleton and elastic pool deployments are supported.
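Because the metric is emitted at a one-minute granularity, checking it against an RPO target is a simple threshold test. A minimal sketch of that alerting logic (illustrative only, not an Azure Monitor API):

```python
def rpo_breaches(lag_samples, rpo_sec):
    """Return the (timestamp, lag) samples whose replication lag, in
    seconds, exceeds the RPO target. Illustrative alerting logic."""
    return [(ts, lag) for ts, lag in lag_samples if lag > rpo_sec]

# One sample per minute, lag in seconds, against a 10-second RPO target.
samples = [("12:00", 2), ("12:01", 4), ("12:02", 15), ("12:03", 3)]
print(rpo_breaches(samples, rpo_sec=10))  # [('12:02', 15)]
```

In practice you would configure the same comparison as an Azure Monitor alert rule on the metric rather than polling it yourself.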
You can monitor the metric by adding Replication lag (preview) from your primary database in the portal. The metric provides three dimensions, Partner Database Name, Partner Server Name, and Partner Resource ID, that you can use to further filter or split the data to view specific replication links. If your database is configured to send metrics to Log Analytics under "Diagnostic settings", you can also query the Replication lag metric data there.

Next Steps

- Learn more about monitoring geo-replication and other commonly used metrics in Azure SQL Database.
- See how you can achieve your business continuity goals with Azure SQL Database using active geo-replication and failover groups.
- Prepare for disasters with the disaster recovery checklist.

Frequently Asked Questions

What is the Replication lag metric and what does it measure?
Replication lag is a new metric in Azure Monitor that measures the time span in seconds between transactions committed on the primary and hardened to the transaction log on the secondary.

How do you view the metric in the portal?
In the Azure portal, select your primary SQL database and under "Monitoring", select "Metrics". In the "Metrics" dropdown, choose "Replication lag (preview)".

What is the granularity of the Replication lag metric?
One minute.

What is the latency in displaying the Replication lag data in the Metrics screen?
Typically, the latency to display the replication lag is less than three minutes.

Are there dimensions available for the Replication lag metric?
Yes, there are three dimensions available for the metric: Partner Database Name, Partner Server Name, and Partner Resource ID. These dimensions can be used for filtering and splitting the view in the Metrics screen for easier comparison of multiple secondary geo-replicas.

Breaking Limits: Full-Length SQL Audit Statements Now Supported for Azure SQL DB
We're excited to announce a major enhancement to SQL auditing in Azure SQL Database: the removal of the 4,000-character truncation limit for audit records.

What's Changed?

Previously, audit records were limited to storing 4,000 characters for fields like statement and data_sensitivity_information. Any content beyond that threshold was truncated, potentially leaving out critical details from lengthy queries or sensitive data traces. With this update, statements are no longer truncated. Audit logs now capture the entire content of auditable actions, ensuring complete visibility and traceability for security and compliance teams.

Why It Matters

- Improved security and compliance: full-length statements provide better context for threat detection and forensic analysis.
- Enhanced customer experience: no more guessing what was left out; customers get the full picture.
- Feature parity with SQL Server and Managed Instance: this update brings Azure SQL Database in line with SQL Server and Azure SQL Managed Instance capabilities, ensuring consistency across environments.

Enhanced Server Audit for Azure SQL Database: Greater Performance, Availability and Reliability
We are excited to announce a significant update to the server audit feature for Azure SQL Database. We have re-architected major portions of SQL auditing, resulting in increased availability and reliability of server audits. As an added benefit, we have achieved closer feature alignment with SQL Server and Azure SQL Managed Instance. Database auditing remains unchanged. In the remainder of this blog article, we cover:

- Functional changes
- Changes affecting customers
- Sample queries
- Call for action
- Implementation and notification
- Time-based filtering

Functional Changes

In the current design, when server audit is enabled, it triggers a database-level audit and executes one audit session for each database. With the new architecture, enabling server audit creates one extended event session at the server level that captures audit events for all databases. This optimizes memory and CPU and is consistent with how auditing works in SQL Server and Azure SQL Managed Instance.

Changes Affecting Customers

- Folder structure change for storage accounts
- Folder structure change for read-only replicas
- Permissions required to view audit logs

One of the primary changes involves the folder structure for audit logs stored in storage account containers. Previously, server audit logs were written to separate folders, one for each database, with the database name serving as the folder name. With the new update, all server audit logs are consolidated into a single folder, the 'master' folder. This behavior is the same as Azure SQL Managed Instance and SQL Server. For read-only database replicas, which previously had their logs stored in a read-only folder, those logs will now also be written into the master folder. You can retrieve these logs by filtering on the new column 'is_secondary_replica_true'.
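The folder change itself is mechanical. A hypothetical helper (the names mirror the sample URLs shown in the queries below) contrasts the old per-database layout with the new consolidated layout:

```python
def audit_folder(base, server, audit_name, day, database=None):
    """Build the storage path prefix for server audit logs (illustrative).

    Old layout: one folder per database (pass `database`).
    New layout: a single 'master' folder for all databases.
    """
    folder = database if database is not None else "master"
    return f"{base}/{server}/{folder}/{audit_name}/{day}/"

base = "https://testaudit.blob.core.windows.net/sqldbauditlogs"
old = audit_folder(base, "auditpoc", "SqlDbAuditing_ServerAudit_NoRetention",
                   "2023-01-29", database="test")
new = audit_folder(base, "auditpoc", "SqlDbAuditing_ServerAudit_NoRetention",
                   "2023-01-29")
print(old)  # .../auditpoc/test/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/
print(new)  # .../auditpoc/master/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/
```

Under the new layout, the database is no longer part of the path, so per-database selection moves into the query's WHERE clause, as the samples below show.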
Please note that audit logs generated after deployment will adhere to the new folder structure, while existing audit logs will stay in their current folders until their retention periods expire.

Sample Queries

To help you adopt these changes in your workflows, here are some sample queries.

To query audit logs for a specific database called "test":

Current:
SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/test/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel', default, default)

New:
SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/master/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel', default, default) WHERE database_name = 'test';

To query audit logs for the test database from a read-only replica:

Current:
SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/test/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/RO/07_06_40_590_0.xel', default, default)

New:
SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/master/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel', default, default) WHERE is_secondary_replica_true = 'true';

Permissions

Current: CONTROL DATABASE permission on the user database.
New: VIEW DATABASE SECURITY AUDIT permission in user databases.

Implementation and Notifications

We are rolling out this change region by region. Subscription owners will receive notifications with the subject "Update your scripts to point to a new folder for server level audit logs" for each region as the update is implemented. It is important to update any scripts that refer to the folder structure to retrieve audit logs based on the database name for the specific region. Note that this change applies only to server-level auditing; database auditing remains unchanged.

Call for Action

These actions apply only to customers who are using storage account targets.
No action is needed for customers using Log Analytics or Event Hubs.

- Folder references: change the reference for audit logs from the database-name folder to the master folder and use specific filters to retrieve logs for a required database.
- Read-only database replicas: update references for audit logs from the read-only replica folder to the master folder and filter using the new parameter as shown in the examples.
- Permissions: ensure you have the necessary control server permissions to review the audit logs for each database using fn_get_audit_file.
- Manual queries: this update also applies to manual queries where you use fn_get_audit_file to retrieve audit logs from the storage account.

Time-based Filtering

To enhance your ability to query audit logs using filters, consider using efficient time-based filtering with the fn_get_audit_file_v2 function. This function allows you to retrieve audit log data with improved filtering capabilities. For more details, refer to the official documentation.

MSSQL Extension for VS Code: Schema Compare, Schema Designer, Local SQL Server Container GA
The MSSQL extension for VS Code continues to evolve, delivering features that make SQL development more visual, more consistent, and more developer-friendly. In version v1.35.0, we're announcing the general availability (GA) of Schema Designer, Schema Compare, and Local SQL Server Containers, three powerful tools that bring structure, clarity, and flexibility to your local development workflow.

What's new in MSSQL extension for VS Code v1.35

This release introduces three major capabilities designed to streamline the SQL development experience:

- Schema Compare (general availability): compare schemas across databases or database projects and apply changes with confidence. The intuitive diff view helps you track differences and update objects with just a few clicks.
- Schema Designer (general availability): visually design and manage your database schema. Create or edit tables, relationships, and constraints through an interactive UI, perfect for code-first or hybrid development workflows.
- Local SQL Server Container (general availability): provision local SQL Server containers directly from the extension. Run SQL Server 2025 with AI-ready features, configure ports and versions, and manage containers without touching Docker's CLI or desktop app.

In addition to these major features, this release includes multiple quality and performance improvements:

- Fixed an issue where Microsoft Entra ID sign-in in the Connection Dialog could result in empty account or tenant dropdowns
- Improved performance and usability of the query results grid, including fixes for export and display issues
- Improved localization in Object Explorer and other UI elements
- Fixed multiple accessibility issues affecting error messages and visual feedback
- Addressed edge-case errors in GitHub Copilot Agent Mode when switching connections

Schema Compare (GA)

Schema Compare was announced in public preview in April this year as a powerful new addition to streamline your database development workflow.
With Schema Compare, you can effortlessly compare database schemas, pinpoint differences, and apply updates seamlessly between databases or files. Today, we're excited to announce that Schema Compare is now generally available (GA). This milestone delivers significant usability improvements driven directly by community feedback, making the feature more intuitive and user-friendly.

Bug fixes and refinements:
- Schema Compare file save options
- Switch direction enabled without a populated comparison
- Show profile names
- Remember the previous type selection
- Enhanced option settings now include include/exclude functionality

Visual and usability enhancements:
- Enhanced UI for listing names in the object view
- Dropdown option styling for lighter themes

Here's Schema Compare in action, showing how easy and intuitive it is to discover and apply changes to your schemas.

Learn more:
🎥 Watch the demo - aka.ms/vscode-mssql-schema-compare-demo
📝 Microsoft Learn doc - aka.ms/vscode-mssql-docs/schema-compare

Schema Designer (GA)

Schema Designer was announced in public preview in June this year, introducing an interactive database diagramming experience in VS Code. With it, you can view, design, and modify database schemas visually, without writing T-SQL code. Today, we're excited to share that Schema Designer is now generally available (GA). This milestone reflects months of community feedback, with a strong focus on improving both reliability and usability.

Fast loading and performance improvements.

Visual and usability enhancements:
- Collapse/expand button for tables with many columns
- Foreign-key icons to clearly distinguish them from other columns
- Relationship visibility after filtering, so you can still see underlying schema connections when focusing on a subset
- Tooltips on truncated table/column names to reveal the full name on hover
Bug fixes and refinements: restored relationship-line visibility, support for self-referencing foreign keys, enhanced auto-arrange behavior for diagrams, and clearer export options.

Here's how easy it is to view and design your schema, with tables connected by lines to represent table relationships.

Learn more:
🎥 Watch the demo - aka.ms/vscode-mssql-schema-designer-demo
📝 Microsoft Learn doc - aka.ms/vscode-mssql-docs/schema-designer

Local SQL Server Container (GA)

You can now create and manage SQL Server containers locally, without writing a single Docker command. The Local SQL Server Container experience in the MSSQL extension is now generally available, making it easier than ever to spin up a fully configured SQL Server instance for development, testing, and prototyping. By default, the container wizard uses SQL Server 2025 (public preview), which includes native support for vector data types, enhanced JSON functions, and other AI-ready features, making it ideal for building modern, intelligent applications locally.

Key highlights:
- Auto-connect: a connection profile is automatically created and ready to use
- Lifecycle controls: start, stop, restart, or delete containers from the connection panel
- Docker environment checks: get notified if Docker isn't running or installed
- Port conflict detection: if port 1433 (the default SQL Server port) is already in use, the extension will automatically find and assign the next available port for your container
- Custom settings: define container name, hostname, and port via the UI
- Other versions supported: you can also choose to run a SQL Server 2022, 2019, or 2017 container
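Port-conflict detection of this kind typically probes for the first bindable port. A minimal sketch of the idea (plain Python sockets, not the extension's actual implementation):

```python
import socket

def next_available_port(start=1433, limit=100):
    """Find the first TCP port at or above `start` that can be bound
    locally. Illustrative sketch of port-conflict handling."""
    for port in range(start, start + limit):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port  # free: the container can listen here
            except OSError:
                continue     # in use: try the next port
    raise RuntimeError("no free port found in range")

port = next_available_port()
print(port)  # 1433 if free, otherwise the next available port
```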
This release also brings small but impactful usability improvements, including:

- Progress indicator during image download: users now see clear visual feedback when the SQL Server container image is being pulled, reducing confusion during first-time setup or slow network conditions
- Step-by-step Docker checks: the "Getting Docker Ready" section now shows each prerequisite check (e.g., Docker installed, Docker running) in a sequential and expanded view, with live status indicators for better transparency
- Containers remember the last-used version, streamlining repeated testing workflows
- Auto-scrolling logs keep container progress visible as it happens
- Improved UX around port-conflict handling to make setup more predictable and reliable

Other updates

Beyond the headline features, version 1.35.0 delivers a set of updates that improve day-to-day development workflows:

- Improved Microsoft Entra ID sign-in experience: fixed an issue that could cause blank account or tenant dropdowns in the Connection Dialog
- More responsive query results: enhanced performance and fixed bugs in the query grid, including export and display issues
- New Text View mode: you can now view query results as plain text for quick scanning or copy-pasting
- Session-based SQL auth: the extension now remembers SQL Authentication passwords during your current VS Code session (until restart)
- Improved localization: UI elements such as Object Explorer now better respect VS Code language settings
- GitHub Copilot stability: addressed edge-case errors that could occur when switching database connections during a chat session

Conclusion

The v1.35 release marks a major milestone with the general availability of Schema Designer, Schema Compare, and the Local SQL Server Container, making the MSSQL extension more complete, visual, and local than ever.
Combined with quality-of-life improvements across performance, accessibility, and query results, this update continues to simplify the way developers build and manage SQL Server databases inside Visual Studio Code. If there's something you'd love to see in a future update, here's how you can contribute:

💬 GitHub discussions - share your ideas and suggestions to improve the extension
✨ New feature requests - request missing capabilities and help shape future updates
🐞 Report bugs - help us track down and fix issues to make the extension more reliable

Want to see these features in action?
- Schema Designer demo
- Schema Compare demo
- Local SQL Server Container demo
- Full playlist of demos

Thanks for being part of the journey. Happy coding! 🚀

Azure SQL Firewall / Locks
Hi there, I have 2 environments and am mostly an admin on the Azure environment (recently made a subscription admin). Dev issue - Azure SQL: I'm having difficulty removing an IP from the Azure SQL firewall (earlier I was able to). Today my manager granted me subscription admin and SQL Security Manager roles, and I'm still not able to remove the grayed-out IPs.