SQL Server on Azure VMs
Introducing the Azure SQL hub: A simpler, guided entry into Azure SQL
Choosing the right Azure SQL service can be challenging. To make this easier, we built the Azure SQL hub, a new home for everything related to Azure SQL in the Azure portal. Whether you're new to Azure SQL or an experienced user, the hub helps you find the right service quickly and decide with confidence, without disrupting your existing workflows.

For existing users: Your current workflows remain unchanged. The only visible update is a streamlined navigation pane where you access Azure SQL resources.

For new users: Start from the Azure SQL hub home page. Get personalized recommendations by answering a few quick questions or chatting with Azure portal Copilot. Or compare services side by side and explore key resources, all without leaving the portal.

Searching for "azure sql" in the main search box or in the marketplace is an efficient way to get to the Azure SQL hub. Answer a few questions to get our recommendation and use Copilot to refine your requirements. Get a detailed side-by-side comparison without leaving the hub. Still deciding? Explore a selection of Azure SQL services for free; this option takes you straight to the resource creation page with a pre-applied free offer.

Try the Azure SQL hub today in the Azure portal, and share your feedback in the comments!

Announcing General Availability of UNISTR function and ANSI SQL || Operator in Azure SQL
We're excited to announce the General Availability (GA) of two long-standing capabilities that address critical needs for SQL developers and enterprise customers in Azure SQL Database and Azure SQL Managed Instance configured with the Always-up-to-date update policy:

UNISTR function for Unicode character representation
ANSI SQL string concatenation operator (||) for standard-compliant string operations

These additions improve ANSI SQL compliance in Azure SQL, streamline migration from other platforms, and boost developer productivity.

UNISTR function: Unicode made easy

The UNISTR function provides support for Unicode string literals by letting you specify the Unicode encoding value of characters in the string, making it especially useful for working with international or special characters. If you want to include characters like emojis, accented letters, or symbols from non-Latin scripts, UNISTR lets you encode them directly using their Unicode values.

It returns Unicode characters based on the input expression, following the Unicode standard.
It supports escape sequences: \xxxx for UTF-16 codepoints and \+xxxxxx for full Unicode codepoints.
It enables easy integration of complex Unicode characters into strings.

You can look up Unicode codepoint values in the Unicode Code Charts.

Comparison with NCHAR
NCHAR converts a single Unicode value to a character. UNISTR handles multiple Unicode values and escape sequences, offering greater flexibility for constructing strings with diverse characters.

Syntax

UNISTR ( 'character_expression' [ , 'unicode_escape_character' ] )

The data type of character_expression can be char, nchar, varchar, or nvarchar. For char and varchar data types, the collation must be a valid UTF-8 collation. The optional unicode_escape_character is a single character representing a user-defined Unicode escape character; if not supplied, the default value is \.

Examples

Example #1: In this example, a Unicode value is used with a text literal:

SELECT UNISTR (N'I \2764 Azure SQL.') AS string_with_unicode

Results: I ❤️ Azure SQL.

Example #2: In this example, we construct a non-Latin script string using Unicode escape sequences.

SELECT UNISTR(N'\3053\3093\306B\3061\306F') AS Japanese_Greeting;

Results: こんにちは

ANSI SQL || Operator: A cleaner way to concatenate

The || operator is now supported for string concatenation, offering a more readable and standard-compliant alternative to existing methods like + and CONCAT(). The || operator concatenates two or more character or binary strings, columns, or a combination of strings and column names into one expression.

The || operator does not honor the SET CONCAT_NULL_YIELDS_NULL option and always behaves as if the ANSI SQL behavior is enabled, yielding NULL if any of the inputs is NULL. The operator works with character strings or binary data of any supported SQL Server collation. It also supports the compound assignment form ||=, similar to +=. If the operands are of incompatible collation, an error is thrown. The collation behavior is identical to that of the CONCAT function for character string data.

Syntax

expression || expression

Each expression is a character or binary expression. Both expressions must be of the same data type, or one expression must be implicitly convertible to the data type of the other. If only one operand is of a binary type, an unsupported operand type error is thrown.

Examples

Example #1: The following query concatenates two strings and returns the result:

SELECT 'Hello ' || ' World!';

Results: Hello World!
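Related to these examples, the compound assignment form ||= mentioned above is documented to work like += for strings. Here is a minimal sketch with an illustrative variable name, assuming ||= follows the same SET syntax as the other compound assignment operators:

DECLARE @greeting NVARCHAR(100) = N'Hello';
SET @greeting ||= N', World!';   -- equivalent to SET @greeting = @greeting || N', World!'
SELECT @greeting AS result;      -- expected: Hello, World!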
Example #2: In this example, multiple character strings are concatenated. If at least one input is a character string, non-character strings are implicitly converted to character strings.

SELECT
-- Combine first and last name
'Josè' || ' Doe' AS full_name,
-- Construct a detailed order string with ID, timestamp, and unique identifier
'Order-' || CAST(1001 AS VARCHAR) || '~TS~' || current_timestamp || '~' || NEWID() AS order_details,
-- Handle NULL safely in concatenation
'Item-' || NULL AS item_desc;

Results:

full_name | order_details | item_desc
Josè Doe | Order-1001~TS~Jul 29 2025 8:33PM~0387F607-1664-4EEB-A4E8-F16B396DDD73 | NULL

Example #3: In this example, the use of the || operator is unsupported when used with only non-character types or when combining binary data with other types.

-- Attempting to concatenate two numeric literals: 1 and 2
SELECT 1 || 2;
-- Attempting to concatenate integer, string, and binary literals
SELECT 1 || 'a' || 0x4e;

The queries above fail with error messages like the following:

The data types int and int are incompatible in the concat operator.
The data types varchar and varbinary are incompatible in the concat operator.

Comparison with + and CONCAT()

Let's explore how || differs from + and CONCAT() in Azure SQL using real examples.

Sample queries:

-- Binary + String
SELECT CONCAT(0x01, 'a'); -- Returns: a
SELECT 0x01 || 'a';       -- Returns: Error
SELECT 0x01 + 'a';        -- Returns: Error

-- Binary + Binary
SELECT CONCAT(0x01, 0xff); -- Returns: ÿ
SELECT 0x01 || 0xff;       -- Returns: 0x01FF
SELECT 0x01 + 0xff;        -- Returns: 0x01FF

-- NULL Handling
SELECT CONCAT(NULL, 'a'); -- Returns: 'a'
SELECT NULL || 'a';       -- Returns: NULL
SELECT NULL + 'a';        -- Returns: NULL

-- Numeric + String
SELECT CONCAT(1, 'a'); -- Returns: '1a'
SELECT 1 || 'a';       -- Returns: '1a'
SELECT 1 + 'a';        -- Returns: Error

-- Date + String
SELECT CONCAT(CURRENT_TIMESTAMP, 'a'); -- Returns: Jul 29 2025 9:29PMa
SELECT CURRENT_TIMESTAMP || 'a';       -- Returns: Jul 29 2025 9:29PMa
SELECT CURRENT_TIMESTAMP + 'a';        -- Returns: Error

Key Differences

Feature | + Operator | CONCAT() Function | || Operator
ANSI SQL Standard | No | Yes | Yes
NULL Handling | Returns NULL | Ignores NULL / empty string | Returns NULL
Type Safety | Not type-safe | Type-safe | Type-safe
Readability | Moderate | Verbose | High
Portability | Low | Moderate | High

Why this matters

Easier Migration – Migrating from other databases becomes smoother with support for UNISTR and ||, reducing the need for syntax rewrites.
Global Reach – UNISTR simplifies Unicode handling, making it easier to build apps for international markets.
Cleaner Code – The || operator improves readability and aligns with modern SQL practices.

Learn more

UNISTR (Transact-SQL) – SQL Server | Microsoft Learn
|| (String Concatenation) (Transact-SQL) – SQL Server | Microsoft Learn
||= (Compound assignment) (Transact-SQL) – SQL Server | Microsoft Learn
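One detail the examples above do not show is UNISTR's optional second argument, which lets you pick your own escape character. A minimal sketch with illustrative values, assuming the documented \xxxx escape form simply switches to the custom character:

-- Use '$' instead of the default '\' as the Unicode escape character (illustrative values)
SELECT UNISTR(N'Copyright $00A9 $0032$0030$0032$0035', '$') AS custom_escape_example;
-- Expected result: Copyright © 2025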
Conclusion

In this blog post, we announced the General Availability (GA) of the UNISTR function and the ANSI SQL string concatenation operator (||) in Azure SQL. The UNISTR function allows you to escape Unicode characters, making it easier to work with international text. The ANSI SQL || operator provides a simple and intuitive way to combine characters or binary data. These enhancements reflect our commitment to making Azure SQL the most developer-friendly, standards-compliant, cloud-first database platform. Whether you're modernizing legacy systems, building multilingual apps, or writing cleaner SQL, these features are designed to make your journey smoother.

We hope you will explore these enhancements, apply them in your projects, and share your feedback with us to help us continue improving. Thank you!

Immutability: A Powerful Shield Against Ransomware in SQL Environments
In today's digital landscape, ransomware attacks are among the most disruptive and costly threats organizations face. These attacks often target critical data, encrypting or deleting it to extort payment. One of the most effective strategies to mitigate this risk is immutability.

🧱 What Is Immutability?

Immutability means that once data is written, it cannot be altered or deleted for a defined period. This concept is widely used in software engineering (e.g., immutable objects in programming) but has become a cornerstone in data protection and cybersecurity. In storage and backup systems, immutability ensures that even if an attacker gains access, they cannot tamper with or erase the protected data.

🛡️ How Immutability Protects Against Ransomware

Ransomware typically works by:
Gaining access to systems.
Encrypting or deleting data.
Demanding ransom for restoration.

With immutable backups or storage:
Encrypted or deleted data can be restored from a clean, untouchable copy.
Attackers cannot modify or delete immutable data, even with elevated privileges.
Organizations can recover quickly without paying the ransom.

Immutability with Azure storage

Azure storage supports immutability for blob storage and is compliant with industry requirements in this space.

🗃️ Using Azure immutable storage for SQL Server backups

SQL Server backups can be written to Azure storage using the BACKUP TO URL T-SQL command. However, writing backups to immutable storage is not supported in SQL Server 2022 and prior versions. This is because of the way SQL Server writes the backup to the .bak file: it first creates the .bak file filled with zeros and then updates it with the data. By definition of immutability, a file once written cannot be edited or deleted, so the two-step backup process cannot write to Azure immutable storage. An attempt to write a backup to immutable storage with SQL Server 2022 fails with an error.

With SQL Server 2025 CTP 2.1 Preview, instead of the two steps to create and then write the .bak file, everything is done in one step. This change only applies when writing to Azure Blob Storage, specifically when backing up to block blobs, and does not affect other targets such as disk or S3-compatible object storage.

Immutability in action

The following is an end-to-end scenario of configuring immutability for an Azure storage container and writing a native SQL backup file to it using SQL Server 2025 CTP 2.1 Preview.

Configure immutability for the Azure storage container:
Navigate to your storage account and then to the desired container under Data storage.
Select the More button on the right for the desired container and select "Access policy".
Click on Add policy.
Select Time-based retention as the Policy type.
Enter a number for the retention. It is advised to choose a very small number, such as 1-2 days, for testing purposes.
Leave "Enable version-level immutability" unchecked.
For testing purposes, we highly recommend not locking the immutable policy, as that will prevent you from making any modifications.
Leave the default of "Allow protected append writes to" set to None.
Select Save.
After saving, go back to More > Access policy on the storage container to verify immutability. You should see a container-scoped, time-based immutable policy with the configured retention period.

Retrieve the SAS token: from the storage container, select More, and then select "Generate SAS" with the relevant permissions specified.
From the generated SAS token, copy the token value. Then switch to SQL Server Management Studio and connect to the SQL Server 2025 CTP 2.1 Preview instance. In a New Query window, create the credential as follows:

IF NOT EXISTS (SELECT * FROM sys.credentials WHERE name = 'https://dnstorage1.blob.core.windows.net/immcontainer')
CREATE CREDENTIAL [https://dnstorage1.blob.core.windows.net/immcontainer]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<token>';

Enable the trace flag to switch to the single-write workflow:

DBCC TRACEON(3012,-1)

Perform a database backup as follows:

BACKUP DATABASE WideWorldImporters
TO URL = 'https://dnstorage1.blob.core.windows.net/immcontainer/WideWorldImporters.bak';
GO

The backup should succeed. Some details to note:

Formerly, one could see a 0-byte blob when the backup process started. In the new workflow, the blob is not visible under your storage account in the Azure portal until the backup has completed successfully.
If the backup fails midway, you may not see any blob in the container in the Azure portal, but it may exist in an uncommitted state. If you retry without specifying the FORMAT option, the backup statement may fail, indicating that a blob already exists at the specified path and that you should use the WITH FORMAT option. If you retry using the WITH FORMAT option, the backup workflow will be able to proceed.
Once a backup has completed successfully, the FORMAT option has no effect for as long as the blob is protected by the immutable policy, because nobody can delete or overwrite it.
The example used a time-based retention policy, but the same result can be achieved using a legal hold. Azure immutable storage has more details on time-based versus legal hold immutability.
Alternatively, you can follow the instructions at Code examples to create a Shared Access Signature, create a credential, and perform a full database backup.

Verifying immutability

Once the backup is created, you can verify its immutability by attempting to modify or delete it.

Modifying the file by overwriting it: I attempted to re-run the same command to overwrite the backup file, and the attempt fails with an error. Adding WITH FORMAT does not help either; the attempt is still blocked.

Deleting the immutable backup: from the Azure storage container, I tried to delete the backup file I just created, and that attempt is blocked as well.

As expected, once written, the backup files cannot be modified or deleted until the configured retention period expires.
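Worth noting: immutability blocks writes and deletes, not reads, so the protected backup remains fully usable for restores. A minimal sketch of a restorability check against the same URL and credential used above:

-- The immutable blob stays readable, so you can still confirm the backup is restorable
RESTORE VERIFYONLY
FROM URL = 'https://dnstorage1.blob.core.windows.net/immcontainer/WideWorldImporters.bak';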
🔚 Final Thoughts

Immutability is not just a buzzword; it's a critical defense mechanism in the fight against ransomware. For SQL databases, where data integrity and availability are paramount, implementing immutable backups and storage can mean the difference between a minor incident and a catastrophic loss. By embracing immutability, organizations can ensure that their most valuable data remains safe, recoverable, and resilient, no matter what threats come their way.

Azure Data Studio Retirement

On February 6, 2025, we announced the upcoming retirement of Azure Data Studio (ADS) as we focus on delivering a modern, streamlined SQL development experience. ADS will remain supported until February 28, 2026, giving developers ample time to transition. This decision aligns with our commitment to simplifying SQL development by consolidating efforts on Visual Studio Code (VS Code) with the MSSQL extension, a powerful and versatile tool designed for modern developers.

Why Retire Azure Data Studio?

Azure Data Studio has been an essential tool for SQL developers, but evolving developer needs and the rise of more versatile platforms like VS Code have made it the right time to transition. Here's why:

Focus on innovation: VS Code, widely adopted across the developer community, provides a robust platform for delivering advanced features like cutting-edge schema management and improved query execution.
Streamlined tools: Consolidating SQL development on VS Code eliminates duplication, reduces engineering maintenance overhead, and accelerates feature delivery, ensuring developers have access to the latest innovations.

Why Transition to Visual Studio Code?

VS Code is the #1 developer tool, trusted by millions worldwide. It is a modern, versatile platform that meets the evolving demands of SQL and application developers. By transitioning, you gain access to cutting-edge tools, seamless workflows, and an expansive ecosystem designed to enhance productivity and innovation. We're committed to meeting developers where they are, providing a modern SQL development experience within VS Code. Here's how:

Modern development environment: VS Code is a lightweight, extensible, and community-supported code editor trusted by millions of developers. It provides regular updates, an active extension marketplace, and a seamless cross-platform experience for Windows, macOS, and Linux.

Comprehensive SQL features: With the MSSQL extension in VS Code, you can:
Execute queries faster with filtering, sorting, and export options for JSON, Excel, and CSV.
Manage schemas visually with Table Designer, Object Explorer, and support for keys, indexes, and constraints.
Connect to SQL Server, Azure SQL (all offerings), and SQL database in Fabric using an improved Connection Dialog.
Streamline development with scripting, object modifications, and a unified SQL experience.
Optimize performance with an enhanced Query Results Pane and execution plans.
Integrate with DevOps and CI/CD pipelines using SQL Database Projects.
Stay tuned for upcoming features; we're continuously building new experiences based on feedback from the community. Make sure to follow the MSSQL repository on GitHub to stay updated and contribute to the project!

Streamlined workflow: VS Code supports cloud-native development, real-time collaboration, and thousands of extensions to enhance your workflows.

Transitioning to Visual Studio Code: What You Need to Know

We understand that transitioning tools can raise concerns, but moving from Azure Data Studio (ADS) to Visual Studio Code (VS Code) with the MSSQL extension is designed to be straightforward and hassle-free. Here's why you can feel confident about this transition:

No loss of functionality: If you use ADS to connect to Azure SQL databases, SQL Server, or SQL database in Fabric, you'll find that the MSSQL extension supports these scenarios seamlessly. Your database projects, queries, and scripts created in ADS are fully compatible with VS Code and can be opened without additional migration steps.
Familiar features, enhanced experience: VS Code provides advanced tools like improved query execution, modern schema management, and CI/CD integration. Additionally, alternative tools and extensions are available to replace ADS capabilities like SQL Server Agent and Schema Compare.
Cross-platform and extensible: Like ADS, VS Code runs on Windows, macOS, and Linux, ensuring a consistent experience across operating systems. Its extensibility allows you to adapt it to your workflow with thousands of extensions.

If you have further questions or need detailed guidance, visit the ADS Retirement page. The page includes step-by-step instructions, recommended alternatives, and additional resources.

Continued Support

With the Azure Data Studio retirement, we're committed to supporting you during this transition:

Documentation: Find detailed guides, tutorials, and FAQs on the ADS Retirement page.
Community Support: Engage with the active Visual Studio Code community for tips and solutions. You can also explore forums like Stack Overflow.
GitHub Issues: If you encounter any issues, submit a request or report bugs on the MSSQL extension's GitHub repository.
Microsoft Support: For critical issues, reach out to Microsoft Support directly through your account.

Transitioning to VS Code opens the door to a more modern and versatile SQL development experience. We encourage you to explore the new possibilities and start your journey today!

Conclusion

Azure Data Studio has served the SQL community well, but the Azure Data Studio retirement marks an opportunity to embrace the modern capabilities of Visual Studio Code. Transitioning now ensures you're equipped with cutting-edge tools and a future-ready platform to enhance your SQL development experience. For a detailed guide on ADS retirement, visit aka.ms/ads-retirement. To get started with the MSSQL extension, check out the official documentation. We're excited to see what you build with VS Code!

Introducing "Backups on Secondary" for SQL Server Always On Availability Groups with SQL Server 2025
We're excited to announce a major enhancement for SQL Server Always On Availability Groups in SQL Server 2025: Backups on Secondary. Until SQL Server 2022, you could only perform COPY_ONLY full backups and transaction log backups on a secondary replica of an Always On Availability Group. This enhancement in SQL Server 2025 allows you to offload all types of backups (full, differential, and transaction log) to a secondary replica, significantly improving performance, resource utilization, and operational flexibility.

What Is "Backups on Secondary"?

Traditionally, backups in an Always On Availability Group were typically performed on the primary replica, which could lead to resource contention and performance degradation for mission-critical workloads. With this update to the Backups on Secondary feature in SQL Server 2025, you can now configure your environment to perform all backup operations on a designated secondary replica. This includes:
Full backups
Differential backups
Transaction log backups

Key Benefits

🔄 Reduced Load on Primary Replica: By offloading backup operations to a secondary replica, you free up CPU, memory, and I/O resources on the primary replica, ensuring that your production workloads run more smoothly and efficiently.
🧩 Flexible Backup Strategies: You can now design more flexible and resilient backup strategies by leveraging secondary replicas in different geographic locations or data centers.
🛡️ Enhanced High Availability and Disaster Recovery: In the event of a failover, backup operations can continue seamlessly on another secondary replica, ensuring continuous data protection.

How It Works

The feature is easy to configure using T-SQL or SQL Server Management Studio (SSMS). There are two steps to make backups run on a secondary:
1. Configure backups to run on a secondary replica via the AUTOMATED_BACKUP_PREFERENCE and BACKUP_PRIORITY parameters; refer to Configure backups on secondary replicas of an Always On availability group for prerequisites and detailed steps.
2. Run the T-SQL BACKUP command using a tool such as SSMS, SQL Agent jobs, or maintenance plans.

These are the same steps that are already in place for SQL Server 2022 and prior versions. In those versions, this configuration only allowed COPY_ONLY and transaction log backups. Now, starting with SQL Server 2025, with the same configuration in place, you can perform FULL, DIFFERENTIAL, and T-LOG backups on a secondary replica and truly offload the backup overhead onto the secondary replica. You can also use the system function sys.fn_hadr_backup_is_preferred_replica to programmatically determine the preferred replica for backups.
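For reference, here is a minimal T-SQL sketch of the two steps described above. The availability group, database, and backup path are placeholders, and the replica-level BACKUP_PRIORITY settings are left at their defaults:

-- Step 1: prefer secondary replicas for automated backups on this availability group
ALTER AVAILABILITY GROUP [MyAG]
SET (AUTOMATED_BACKUP_PREFERENCE = SECONDARY);

-- Step 2: in the backup job that runs on every replica, back up only where this replica is preferred
IF sys.fn_hadr_backup_is_preferred_replica(N'MyDatabase') = 1
BEGIN
    BACKUP DATABASE [MyDatabase]
    TO DISK = N'\\backupshare\MyDatabase_full.bak'   -- placeholder path
    WITH COMPRESSION;
END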
Final Thoughts

The Backups on Secondary feature is a game-changer for organizations looking to optimize their SQL Server environments for performance, availability, and cost. Whether you're managing a large-scale enterprise deployment or a hybrid cloud setup, this feature gives you the flexibility and control you need to build a more efficient and resilient data platform. This feature is available in SQL Server 2025 CTP 2.0. You can download it and try it out today.

ZSTD compression in SQL Server 2025

Introducing ZSTD Compression in SQL Server 2025: A Leap Forward in Data Efficiency

With the release of SQL Server 2025 Preview, we are introducing a powerful new feature that promises to significantly enhance data storage efficiency: support for the Zstandard (ZSTD) compression algorithm. This open-source compression technology, originally developed by Yann Collet at Facebook, is now being integrated into SQL Server as a modern alternative to the long-standing MS_XPRESS compression algorithm.

What is ZSTD?

ZSTD is a fast, lossless compression algorithm that offers a compelling balance between compression ratio and speed. It has gained widespread adoption across the tech industry due to its:
High compression ratios, often outperforming legacy algorithms like MS_XPRESS.
Fast decompression speeds, ideal for read-heavy workloads.
Scalability, with tunable compression levels to balance CPU usage and storage savings.

Compression in SQL Server: Then and Now

SQL Server has long supported backup compression to reduce storage costs and improve I/O performance. The MS_XPRESS algorithm has served well for many years. However, as data volumes grow and performance demands increase, a more efficient solution is needed. Enter ZSTD.

Key Benefits of ZSTD in SQL Server 2025

Improved Compression Ratios: In internal benchmarks, ZSTD has shown up to 30–50% better compression compared to MS_XPRESS, depending on the data type and structure. This translates to significant storage savings, especially for large data warehouses and archival systems.
Faster Decompression: ZSTD's optimized decompression path ensures that queries on compressed data remain fast, even as compression ratios improve.
Flexible Compression Levels: SQL Server 2025 allows administrators to choose from multiple ZSTD compression levels, enabling fine-tuned control over the trade-off between CPU usage and compression efficiency.
Seamless Integration: ZSTD is integrated into the existing compression framework, meaning you can apply it using familiar T-SQL syntax and management tools.

How to Use ZSTD in SQL Server 2025

Using ZSTD is as simple as specifying it in your compression options. If no algorithm is specified, backup compression defaults to MS_XPRESS.

Specify ZSTD during BACKUP:

BACKUP DATABASE <database_name> TO {DISK|TAPE|URL}
WITH COMPRESSION (ALGORITHM = ZSTD)

Specify ZSTD and a compression level:

BACKUP DATABASE <database_name> TO {DISK|TAPE|URL}
WITH COMPRESSION (ALGORITHM = ZSTD, LEVEL = HIGH)

The default compression level is LOW. Allowed values are LOW, MEDIUM, and HIGH. As you would expect, specifying the HIGH compression level takes longer than the LOW compression level.

How to verify which compression algorithm was used: you can use the RESTORE HEADERONLY command to verify which compression algorithm was used during the backup. The CompressionAlgorithm column will display the name of the algorithm.

Comparing compression levels: a quick look at the file sizes shows that ZSTD does deliver savings, and a higher level of compression is achieved when LEVEL = HIGH is specified.
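As mentioned above, the backup header reports the algorithm; a minimal sketch of that check follows (the file path is a placeholder):

-- Inspect the backup header; the CompressionAlgorithm column reports the algorithm used
RESTORE HEADERONLY
FROM DISK = N'C:\Backups\MyDatabase_zstd.bak';   -- placeholder path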
Final Thoughts

While the actual savings from the new compression algorithm vary based on the nature of the data and the resources available, in general we have seen significant improvements in compression. The integration of ZSTD into SQL Server 2025 marks a significant step forward in data compression technology. Whether you're managing terabytes of transactional data or optimizing your data lake for analytics, ZSTD offers a modern, efficient, and high-performance solution.

Looking forward to hearing your feedback. SQL Server 2025 brings a ton of new features and capabilities; refer to What's new in SQL Server 2025 for all the excitement.

Cheers,
Dinakar

Unlocking the Power of Regex in SQL Server
Regular expressions, or regex, are powerful tools for text manipulation and pattern matching. They are incredibly useful in various scenarios, such as data validation, extraction, and transformation. SQL Server 2025 introduces native support for regex functions, making it easier to perform complex text operations directly within SQL queries. This regex support is available in the SQL Server 2025 public preview, Azure SQL Database, SQL database in Fabric, and Azure SQL Managed Instance. The table-valued functions (TVFs) are currently available in the SQL Server 2025 public preview only; they will follow on the other offerings in the coming weeks, once deployment is complete.

Advantages of using Regex in SQL queries

Regular expressions offer several advantages when used in SQL queries:
Flexibility in Pattern Matching: Regular expressions allow for complex and flexible pattern matching, making it easier to search for specific text patterns within large datasets.
Efficiency in Data Manipulation: Regex functions can efficiently manipulate and transform text data, reducing the need for multiple string functions and simplifying queries.
Improved Data Validation: Regular expressions can be used to validate data formats, ensuring that data conforms to specific patterns, such as email addresses, phone numbers, and zip codes.
Enhanced Data Extraction: Regex functions can extract specific parts of text data, making it easier to retrieve valuable information from unstructured data.
Standardization and Cleaning: Regex can help standardize and clean data by removing unwanted characters, whitespace, and duplicates.

Regex functions

REGEXP_LIKE

The REGEXP_LIKE function checks if a string matches a pattern. It's useful for data validation and filtering rows based on specific patterns. This function can be used in the WHERE/HAVING clause, CHECK constraints, CASE expressions, conditional statements, and so on.

Scenario #1: This query finds all employees with valid email addresses.

SELECT [Name], Email
FROM Employees
WHERE REGEXP_LIKE(Email, '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$');

Scenario #2: Create an Employees table with CHECK constraints on the Email and Phone_Number columns to ensure data meets the specified criteria.

DROP TABLE IF EXISTS Employees
CREATE TABLE Employees (
    ID INT IDENTITY(101,1),
    [Name] VARCHAR(150),
    Email VARCHAR(320) CHECK (REGEXP_LIKE(Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')),
    Phone_Number NVARCHAR(20) CHECK (REGEXP_LIKE (Phone_Number, '^(\d{3})-(\d{3})-(\d{4})$'))
);

REGEXP_COUNT

The REGEXP_COUNT function counts the number of times a pattern appears in a string. It's useful for analyzing text data and identifying patterns.

Example: This query counts the number of valid email addresses for each domain.

SELECT REGEXP_SUBSTR(Email, '@(.+)$', 1, 1,'c',1) AS Domain, COUNT(*) AS Valid_email_count
FROM Employees
WHERE REGEXP_COUNT(Email, '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$') > 0
GROUP BY REGEXP_SUBSTR(Email, '@(.+)$', 1, 1,'c',1);

REGEXP_INSTR

The REGEXP_INSTR function finds the position of a pattern in a string. It's useful for locating specific substrings within text data.

Example: This query finds the position of the @ sign in each employee's email address.

SELECT Name, Email, REGEXP_INSTR(Email, '@') AS Position_of_@
FROM Employees;

REGEXP_REPLACE

The REGEXP_REPLACE function returns a modified string in which occurrences of the regular expression pattern are replaced with the specified replacement string.
You can use it to modify or transform text data based on a regex pattern.

Example: This query redacts sensitive data of variable length and format from a table that contains personal information like Social Security Numbers (SSNs) and credit card numbers.

SELECT sensitive_info,
REGEXP_REPLACE(sensitive_info, '(\d{3}-\d{2}-\d{4}|\d{4}-\d{4}-\d{4}-\d{4})', '***-**-****') AS redacted_info
FROM personal_data;

REGEXP_SUBSTR

The REGEXP_SUBSTR function extracts a part of a string that matches a pattern. It's useful for extracting specific information from text data.

Example: This query extracts the domain part of valid email addresses.

SELECT Name, Email,
REGEXP_SUBSTR(Email, '@(.+)$', 1, 1, 'i', 1) AS Domain
FROM Employees
WHERE REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$');

REGEXP_MATCHES

The REGEXP_MATCHES function finds all the matches of a pattern in a string and returns them in a tabular format (i.e., as rows and columns). It's useful for extracting multiple occurrences of a pattern. It is typically used to find all occurrences of a pattern or to retrieve parts of a string using capture groups, especially when you expect multiple matches per input.

Example: This query finds all key-value pairs, including substring matches, in the string and returns them in tabular format.

SELECT * FROM REGEXP_MATCHES('Name: John Doe; Email: john.doe@example.com; Phone: 123-456-7890', '(\w+):\s*([^;]+)');

Results:

match_id | start_position | end_position | match_value | substring_matches
1 | 1 | 14 | Name: John Doe | [{"value":"Name","start":1,"length":4},{"value":"John Doe","start":7,"length":8}]
2 | 17 | 43 | Email: john.doe@example.com | [{"value":"Email","start":17,"length":5},{"value":"john.doe@example.com","start":24,"length":20}]
3 | 46 | 64 | Phone: 123-456-7890 | [{"value":"Phone","start":46,"length":5},{"value":"123-456-7890","start":53,"length":12}]

REGEXP_SPLIT_TO_TABLE

The REGEXP_SPLIT_TO_TABLE function splits a string based on a pattern used as the delimiter and returns the fragments as rows in a table. It's useful for splitting strings that contain a list of items or for breaking down text into words or sentences.

Example: This query splits the string into parts based on semicolon and space characters and returns them in tabular format.

SELECT * FROM REGEXP_SPLIT_TO_TABLE('Name: John Doe; Email: john.doe@example.com; Phone: 123-456-7890', '; ');

Results:

value | ordinal
Name: John Doe | 1
Email: john.doe@example.com | 2
Phone: 123-456-7890 | 3

If your current database compatibility level is below 170 and you have a use case where you want to leverage the REGEXP_LIKE function but need additional time to perform the complete analysis and testing before switching the compatibility level, you can use the REGEXP_COUNT or REGEXP_INSTR functions as an interim solution.

Example:

SELECT [Name], Email
FROM (VALUES('John Doe', 'john@contoso.com'),
('Alice Smith', 'alice@fabrikam.com'),
('Bob Johnson', 'bob@fabrikam.net'),
('Charlie Brown', 'charlie@contoso.co.in')) as e (Name, Email)
WHERE REGEXP_COUNT(e.email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$') > 0
--OR REGEXP_INSTR(e.email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$') > 0;

Availability

All the intrinsic functions and TVFs are available in the SQL Server 2025 public preview. The intrinsic functions are also available in Azure SQL Database, SQL database in Fabric, and Azure SQL Managed Instance. The TVFs will follow in the future once deployment is completed. To learn more about the feature, please visit the learn page.
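One more pattern worth noting before wrapping up: because REGEXP_SPLIT_TO_TABLE is a table-valued function, it should compose with CROSS APPLY, letting you split a delimited column for every row of a table. A minimal sketch using inline, illustrative data rather than a real table:

-- Split a comma-separated tags column into one row per tag (illustrative inline data)
SELECT t.ticket_id, s.value AS tag, s.ordinal
FROM (VALUES (1, 'sql,regex,preview'),
             (2, 'backup,azure')) AS t(ticket_id, tags)
CROSS APPLY REGEXP_SPLIT_TO_TABLE(t.tags, ',') AS s;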
Conclusion

Regular expressions in SQL Server are powerful tools that can make your data tasks easier and more efficient. By bringing regex support natively, SQL Server lets you push more of these tasks into the database layer, which can simplify application code and enable new kinds of queries. Start simple with these functions, test your patterns, and soon you'll be slicing and dicing strings in SQL with ease. We encourage you to explore these functions and provide your valuable feedback. Stay tuned for more updates and enhancements in the future. Thank you for being a part of our community and helping us make Azure SQL and SQL Server the best platform for your data needs!

MSSQL Extension for VS Code: New UI Goes GA and GitHub Copilot Enters Preview
The SQL development experience is taking a major leap forward with the MSSQL extension for VS Code. The MSSQL extension is evolving to meet the needs of modern developers, bringing powerful, intelligent, and intuitive capabilities directly into your daily workflow. With this release, we're announcing the general availability of the enhanced UI and the public preview of GitHub Copilot integration. Together, these updates streamline how developers connect to databases, write queries, and manage schema objects, whether you're working locally with SQL Server 2025 or in the cloud with Azure SQL or SQL database in Fabric. As part of our broader effort, this release continues to transform SQL development in VS Code. While the new Schema Designer debuts alongside these updates, we'll cover it separately in an upcoming post.

A modern SQL development experience, now generally available

The enhanced UI in the MSSQL extension, first introduced in preview and made default in v1.30, is now officially generally available. Over the past several months, these experiences have been refined based on community feedback to deliver a faster, more intuitive way to work with SQL in Visual Studio Code.

What's included in the GA release:
Connection Dialog: Quickly connect to local or cloud databases using parameters, connection strings, or Azure browsing. Easily access saved and recent connections.
Object Explorer: Navigate complex database hierarchies with advanced filtering by object type, name, and schema.
Table Designer: Visually build or update tables, define relationships and constraints, and publish schema changes with a T-SQL preview.
Query Results Pane: Export, sort, and inspect query results in-grid or in a new tab. Includes Estimated and Actual Execution Plan buttons for performance analysis.
Query Plan Visualizer: Explore query execution plans with zoom, metrics, and node-level insights to help you identify and resolve performance bottlenecks.

As of this release, these features no longer require preview settings or feature flags. In other words, if you're already using the extension, the new UI is available immediately upon update.

GitHub Copilot is now integrated with the MSSQL extension (Preview)

In parallel with the UI GA release, GitHub Copilot integrates with the MSSQL extension for Visual Studio Code, bringing AI-assisted development into your SQL workflows. Available as a Public Preview, this integration helps developers write, understand, and optimize SQL code faster, whether you're working with raw T-SQL or modern ORMs. Since it's available as a Public Preview, you can start using it right away. Importantly, we have designed this experience specifically with developers in mind, especially those who work code-first or may not have deep T-SQL expertise. GitHub Copilot adapts to your database schema and open files to offer contextual suggestions and explanations.

What you can do with GitHub Copilot:
Chat with mssql: Ask natural language questions to generate queries, explain logic, scaffold tables, or debug stored procedures, all grounded in your connected database.
Inline Suggestions: Get real-time completions while writing SQL or ORM code, including Sequelize, Prisma, SQLAlchemy, and Entity Framework.
Schema Design and Exploration: Create, update, and reverse-engineer schemas using conversational or code-based prompts.
Query Optimization: Receive AI-driven suggestions to refactor slow queries, improve indexing, and analyze execution plans.
Understand Business Logic: Let GitHub Copilot explain stored procedures, views, and functions, ideal for onboarding or working with legacy code.
Security Analyzer: Identify vulnerable patterns like SQL injection and get safer alternatives in context.
Mock and Test Data Generation: Automatically generate sample data based on your schema for prototyping and testing.

GitHub Copilot actively uses your database connection and open files to deliver tailored assistance. To get the most out of it, connect to a database and work within SQL or ORM files. For additional guidance, check out the official documentation or watch the demo video to see GitHub Copilot in action.

Get started with GitHub Copilot

It's easy to try the enhanced UI and GitHub Copilot integration in the MSSQL extension, with no setup scripts and no configuration needed. Follow these steps:
Install or update the MSSQL extension for Visual Studio Code.
Connect to any database, local or cloud (SQL database in Fabric, Azure SQL, or SQL Server 2025 (Public Preview) and earlier versions).
If you have a GitHub Copilot subscription, sign in.

That's it. GitHub Copilot works automatically based on your connected database and active SQL or ORM files. To start chatting, right-click any database in the Object Explorer and select "Chat with this database." This opens a connected chat session with the Azure SQL Copilot agent, ready to assist with queries, schema design, optimization, and more.

Need more information on how to get started with the MSSQL extension in VS Code? Check out the official documentation for detailed information and quickstarts on every feature, or catch our latest livestream on the VS Code YouTube channel.

Conclusion

This release marks a significant step forward in the SQL developer experience inside VS Code, bringing a modern, streamlined UI and AI-assisted capabilities together in a single tool built for developers. As we continue evolving the extension, your feedback plays a critical role. If you try GitHub Copilot with the MSSQL extension, we'd love to hear from you:
🤖 GitHub Copilot feedback form: Share your experience using GitHub Copilot with the MSSQL extension
💬 GitHub discussions: Share your ideas and suggestions to improve the extension

This is just the beginning. We're building a modern SQL development experience for real-world workflows, and your input helps drive what comes next. Happy coding!

Seamless end-to-end SQL Server migration to Azure with Azure Arc
Migrating your on-premises SQL Server to Azure used to require multiple tools and involve several disconnected steps. We have addressed these challenges with an integrated, all-in-one migration experience for Arc-enabled SQL Servers. Our new solution eliminates the need for additional software or tools, requiring only Arc-enablement of your SQL Server to complete the entire end-to-end migration journey. We refer to this experience as a journey because the migration process can span several days or even weeks. Our solution manages every step along the way, allowing you the flexibility to pick up where you left off at any time.

About the solution

The Arc-enabled migration integrates all steps of the migration journey into a single, simple-to-use experience. The solution starts by providing an overview of the benefits of Azure SQL services and of modernizing your SQL Server in Azure. It offers continuous automated assessments of your SQL Server databases, providing recommendations for migration to various Azure SQL destinations. Based on these recommendations, an appropriate Azure SQL destination is suggested, tailored to your workload needs. Thereafter, you can choose to provision the recommended Azure SQL service in Azure and start the migration process. Throughout the process, you can monitor the ongoing migrations, evaluate data replicated in Azure, and control the cutover point to Azure according to your business requirements.

Figure 1: Integrated Arc-enabled end-to-end migration experience.

Note: Functionality, look, and feel of preview product experiences are subject to change.

This release is limited to migrating SQL Server databases to Azure SQL Managed Instance only, using the link feature as one of the best-performing, minimum-downtime migration solutions. It does not provide other migration options or destinations at this time.

Hands-on

We love hearing back from our customers! Your participation in the private preview and working with the product group can influence the product roadmap. If you're interested in evaluating your SQL Server workloads for migration to Azure or are ready to migrate, please fill out our application form to request an invitation to the private preview: https://aka.ms/arc-migrations-preview

Our product team will select candidates on an ongoing basis, based on onboarding capacity.

Additional resources

Migration overview from SQL Server to Azure - SQL Server enabled by Azure Arc

Managed Instance link with SQL Server 2017 is now GA
We are announcing the general availability of the Managed Instance link feature with SQL Server 2017, which enables near-real-time data replication from SQL Server to Azure SQL Managed Instance. The link feature is now supported in all SQL Server versions in mainstream and extended support, from SQL Server 2016 to SQL Server 2022. To use the Managed Instance link feature with SQL Server 2017, customers need to install the "Azure Connect Pack for SQL Server 2017". We recommend the latest version of SQL Server Management Studio to create and manage links with SQL Server 2017. To learn more about the SQL Server to SQL Managed Instance hybrid capabilities that the link feature unlocks, see the feature documentation page.