Azure SQL
Azure Data Studio Retirement
We're announcing the upcoming retirement of Azure Data Studio (ADS) on February 6, 2025, as we focus on delivering a modern, streamlined SQL development experience. ADS will remain supported until February 28, 2026, giving developers ample time to transition. This decision aligns with our commitment to simplifying SQL development by consolidating efforts on Visual Studio Code (VS Code) with the MSSQL extension, a powerful and versatile tool designed for modern developers.

Why Retire Azure Data Studio?

Azure Data Studio has been an essential tool for SQL developers, but evolving developer needs and the rise of more versatile platforms like VS Code have made it the right time to transition. Here's why:

- Focus on innovation: VS Code, widely adopted across the developer community, provides a robust platform for delivering advanced features like cutting-edge schema management and improved query execution.
- Streamlined tools: Consolidating SQL development on VS Code eliminates duplication, reduces engineering maintenance overhead, and accelerates feature delivery, ensuring developers have access to the latest innovations.

Why Transition to Visual Studio Code?

VS Code is the #1 developer tool, trusted by millions worldwide. It is a modern, versatile platform that meets the evolving demands of SQL and application developers. By transitioning, you gain access to cutting-edge tools, seamless workflows, and an expansive ecosystem designed to enhance productivity and innovation. We're committed to meeting developers where they are, providing a modern SQL development experience within VS Code. Here's how:

Modern development environment
VS Code is a lightweight, extensible, and community-supported code editor trusted by millions of developers. It provides:
- Regular updates.
- An active extension marketplace.
- A seamless cross-platform experience for Windows, macOS, and Linux.

Comprehensive SQL features
With the MSSQL extension in VS Code, you can:
- Execute queries faster with filtering, sorting, and export options for JSON, Excel, and CSV.
- Manage schemas visually with Table Designer, Object Explorer, and support for keys, indexes, and constraints.
- Connect to SQL Server, Azure SQL (all offerings), and SQL database in Fabric using an improved Connection Dialog.
- Streamline development with scripting, object modifications, and a unified SQL experience.
- Optimize performance with an enhanced Query Results Pane and execution plans.
- Integrate with DevOps and CI/CD pipelines using SQL Database Projects.

Stay tuned for upcoming features; we're continuously building new experiences based on feedback from the community. Make sure to follow the MSSQL repository on GitHub to stay updated and contribute to the project!

Streamlined workflow
VS Code supports cloud-native development, real-time collaboration, and thousands of extensions to enhance your workflows.

Transitioning to Visual Studio Code: What You Need to Know

We understand that transitioning tools can raise concerns, but moving from Azure Data Studio (ADS) to Visual Studio Code (VS Code) with the MSSQL extension is designed to be straightforward and hassle-free. Here's why you can feel confident about this transition:

No Loss of Functionality
If you use ADS to connect to Azure SQL databases, SQL Server, or SQL database in Fabric, you'll find that the MSSQL extension supports these scenarios seamlessly. Your database projects, queries, and scripts created in ADS are fully compatible with VS Code and can be opened without additional migration steps.
Familiar features, enhanced experience
VS Code provides advanced tools like improved query execution, modern schema management, and CI/CD integration. Additionally, alternative tools and extensions are available to replace ADS capabilities like SQL Server Agent and Schema Compare.

Cross-Platform and extensible
Like ADS, VS Code runs on Windows, macOS, and Linux, ensuring a consistent experience across operating systems. Its extensibility allows you to adapt it to your workflow with thousands of extensions.

If you have further questions or need detailed guidance, visit the ADS Retirement page. The page includes step-by-step instructions, recommended alternatives, and additional resources.

Continued Support

With the Azure Data Studio retirement, we're committed to supporting you during this transition:
- Documentation: Find detailed guides, tutorials, and FAQs on the ADS Retirement page.
- Community Support: Engage with the active Visual Studio Code community for tips and solutions. You can also explore forums like Stack Overflow.
- GitHub Issues: If you encounter any issues, submit a request or report bugs on the MSSQL extension's GitHub repository.
- Microsoft Support: For critical issues, reach out to Microsoft Support directly through your account.

Transitioning to VS Code opens the door to a more modern and versatile SQL development experience. We encourage you to explore the new possibilities and start your journey today!

Conclusion

Azure Data Studio has served the SQL community well, but the Azure Data Studio retirement marks an opportunity to embrace the modern capabilities of Visual Studio Code. Transitioning now ensures you're equipped with cutting-edge tools and a future-ready platform to enhance your SQL development experience. For a detailed guide on ADS retirement, visit aka.ms/ads-retirement. To get started with the MSSQL extension, check out the official documentation. We're excited to see what you build with VS Code!
Public Preview: Shrink for Azure SQL Database Hyperscale

Update: On 29 January 2025 we announced the General Availability for shrink in Hyperscale. For more details, please read the GA announcement.

If you are using Hyperscale in Azure SQL Database, you know that it is a powerful tier that lets you rapidly scale up and scale out your database according to your needs, along with autoscaling storage. However, there could be situations where storage is scaled up automatically and then, due to some business need, a significant amount of data is removed or purged, leaving a lot of free space within the database.

Today, we are pleased to announce that database and data file shrink is available in the Hyperscale tier in preview. Now you can reduce the allocated size of a Hyperscale database using the same DBCC SHRINK* commands that you might be familiar with. This allows you to reduce the size of the databases and free up unused space to save storage costs.

How to use shrink in Hyperscale?

Using shrink is easy and straightforward. You use the same set of commands which you might have used in other tiers of Azure SQL Database or in SQL Server.

First, identify a Hyperscale database with substantial allocated but unused storage space. For definitions of allocated and used storage space, see Azure SQL Database file space management - Azure SQL Database | Microsoft Docs. The Azure portal also provides this information. You can also capture the current used, allocated, and unused space in each database file by executing the following query in the database:

    DECLARE @NumPagesPerGB float = 128 * 1024;
    SELECT file_id AS FileId
         , size / @NumPagesPerGB AS AllocatedSpaceGB
         , ROUND(CAST(FILEPROPERTY(name, 'SpaceUsed') AS float) / @NumPagesPerGB, 3) AS UsedSpaceGB
         , ROUND((size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS float)) / @NumPagesPerGB, 3) AS FreeSpaceGB
         , ROUND(max_size / @NumPagesPerGB, 3) AS MaxSizeGB
         , ROUND(CAST(size - FILEPROPERTY(name, 'SpaceUsed') AS float) * 100 / size, 3) AS UnusedSpacePercent
    FROM sys.database_files
    WHERE type_desc = 'ROWS'
    ORDER BY file_id;

A shrink operation can be initiated using either the DBCC SHRINKDATABASE command to shrink the entire database, or the DBCC SHRINKFILE command for individual data files. We recommend using DBCC SHRINKFILE, because you can run it in parallel on multiple sessions, targeting different data files. DBCC SHRINKDATABASE is a simpler choice because you only need to run one command, but it will shrink one file at a time, which can be time-consuming for larger databases. If the shrink operation fails with an error or is canceled, the progress it has made so far is retained, and the same shrink command can simply be executed again to continue.
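As a minimal sketch of what initiating a shrink might look like (the file ID, target size, and database name below are hypothetical; take the real values from the query above):

    -- Shrink one data file to a target size specified in MB.
    -- Example: file_id 1, targeting 55 GB (55 * 1024 = 56320 MB),
    -- leaving some headroom above the used space in the file.
    DBCC SHRINKFILE (1, 56320);

    -- Alternatively, shrink the whole database with a single command
    -- (simpler, but files are processed one at a time).
    DBCC SHRINKDATABASE (N'YourHyperscaleDB');

To parallelize, open several sessions and run DBCC SHRINKFILE in each, targeting a different file_id.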
Once shrink for all data files has completed, rerun the earlier query (or check in the Azure portal) to determine the resulting reduction in the allocated storage size. If there is still a large difference between used space and allocated space, you can rebuild indexes to reduce the total number of used data pages. This may temporarily increase allocated space further; however, shrinking files again after rebuilding indexes should result in a higher reduction in allocated space. For more details about Azure SQL Database space management, see the following documentation article: Database file space management - Azure SQL Database | Microsoft Learn.

Known behaviors / limitations

- Database shrink is a long-running operation. For larger databases, it may span multiple days. To avoid shrink getting interrupted, we recommend using a client that is unlikely to get disconnected from the database.
- While shrink is running, used and allocated space for the database in the Azure portal might not be reported.
- Running SHRINKFILE with a target size slightly higher than the used space in the file tends to have a higher success rate compared to setting it to the exact used space. For instance, if a file is 128 GB in total size with 50 GB used and 78 GB free, setting the target size to 55 GB results in a better space reduction compared to using 50 GB.
- When executing DBCC SHRINKFILE concurrently on multiple files, you may encounter occasional blocking between the sessions. This is expected and does not impact the outcome of shrink.
- Shrinking of the transaction log file in the Hyperscale tier is not required, as it does not contribute to the allocated data size and cost. Executing DBCC SHRINKFILE(2) has no effect on the transaction log size.

Shrink is currently in preview and has the following limitation: shrink is not allowed on unencrypted databases. Any such attempt raises the following error:

    Msg 49532, Level 16, State 1, Line 1
    DBCC SHRINKFILE for data files is not supported in a Hyperscale database when the database is not encrypted. Enable transparent data encryption and try again.

To find the encryption state of the database, execute the following query:

    SELECT db_name(database_id) AS 'database_name'
         , encryption_state_desc
    FROM sys.dm_database_encryption_keys
    WHERE database_id = db_id();

If the encryption state is anything other than ENCRYPTED, shrink will not start.
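If the database turns out not to be encrypted, transparent data encryption with a service-managed key can be enabled with a single statement before attempting the shrink. A sketch, assuming a placeholder database name (organizations using customer-managed keys will have additional setup to consider):

    -- Enable transparent data encryption (TDE) so that shrink is permitted.
    ALTER DATABASE [YourHyperscaleDB] SET ENCRYPTION ON;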
Conclusion

We hope that you will find shrink useful and beneficial for your Hyperscale databases. We welcome your feedback and suggestions on how to improve it. You can contact us by adding to this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Introducing the Enhanced Azure SQL Database Free Offer: Now Generally Available

We are thrilled to announce the general availability of our new Azure SQL Database free offer. Now, each Azure subscription includes not just one, but 10 serverless databases. Each database comes with a complimentary allocation of 100,000 vCore seconds of compute, 32 GB of data storage, and 32 GB of backup storage every month, for the lifetime of your subscription. This enhanced free offer is ideal for new Azure customers looking to develop for free, learn SQL, or create a proof of concept, as well as existing Azure customers considering adding another database.

Get started today

To learn more about the offer, see the Azure SQL Database documentation. If you already have an Azure account, you can head straight to the Azure SQL Database provisioning page and select the Apply offer button in the banner at the top of the page. This offer is now live in all regions!

Offer details

The Azure SQL Database free offer provides access to the full capabilities of the General Purpose tier, renowned for its versatility and ability to support a wide range of workloads. Whether you're handling routine operations or running high-performance tasks, the free offer empowers you to configure your database from 0.5 vCore to 4 vCores, scaling up or down based on your needs. You can also benefit from the serverless option, which automatically pauses the database when idle and resumes it as soon as activity starts, reducing compute costs while maximizing your free allocation.

With this free offer, every Azure subscription, whether it's pay-as-you-go, under an enterprise agreement, or part of the Microsoft Partner Network, can now include up to 10 serverless databases. The offer is designed to last for the lifetime of your subscription and refreshes monthly, giving each database a generous allocation of 100,000 vCore seconds, 32 GB of data storage, and 32 GB of backup storage. That's a total of 1 million vCore seconds of free compute every month per subscription. Once a region is selected for the free database under a subscription, the same region applies to all 10 free databases in that subscription and cannot be changed.

If you consume your monthly free resources before the end of the cycle, you have two flexible options:

- Auto-pause: Allow the database to pause itself and resume usage automatically when the next month's free allocation begins.
- Continue usage mode: Keep the database running and pay for additional usage. In this mode, you'll continue to receive the free monthly allocation while unlocking premium capabilities, such as scaling up to 80 vCores of compute and 4 terabytes of storage. This makes it easy to start small and scale seamlessly as your business grows.

A standout benefit of this free offer is its seamless transition. You can move from free usage to paid usage without any disruption; your data, schema, settings, and connections remain intact. There's no need to migrate or reconfigure, making it effortless to grow your database as your needs evolve. Additionally, the Azure portal includes 'Free amount remaining' metrics, enabling you to monitor your consumption and manage your costs with ease. This makes the Azure SQL Database free offer an exceptional choice for developers, learners, and enterprises alike, whether you're just starting out or preparing to scale.
Develop for free

The Azure SQL Database free offer is tailored for a wide range of users, from students and individual developers to small businesses and large enterprises, looking to develop and scale their SQL workloads at no initial cost. This offer enables you to launch up to 10 fully featured Azure SQL databases, optimizing both your financial resources and developmental ambitions.

Potential Use Cases:

- Application Development: Initiate the development of applications or websites with SQL as the core backend database.
- Skill Enhancement: Engage in hands-on SQL learning or refine your existing skills through practical experience and targeted tutorials.
- Prototyping: Craft proofs of concept or prototypes for innovative projects and ideas.
- Expansion and Testing: Integrate additional databases into your Azure subscription to facilitate testing or experimentation.
- Migration Projects: Migrate applications from on-premises setups or other cloud environments to Azure, leveraging its robust capabilities.

Leveraging AI with the Azure SQL Database Free Offer:

- Predictive Analytics: Use AI to forecast user behaviors and business outcomes, enhancing decision-making processes across various applications.
- Personalization Engines: Develop sophisticated personalization algorithms to enhance user experience by tailoring content and recommendations in real time.
- Anomaly Detection: Implement AI to detect unusual patterns or potential threats, ensuring the security and integrity of your data.
- Automated Data Management: Utilize AI to automate data cleaning, transformation, and integration tasks, reducing manual overhead and increasing efficiency.

Key Benefits of the Azure SQL Database Free Offer

With the Azure SQL Database free offer, you gain the advantages of a cloud-native relational database service known for its high availability, stellar performance, security, scalability, and compatibility with SQL Server. Furthermore, you can harness the comprehensive suite of tools and services integrated with Azure SQL Database, such as Azure Data Studio, Visual Studio Code, Azure Data Factory, Azure Synapse Analytics, and Power BI. This rich ecosystem enhances your database's functionality and seamlessly connects with your existing workflows.

Learn more

The Azure SQL Database free offer is available now, and you can start using it today. Don't miss this opportunity to get up to 10 free Azure SQL databases that can help you achieve your goals and grow your business. To learn more about this offer, visit the Azure SQL Database documentation. To get started, create your free database from the Azure portal by creating an Azure SQL Database resource. We hope you enjoy this new offer, and we look forward to hearing your feedback and seeing what you build with Azure SQL Database.
Shrink for Azure SQL Database Hyperscale is now generally available

Today we are thrilled to announce the General Availability (GA) of database shrink in Azure SQL Database Hyperscale. This milestone marks another significant improvement in our Hyperscale service, providing our customers with more flexibility and efficiency in managing their database storage.

Overview

Database shrink in Azure SQL Database allows customers to reclaim unused space within their databases to optimize storage costs, and it is now available in the Hyperscale service tier too. This feature has been highly anticipated by many customers, and we are excited to deliver it with robust capabilities and a seamless user experience. The improvement requires no new learning, as the same DBCC SHRINKDATABASE and DBCC SHRINKFILE commands are used.

Database shrink was first announced for Hyperscale in public preview last year. During the preview phase, we received invaluable feedback from our customers, which helped us refine and enhance this capability. We are grateful for the active participation and insights shared by the customers, which played a crucial role in shaping the feature.

Key features

- Storage Optimization: Database shrink effectively reclaims unused space, reducing the allocated storage footprint of your database.
- Cost Efficiency: By optimizing storage usage, customers can potentially lower their storage costs.
- Ease of Use: The feature uses the same syntax as what is available in other service tiers and in SQL Server, so customers can seamlessly use existing scripts, minimizing disruption during adoption.

How to use

To help you get started with the shrink functionality, we have provided comprehensive documentation including example scripts. Here are the basic steps to implement database shrink in your Hyperscale database:

1. Connect to your Hyperscale database through your preferred management tool, such as SQL Server Management Studio or Azure Data Studio.
2. Evaluate the free space available in the database.
3. Execute the shrink command using the provided T-SQL (DBCC SHRINKDATABASE and DBCC SHRINKFILE).
4. Optionally, monitor the shrink process through a DMV to ensure successful completion (a sketch follows this list).
5. Review the reclaimed space and rerun by adjusting the parameters, if necessary.
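For step 4, one way to watch a shrink from a separate session is to query sys.dm_exec_requests, which reports percent_complete for shrink operations. A minimal sketch; the command names filtered on below are the ones shrink operations typically surface, but treat the filter as an assumption to verify in your environment:

    -- Check progress of an in-flight shrink from another session.
    SELECT r.session_id
         , r.command
         , r.percent_complete
         , r.start_time
    FROM sys.dm_exec_requests AS r
    WHERE r.command IN ('DbccFilesCompact', 'DbccSpaceReclaim');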
Conclusion

The release of database shrink in Hyperscale is a testament to our commitment to continuous improvement in the Hyperscale service tier. The General Availability of database shrink in Azure SQL Database Hyperscale is a major milestone, and we are excited to see the positive impact it will have on your database management. You can contact us by adding to this blog post and we'll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Managed SQL Deployments Like Terraform

Introduction

This is the next post in our series on CI/CD for SQL projects. In this post we will challenge some long-held beliefs on how we should manage SQL deployments. Traditionally we've always had this notion that we should never drop data in any environment. Deployments should almost exclusively be done via SQL scripts that are manually run to ensure completion and to prevent any type of data loss. We will challenge this and propose a solution that falls more in line with other modern DevOps tooling and practices. If this sounds appealing to you, then let's dive into it.

Why

We've always approached the data behind our applications as the differentiating factor when it comes to intellectual property (IP). No one wants to hear the words that we've lost data or that the data is unrecoverable. Let me be clear and throw a disclaimer on what I am going to propose: this is not a substitute for proper data management techniques to prevent data loss. Rather, we are going to look at a way to thread the needle on keeping the data that we need while removing the data that we don't.

Shadow Data

We've all heard about "shadow IT", well what about "shadow data"? I think every developer has been there. For example, taking a backup of a table/database to ensure we don't inadvertently drop it during a deployment. Heck, sometimes we may even go a step further and back this up into a lower environment. The caveat is that we very rarely ever go back and clean up that backup. We've effectively created a snapshot of data which we kept for our own comfort. This copy is now in an ungoverned, unmanaged, and potentially insecure state. This issue then gets compounded if we have automated backups or restore-to-QA operations. Now we keep amplifying and spreading our shadow data. Shouldn't we focus on improving the Software Delivery Lifecycle (SDLC), ensuring confidence in our data deployments? Let's take it a step further: shouldn't we invest in our data protection practice? Why should we be doing this when we have technology that backs up our SQL schema and databases?

Another consideration: what about those quick "hot fixes" that we applied in production? The ones where we changed a varchar() column length to accommodate the size of a field in production. I am not advocating for making these changes in production... but when your CIO or VP is escalating since this is holding up your business's data warehouse and you so happen to have the SQL admin login credentials... stuff happens. Wouldn't it be nice if SQL had a way to report back that this change needs to be accommodated for in the source schema? Again, the answer is in our SDLC process. So, where is the book of record for our SQL schemas? Well, if this is your first read in this series or if you are unfamiliar with source control, I'd encourage you to read Leveraging DotNet for SQL Builds via YAML | Microsoft Community Hub, where I talk about the importance of placing your database projects under source control. The TL/DR... your database schema definitions should be defined under source control, ideally as a .sqlproj.

Where Terraform Comes In

At this point I've already pointed out a few instances of how our production database instance can differ from what we have defined in our source project. This certainly isn't anything new in software development. So how do other software development tooling and technologies account for this?
Generally, application code simply gets overwritten, and we have backup versions either via release branches, git tags, or other artifacts. Cloud infrastructure can be defined as Infrastructure as Code (IaC) and as such still follows something similar to our application code workflow. There are two main flavors of IaC for Azure: Bicep/ARM and Terraform. Bicep/ARM adheres to an incremental deployment, which has its pros and cons. The quick version is that Azure Resource Manager (ARM) deployments will not delete resources that are not defined in the template. Part of this has led to Azure Deployment Stacks, which can help enforce resource deletion when it's been removed from a template. If you're interested in understanding a Terraform workflow, I will point you to one of my other posts on the topic.

At a high level, Terraform evaluates your IaC definition and determines what properties need to be updated and, more importantly, what resources need to be removed. Now how does Terraform do this, and more importantly, how can we tell what properties will be updated and/or removed? Terraform has a concept known as a plan. This plan will run your deployment against what is known as the state file (in Bicep/ARM this is the Deployment Stack) and produce a summary of changes that will occur. This includes new resources to be created, modification of existing resources, and deletion of resources previously deployed to the same state file. Typically, I recommend running a Terraform plan across all environments at CI. This ensures one can evaluate changes being proposed across all potential environments and summarize these changes at the time of the Pull Request (PR). I then advise re-executing this plan prior to deployment as a way to confirm/re-evaluate if anything has been updated since the original plan ran. Some will argue the previous plan can be "approved" to deploy to the next environment; however, there is little overhead in running a second plan and I prefer this option. Here's the thing... SQL actually has this same functionality.

Deploy Reports

Via SqlPackage there is additional functionality we can leverage with our .dacpacs. We are going to dive a little more into Deploy Reports. If you have followed this series, you may know we use the SqlPackage Publish command wrapped behind the SqlAzureDacpacDeployment@1 task. More information on this can be found at Deploying .dacpacs to Azure SQL via Azure DevOps Pipelines | Microsoft Community Hub. So, what is a Deploy Report? A Deploy Report is the XML representation of the changes your .dacpac will make to a database. A sketch of one denoting a risk of potential data loss appears at the end of this section.

This report is the key to our whole argument for modeling a SQL Continuous Integration/Continuous Delivery workflow after the one Terraform uses. We already will have a separate .dacpac file, built from the same .sqlproj, for each environment when leveraging pre/post scripts, as we saw in Deploying .dacpacs to Multiple Environments via ADO Pipelines | Microsoft Community Hub. So now we need to take each one of those and run a Deploy Report against the appropriate target. This is the same as effectively running a `tf plan` with a different variable file against each environment to determine what actions a Terraform `apply` will execute. These Deploy Reports are then what we will include in our PR approval to validate and approve any changes we will make to our SQL database.
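For illustration, here is a rough sketch of generating a Deploy Report from the command line and what the resulting XML can look like. The server, database, file names, and the dropped column are all hypothetical, and the XML is abbreviated; the exact shape of the report comes from the SqlPackage DeployReport action:

    sqlpackage /Action:DeployReport /SourceFile:"AdventureWorks.dacpac" /TargetServerName:"yourserver.database.windows.net" /TargetDatabaseName:"AdventureWorks" /OutputPath:"DeployReport.xml"

    <?xml version="1.0" encoding="utf-8"?>
    <DeploymentReport xmlns="http://schemas.microsoft.com/sqlserver/dac/DeployReport/2012/02">
      <Alerts>
        <Alert Name="DataIssue">
          <Issue Value="The column [dbo].[Orders].[LegacyCode] is being dropped, data loss could occur." />
        </Alert>
      </Alerts>
      <Operations>
        <Operation Name="Drop">
          <Item Value="[dbo].[Orders].[LegacyCode]" Type="SqlSimpleColumn" />
        </Operation>
      </Operations>
    </DeploymentReport>

The DataIssue alert is exactly the signal we want surfaced in a PR: a reviewer can see, before anything deploys, that the change is destructive.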
Dropping What's Not in Source Control This is the controversial part and the biggest sell in our adoption of a Terraform like approach to SQL deployments. It has long been considered a best practice to have whatever is deployed match what is under source control. This provides for a consistent experience when developing and then deploying across multiple environments. Within IaC, we have our cloud infrastructure defined in source control and deployed across environments. Typically, it is seen as a good practice to delete resources which have been removed from source control. This helps simplify the environment, reduces cost, and reduces potential security surface areas. So why not the same for databases? Typically, it is due to us having the fear of losing data. To prevent this, we should have proper data protection and recovery processes in place. Again, I am not addressing that aspect. If we have those accounted for, then by all means, our source control version of our databases should match our deployed environments. What about security and indexing? Again, this can be accounted for in Deploying .dacpacs to Multiple Environments via ADO Pipelines | Microsoft Community Hub. Where we have two different post deployment security scripts, and these scripts are under source control! How can we see if data loss will occur? Refer back to the Deploy Reports for this! There potentially is some natural hesitation as the default method for deploying a .dacpac has safeguards to prevent deployments in the event of potential data loss. This is not a bad thing as it prevents a destructive activity from automatically occurring; however, we by no means need to accept the default behavior. We will need to refer to SqlPackage Publish - SQL Server | Microsoft Learn. From this list we will be able to identify and explicitly set the value for various parameters. These will enable our package to deploy even in the event of potential data loss. Conclusion This post hopefully challenges the mindset we have when it comes to database deployments. By taking an approach that more closely relates to modern DevOps practices, we can gain confidence that our source control and database match, increased reliability and speed with our deployments, and we are closing potential security gaps in our database deployment lifecycle. This content was not designed to be technical. In our next post we will demo, provide examples, and talk through how to leverage YAML Pipelines to accomplish what we have outlined here. Be sure to follow me on LinkedIn for the latest publications. For those who are technically sound and want to skip ahead feel free to check out my code on my GitHub : https://github.com/JFolberth/cicd-adventureWorks and https://github.com/JFolberth/TheYAMLPipelineOneAlways Encrypted Assessment and online encryption in SQL Server Management Studio 21
Conclusion

This post hopefully challenges the mindset we have when it comes to database deployments. By taking an approach that more closely relates to modern DevOps practices, we can gain confidence that our source control and database match, increase reliability and speed in our deployments, and close potential security gaps in our database deployment lifecycle. This content was not designed to be technical. In our next post we will demo, provide examples, and talk through how to leverage YAML pipelines to accomplish what we have outlined here. Be sure to follow me on LinkedIn for the latest publications. For those who are technically sound and want to skip ahead, feel free to check out my code on my GitHub: https://github.com/JFolberth/cicd-adventureWorks and https://github.com/JFolberth/TheYAMLPipelineOne

Always Encrypted Assessment and online encryption in SQL Server Management Studio 21

Discover the new Always Encrypted Assessment feature that simplifies the encryption process for your database columns. This powerful tool evaluates your tables and columns, identifying which ones are suitable for encryption and highlighting any that aren't due to data type or constraints. With detailed insights and the ability to export results, this feature streamlines your data protection strategy. Don't miss out on learning how to make the most of this innovative addition to SQL Server Management Studio 21!
Extending Regular Expressions (Regex) Support on Azure SQL Managed Instance (MI)

We are happy to announce the Private Preview of Regular Expressions (Regex) support on Azure SQL Managed Instance (MI). This new feature brings powerful text processing capabilities to your SQL queries, enabling you to perform complex pattern matching and data manipulation with ease.

Regex support in Azure SQL

The Regex feature in Azure SQL follows the POSIX standard, is compatible with standard regex syntax, and supports a variety of regex functions, such as REGEXP_LIKE, REGEXP_COUNT, REGEXP_INSTR, REGEXP_REPLACE, and REGEXP_SUBSTR. The feature also supports case sensitivity, character classes, quantifiers, anchors, and capturing groups. Here are the functions and features supported:

- REGEXP_LIKE: Checks if a string matches a regular expression pattern.
- REGEXP_COUNT: Returns the number of times a pattern occurs in a string.
- REGEXP_INSTR: Returns the position of the first or the last (based on the specified option) occurrence of a pattern in a string.
- REGEXP_REPLACE: Replaces occurrences of a pattern in a string with another string.
- REGEXP_SUBSTR: Extracts a substring that matches a regular expression pattern.

To start using Regex in Azure SQL MI, simply include the relevant Regex functions in your SQL queries; a short sketch appears at the end of this post. To learn more about the Regex functions, please visit this blog: https://aka.ms/regex-prpr-blog

Getting Started

To start using the Regex feature in Azure SQL MI, make sure to select the “Always-up-to-date” update policy on the Additional Settings tab of the instance create portal blade to get access to all new SQL engine features as soon as they are available in Azure.

Join the Preview

The Regex feature is currently in private preview in Azure SQL Database and Azure SQL Managed Instance (MI). If you are interested in participating in the private preview and trying out the Regex feature, please fill out this form: https://aka.ms/regex-preview-signup For more details, you can refer to this blog: https://aka.ms/regex-prpr-blog

Feedback

We value your feedback and suggestions as we continue to improve and enhance Azure SQL. Please share your thoughts and experiences about the Regex feature with us through https://aka.ms/sqldbregex-feedback Thank you for your continued support. We look forward to seeing how you leverage Regex to simplify and enhance your data processing tasks.
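To close with a quick taste of the syntax, here is a small self-contained sketch using several of the functions listed above. The table, data, and patterns are made up for illustration, and it assumes a database running at the compatibility level required by the preview:

    -- Hypothetical sample data.
    CREATE TABLE #Contacts (Id int, Email varchar(200));
    INSERT INTO #Contacts VALUES
        (1, 'alice99@contoso.com'),
        (2, 'not-an-email');

    -- Keep only rows whose Email matches a simple email pattern.
    SELECT Id, Email
    FROM #Contacts
    WHERE REGEXP_LIKE(Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$');

    -- Redact the domain, extract the local part, and count digits.
    SELECT Id
         , REGEXP_REPLACE(Email, '@.*$', '@example.com') AS Redacted
         , REGEXP_SUBSTR(Email, '^[^@]+') AS LocalPart
         , REGEXP_COUNT(Email, '[0-9]') AS DigitCount
    FROM #Contacts;

    DROP TABLE #Contacts;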