Stream data in near real time from SQL to Azure Event Hubs - Public preview
If near-real-time integration is something you are looking to implement and you want a simpler way to get data out of SQL, keep reading. SQL is making it easier to integrate, and Change Event Streaming is a feature continuing this trend.

Modern applications and analytics platforms increasingly rely on event-driven architectures and real-time data pipelines. As businesses speed up, real-time decisioning is becoming especially important. Traditionally, capturing changes from a relational database requires complex ETL jobs, periodic polling, or third-party tools. These approaches often consume significant cycles on the data source, introduce operational overhead, and pose challenges with scalability, especially if you need one data source to feed multiple destinations.

In this context, we are happy to release the Change Event Streaming ("CES") feature into public preview for Azure SQL Database. This feature enables you to stream row-level changes - inserts, updates, and deletes - from your database directly to Azure Event Hubs in near real time.

Change Event Streaming addresses the above challenges by:

Reducing latency: Changes are streamed (pushed by SQL) as they happen. This is in contrast with traditional CDC (change data capture) or CT (change tracking) based approaches, where an external component needs to poll SQL at regular intervals. Traditional approaches allow you to increase polling frequency, but it gets difficult to find a sweet spot between minimal latency and minimal overhead from too-frequent polls.

Simplifying architecture: No need for Change Data Capture (CDC), Change Tracking, custom polling, or external connectors - SQL streams directly to the configured destination. This means a simpler security profile (fewer authentication points), fewer failure points, easier monitoring, and a lower skill bar to deploy and run the service. There is no need to worry about cleanup jobs: SQL keeps track of which changes are successfully received by the destination, handles the retry logic, and releases the log truncation point. Finally, with CES you have fewer components to procure and get approved for production use.

Decoupling: The integration is done at the database level. This eliminates the problem of dual writes - changes are streamed at transaction boundaries, once your source of truth (the database) has saved them. You do not need to modify your app workloads to get the data streamed - you tap right into the data layer. This is useful if your apps are dated and do not possess real-time integration capabilities; with some third-party apps, you may not even have an option to do anything other than database-level integration, and CES makes that simpler. Also, the publishing database does not concern itself with the final destination for the data: stream the data once to the common message bus, and it can be consumed by multiple downstream systems, irrespective of their number or capacity - the number of consumers does not affect the publishing load on the SQL side. Serving consumers is handled by the message bus, Azure Event Hubs, which is purpose-built for high-throughput data transfers.

Figure: conceptual data flow from SQL Server, with an arrow towards Azure Event Hubs, from where a number of arrows point to different final destinations.

Key Scenarios for CES

Event-driven microservices: Microservices need to exchange data, typically through a common message bus. With CES, you can have automated data publishing from each of the microservices.
This allows you to trigger business processes immediately when data changes.

Real-time analytics: Stream operational data into platforms like Fabric Real-Time Intelligence or Azure Stream Analytics for quick insights.

Breaking down monoliths: Typical monolithic systems with complex schemas sitting on top of a single database can be broken down one piece at a time: create a new component (typically a microservice), set up streaming from the relevant tables in the monolith database, and tap into the stream from the new component. You can then test-run the components, validate the results against the original monolith, and cut over once you have built confidence that the new component is stable.

Cache and search index updates: Keep distributed caches and search indexes in sync without custom triggers.

Data lake ingestion: Capture changes continuously into storage for incremental processing.

Data availability: This is not a scenario per se, but the amount of data you can tap into for business process mining, or intelligence in general, goes up whenever you plug another database into the message bus. For example, you plug your eCommerce system into the message bus to integrate with shipping providers, and the same data stream is immediately available for any other system to tap into.

How It Works

CES uses transaction log-based capture to stream changes with minimal impact on your workload. Events are published in a structured JSON format following the CloudEvents standard, including operation type, primary key, and before/after values. You can configure CES to target Azure Event Hubs via AMQP or Kafka protocols. For details on configuration, message format, and FAQs, see the official documentation: Feature Overview and CES: Frequently Asked Questions.

Get Started

Public preview: CES is available today in public preview for Azure SQL Database and as a preview feature in SQL Server 2025.

Private preview: CES is also available as a private preview for Azure SQL Managed Instance and Fabric SQL database; you can request to join the private preview by signing up here: https://aka.ms/sql-ces-signup

We encourage you to try the feature out and start building real-time integrations on top of your existing data. We welcome your feedback - please share your experience through the Azure Feedback portal or support channels. The comments below on this blog post will also be monitored if you want to engage with us. Finally, the CES team can be reached via email: sqlcesfeedback [at] microsoft [dot] com.

Useful resources

Free Azure SQL Database.
Free Azure SQL Managed Instance.
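To make the payload description in "How It Works" a little more concrete, here is a hedged sketch that parses a CloudEvents-style change event with the built-in JSON functions. The field names used below (type, source, data.before, data.after) are illustrative assumptions for this example only, not the documented CES event schema; refer to the feature documentation linked above for the exact format.

-- Illustrative only: the JSON shape below is an assumption, not the documented CES payload.
DECLARE @event nvarchar(max) = N'{
  "specversion": "1.0",
  "type": "update",
  "source": "/myserver/mydb/dbo.Orders",
  "data": {
    "key":    { "OrderID": 42 },
    "before": { "Status": "Pending" },
    "after":  { "Status": "Shipped" }
  }
}';

SELECT
    JSON_VALUE(@event, '$.type')        AS operation,       -- insert / update / delete
    JSON_VALUE(@event, '$.source')      AS source_object,   -- object the change came from
    JSON_QUERY(@event, '$.data.before') AS before_image,
    JSON_QUERY(@event, '$.data.after')  AS after_image;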
General Availability announcement of Backup/Restore capabilities in SQL Server 2025

Over the past several months, we've heard from countless customers who are eager for more robust options to protect, compress, and safeguard their SQL Server data. Since introducing these features in public preview, organizations of all sizes have validated their value in real-world workloads and provided invaluable feedback. Today, we are thrilled to announce the General Availability (GA) of three powerful SQL Server 2025 features: Backups on Secondary for Always On Availability Groups, ZSTD Compression, and Immutable Backups for Ransomware Protection. These advancements are now ready for production use, built around the needs and requests of SQL Server customers.

Backups on Secondary for SQL Server Always On Availability Groups

Previously in preview, this feature now reaches GA, allowing you to offload backup operations to secondary replicas in Always On Availability Groups. This enhancement optimizes resource utilization and minimizes overhead on primary replicas, ensuring better performance for mission-critical workloads.

What's New in GA: Improved reliability and support for production environments.

Key Benefits:
Comprehensive support: Full, differential, and transaction log backups are now fully supported on secondary replicas - not just COPY_ONLY and transaction logs.
Reduced impact on primary replica performance.
Simplified high-availability strategies.

Learn more from the original announcement: Introducing Backups on Secondary for SQL Server Always On Availability Groups.

ZSTD Compression in SQL Server 2025

The GA of ZSTD compression brings modern, efficient data compression to SQL Server, with industry-leading performance and efficiency that lets you save storage and speed up workloads.

What's New in GA: Full production support for ZSTD across key workloads.

Key Benefits:
Faster compression and decompression compared to legacy algorithms.
Lower storage footprint without sacrificing performance.
Choose your algorithm: ZSTD is now a standard option right alongside MS_XPRESS for row, page, and backup compression.
Tunable compression levels: Administrators can select from LOW, MEDIUM, or HIGH to balance resource use and savings.
Ideal for large-scale data environments.

Explore the preview details: ZSTD Compression in SQL Server 2025.

Backups to immutable storage: A Powerful Shield Against Ransomware

Your backups are now safer than ever. Thanks to native support for immutability with Azure Blob Storage, backup files can be rendered tamper-proof, protecting them from ransomware or even accidental deletion.

Key Benefits:
Strong defense against ransomware and malicious tampering.
Compliance with regulatory requirements for data integrity.
Peace of mind for critical backup strategies.

Read the detailed use case and how-to: Immutability: A Powerful Shield Against Ransomware in SQL Environments.

These features collectively empower organizations to:
Optimize performance and resource utilization.
Reduce operational costs through efficient compression.
Strengthen security posture against evolving threats.

Get Started Today

These features are available to all SQL Server 2025 customers. Ready to elevate your data protection, efficiency, and compliance posture?
Access the official SQL Server 2025 documentation for step-by-step guides on Microsoft Learn.
Review upgrade guidance and best practices.
Explore real-world configurations and FAQs.
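As a quick, hedged illustration of the ZSTD option described above: the sketch below assumes the WITH COMPRESSION (ALGORITHM = ...) backup syntax carries over to ZSTD, so check the linked documentation for the exact options (including the LOW/MEDIUM/HIGH levels) supported in your build. For the immutable-storage scenario, the immutability itself is configured on the blob container, not in the BACKUP statement.

-- Minimal sketch: ZSTD-compressed full backup to local disk (algorithm keyword assumed; verify against the docs)
BACKUP DATABASE [WideWorldImporters]
    TO DISK = N'D:\Backups\WideWorldImporters_zstd.bak'
    WITH COMPRESSION (ALGORITHM = ZSTD), CHECKSUM, STATS = 10;

-- Same idea against an immutable Azure Blob container (placeholders for the storage account and container)
BACKUP DATABASE [WideWorldImporters]
    TO URL = N'https://<storageaccount>.blob.core.windows.net/<immutable-container>/WideWorldImporters_zstd.bak'
    WITH COMPRESSION (ALGORITHM = ZSTD), CHECKSUM;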
Upgrade now and unlock the next level of resilience, efficiency, and security for your SQL Server workloads!
General Availability Announcement: Regex Support in SQL Server 2025 & Azure SQL

We're excited to announce the General Availability (GA) of native Regex support in SQL Server 2025 and Azure SQL - a long-awaited capability that brings powerful pattern matching directly into T-SQL. This release marks a significant milestone in modernizing string operations and enabling advanced text processing scenarios natively within the database engine.

What is Regex?

The other day, while building LEGO with my 3-year-old - an activity that's equal parts joy and chaos - I spent minutes digging for one tiny piece and thought, "If only Regex worked on LEGO." That moment of playful frustration turned into a perfect metaphor. Think of your LEGO box as a pile of data - a colorful jumble of tiny pieces. Now imagine trying to find every little brick from a specific LEGO set your kid mixed into the pile. That's tricky - you'd have to sift through each piece one by one. But what if you had a smart filter that instantly found exactly those pieces? That's what Regex (short for Regular Expressions) does for your data. It's a powerful pattern-matching tool that helps you search, extract, and transform text with precision. With Regex now natively supported in SQL Server 2025 and Azure SQL, this capability is built directly into T-SQL - no external languages or workarounds required.

What can Regex help you do?

Regex can help you tackle a wide range of data challenges, including:
Enhancing data quality and accuracy by validating and correcting formats like phone numbers, email addresses, zip codes, and more.
Extracting valuable insights by identifying and grouping specific text patterns such as keywords, hashtags, or mentions.
Transforming and standardizing data by replacing, splitting, or joining text patterns - useful for handling abbreviations, acronyms, or synonyms.
Cleaning and optimizing data by removing unwanted patterns like extra whitespace, punctuation, or duplicates.

Meet the new Regex functions in T-SQL

SQL Server 2025 introduces seven new T-SQL Regex functions, grouped into two categories: scalar functions (return a value per row) and table-valued functions (TVFs, return a set of rows). Here's a quick overview:

Function                 Type    Description
REGEXP_LIKE              Scalar  Returns TRUE if the input string matches the Regex pattern
REGEXP_COUNT             Scalar  Counts the number of times a pattern occurs in a string
REGEXP_INSTR             Scalar  Returns the position of a pattern match within a string
REGEXP_REPLACE           Scalar  Replaces substrings that match a pattern with a replacement string
REGEXP_SUBSTR            Scalar  Extracts a substring that matches a pattern
REGEXP_MATCHES           TVF     Returns a table of all matches including substrings and their positions
REGEXP_SPLIT_TO_TABLE    TVF     Splits a string into rows using a Regex delimiter

These functions follow the POSIX standard and support most of the PCRE/PCRE2 flavor of regular expression syntax, making them compatible with most modern Regex engines and tools. They support common features like:
Character classes (\d, \w, etc.)
Quantifiers (+, *, {n})
Alternation (|)
Capture groups ((...))

You can also use Regex flags to modify behavior:
'i' - Case-insensitive matching
'm' - Multi-line mode (^ and $ match line boundaries)
's' - Dot matches newline
'c' - Case-sensitive matching (default)

Examples: Regex in Action

Let's explore how these functions solve tricky real-world data tasks that were hard to do in earlier SQL versions.

REGEXP_LIKE: Data Validation - Keeping data in shape

Validating formats like email addresses or phone numbers used to require multiple functions or external tools.
With REGEXP_LIKE, it's now a concise query. For example, you can check whether an email contains valid characters before and after the @, followed by a domain with at least two letters like .com, .org, or .co.in.

SELECT [Name], Email,
       CASE WHEN REGEXP_LIKE(Email, '^[A-Za-z0-9._+]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')
            THEN 'Valid Email' ELSE 'Invalid Email' END AS IsValidEmail
FROM (VALUES
    ('John Doe', 'john@contoso.com'),
    ('Alice Smith', 'alice@fabrikam.com'),
    ('Bob Johnson', 'bob@fabrikam.net'),
    ('Charlie Brown', 'charlie@contoso.co.in'),
    ('Eve Jones', 'eve@@contoso.com')) AS e(Name, Email);

We can further use REGEXP_LIKE in CHECK constraints to enforce these rules at the column level (so no invalid format ever gets into the table). For instance:

CREATE TABLE Employees (
    ...,
    Email VARCHAR(320) CHECK (REGEXP_LIKE(Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')),
    Phone VARCHAR(20) CHECK (REGEXP_LIKE(Phone, '^(\d{3})-(\d{3})-(\d{4})$'))
);

This level of enforcement significantly enhances data integrity by ensuring that only correctly formatted values are accepted into the database.

REGEXP_COUNT: Count JSON object keys

Count how many top-level keys exist in a JSON string - no JSON parser needed!

SELECT JsonData,
       REGEXP_COUNT(JsonData, '"[^"]+"\s*:', 1, 'i') AS NumKeys
FROM (VALUES
    ('{"name":"Abhiman","role":"PM","location":"Bengaluru"}'),
    ('{"skills":["SQL","T-SQL","Regex"],"level":"Advanced"}'),
    ('{"project":{"name":"Regex GA","status":"Live"},"team":["Tejas","UC"]}'),
    ('{"empty":{}}'),
    ('{}')) AS t(JsonData);

REGEXP_INSTR: Locate patterns in logs

Find the position of the first error code (ERR-XXXX) in log messages - even when the pattern appears multiple times or in varying locations.

SELECT LogMessage,
       REGEXP_INSTR(LogMessage, 'ERR-\d{4}', 1, 1, 0, 'i') AS ErrorCodePosition
FROM (VALUES
    ('System initialized. ERR-1001 occurred during startup.'),
    ('Warning: Disk space low. ERR-2048. Retry failed. ERR-2049.'),
    ('No errors found.'),
    ('ERR-0001: Critical failure. ERR-0002: Recovery started.'),
    ('Startup complete. Monitoring active.')) AS t(LogMessage);

REGEXP_REPLACE: Redact sensitive data

Mask SSNs and credit card numbers in logs or exports - all with a single, secure query.

SELECT sensitive_info,
       REGEXP_REPLACE(sensitive_info, '(\d{3}-\d{2}-\d{4}|\d{4}-\d{4}-\d{4}-\d{4})', '***-**-****') AS redacted_info
FROM (VALUES
    ('John Doe SSN: 123-45-6789'),
    ('Credit Card: 9876-5432-1098-7654'),
    ('SSN: 000-00-0000 and Card: 1111-2222-3333-4444'),
    ('No sensitive info here'),
    ('Multiple SSNs: 111-22-3333, 222-33-4444'),
    ('Card: 1234-5678-9012-3456, SSN: 999-88-7777')) AS t(sensitive_info);

REGEXP_SUBSTR: Extract and count email domains

Extract domains from email addresses and group users by domain.

SELECT REGEXP_SUBSTR(Email, '@(.+)$', 1, 1, 'i', 1) AS Domain,
       COUNT(*) AS NumUsers
FROM (VALUES
    ('Alice', 'alice@contoso.com'),
    ('Bob', 'bob@fabrikam.co.in'),
    ('Charlie', 'charlie@example.com'),
    ('Diana', 'diana@college.edu'),
    ('Eve', 'eve@contoso.com'),
    ('Frank', 'frank@fabrikam.co.in'),
    ('Grace', 'grace@example.net')) AS e(Name, Email)
WHERE REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$')
GROUP BY REGEXP_SUBSTR(Email, '@(.+)$', 1, 1, 'i', 1);

REGEXP_MATCHES: Extract multiple emails from text

Extract all email addresses from free-form text like comments or logs - returning each match as a separate row for easy parsing or analysis.
SELECT *
FROM REGEXP_MATCHES('Contact us at support@example.com or sales@example.com',
                    '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}');

This query identifies and returns both email addresses found in the string - no need for loops, manual parsing, or external scripting.

REGEXP_SPLIT_TO_TABLE: Break down structured text

Split a string into rows using a Regex delimiter - ideal for parsing logs, config entries, or form data.

SELECT *
FROM REGEXP_SPLIT_TO_TABLE('Name: John Doe; Email: john.doe@example.com; Phone: 123-456-7890', '; ');

This query breaks the input string into rows for each field, making it easier to parse and process the data - especially when dealing with inconsistent or custom delimiters.

To explore more examples, syntax options, and usage details, head over to the regular expression functions documentation: https://learn.microsoft.com/en-us/sql/t-sql/functions/regular-expressions-functions-transact-sql?view=sql-server-ver17.

Conclusion

The addition of Regex functionality in SQL Server 2025 and Azure SQL is a major leap forward for developers and DBAs. It eliminates the need for external libraries, CLR integration, or complex workarounds for text processing. With Regex now built into T-SQL, you can:
Validate and enforce data formats
Sanitize and transform sensitive data
Search logs for complex patterns
Extract and split structured content

And this is just the beginning. Regex opens the door to a whole new level of data quality, text analytics, and developer productivity - all within the database engine. So go ahead and Regex away! Your feedback and partnership continue to drive innovation in Azure SQL and SQL Server - thank you for being part of it.
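As one final, hedged illustration of the cleanup scenario mentioned earlier (stripping extra whitespace), here is a minimal sketch that uses only the basic three-argument form of REGEXP_REPLACE; the sample string is made up for this example.

-- Collapse runs of whitespace to a single space and trim the ends
SELECT REGEXP_REPLACE(LTRIM(RTRIM(txt)), '\s+', ' ') AS cleaned
FROM (VALUES ('  SQL    Server   2025    brings    Regex  ')) AS t(txt);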
Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide

The Secure Future Initiative is Microsoft's strategic framework for embedding security into every layer of the data platform, from infrastructure to identity. As part of this initiative, Microsoft Entra authentication for Azure SQL Database offers a modern, password-less approach to access control that aligns with Zero Trust principles. By leveraging Entra identities, customers benefit from stronger security postures through multifactor authentication, centralized identity governance, and seamless integration with managed identities and service principals. Onboarding Entra authentication enables organizations to reduce reliance on passwords, simplify access management, and improve auditability across hybrid and cloud environments. With broad support across tools and platforms, and growing customer adoption, Entra authentication is a forward-looking investment in secure, scalable data access.

Migration Steps Overview

Organizations using SQL authentication can strengthen database security by migrating to Entra ID-based authentication. The following steps outline the process.

1. Identify your logins and users - review the existing SQL databases, along with all related users and logins, to assess what's needed for migration.
2. Enable Entra authentication on Azure SQL logical servers by assigning a Microsoft Entra admin.
3. Identify all permissions associated with the SQL logins and database users.
4. Recreate SQL logins and users with Microsoft Entra identities.
5. Upgrade application drivers and libraries to the minimum versions and update application connections to SQL databases to use Entra-based managed identities.
6. Update deployments for SQL logical server resources to have Microsoft Entra-only authentication enabled.
7. For all existing Azure SQL databases, flip to Entra-only after validation.
8. Enforce Entra-only for all Azure SQL databases with Azure Policies (deny).

Step 1: Identify your logins and users - Use SQL Auditing

Consider using SQL Audit to monitor which identities are accessing your databases. Alternatively, you may use other methods or skip this step if you already have full visibility of all your logins.

Configure server-level SQL Auditing. For more information on turning on server-level auditing, see: Configure Auditing for Azure SQL Database series - part1 | Microsoft Community Hub

SQL Audit can be enabled on the logical server, which enables auditing for all existing and new user databases. When you set up auditing, the audit log is written to your storage account in the SQL Database audit log format. Use sys.fn_get_audit_file_v2 to query the audit logs in SQL. You can join the audit data with sys.server_principals and sys.database_principals to view the users and logins connecting to your databases.
The following query is an example of how to do this:

SELECT (CASE WHEN database_principal_id > 0 THEN dp.type_desc ELSE NULL END) AS db_user_type
     , (CASE WHEN server_principal_id > 0 THEN sp.type_desc ELSE NULL END) AS srv_login_type
     , server_principal_name
     , server_principal_sid
     , server_principal_id
     , database_principal_name
     , database_principal_id
     , database_name
     , SUM(CASE WHEN succeeded = 1 THEN 1 ELSE 0 END) AS successful_logins
     , SUM(CASE WHEN succeeded = 0 THEN 1 ELSE 0 END) AS failed_logins
FROM sys.fn_get_audit_file_v2('<Storage_endpoint>/<Container>/<ServerName>', DEFAULT, DEFAULT, '2023-11-17T08:40:40Z', '2023-11-17T09:10:40Z')
-- join on database principals (users) metadata
LEFT OUTER JOIN sys.database_principals dp ON database_principal_id = dp.principal_id
-- join on server principals (logins) metadata
LEFT OUTER JOIN sys.server_principals sp ON server_principal_id = sp.principal_id
-- filter to actions DBAF (Database Authentication Failed) and DBAS (Database Authentication Succeeded)
WHERE (action_id = 'DBAF' OR action_id = 'DBAS')
GROUP BY server_principal_name
       , server_principal_sid
       , server_principal_id
       , database_principal_name
       , database_principal_id
       , database_name
       , dp.type_desc
       , sp.type_desc

Step 2: Enable Microsoft Entra authentication (assign admin)

Follow the documentation to enable Entra authentication and assign a Microsoft Entra admin at the server level. This is mixed mode; existing SQL authentication continues to work.

WARNING: Do NOT enable Entra-only (azureADOnlyAuthentications) yet. That comes in Step 7.

Entra admin recommendation: for production environments, it is advisable to use a PIM-enabled Entra group as the server administrator for enhanced access control.

Step 3: Identify & document existing permissions (SQL logins & users)

Retrieve a list of all your SQL authentication logins. Make sure to run this on the master database:

SELECT * FROM sys.sql_logins

List all SQL authentication users. Run the query below on every user database; it lists the users per database:

SELECT * FROM sys.database_principals WHERE TYPE = 'S'

Note: You may only need the 'name' column to identify the users.

List permissions per SQL authentication user:

SELECT database_principals.name
     , database_principals.principal_id
     , database_principals.type_desc
     , database_permissions.permission_name
     , CASE WHEN class = 0 THEN 'DATABASE'
            WHEN class = 3 THEN 'SCHEMA: ' + SCHEMA_NAME(major_id)
            WHEN class = 4 THEN 'Database Principal: ' + USER_NAME(major_id)
            ELSE OBJECT_SCHEMA_NAME(database_permissions.major_id) + '.' + OBJECT_NAME(database_permissions.major_id)
       END AS object_name
     , columns.name AS column_name
     , database_permissions.state_desc AS permission_type
FROM sys.database_principals AS database_principals
INNER JOIN sys.database_permissions AS database_permissions
    ON database_principals.principal_id = database_permissions.grantee_principal_id
LEFT JOIN sys.columns AS columns
    ON database_permissions.major_id = columns.object_id
   AND database_permissions.minor_id = columns.column_id
WHERE type_desc = 'SQL_USER'
ORDER BY database_principals.name

Step 4: Create SQL users for your Microsoft Entra identities

You can create users (preferred) for all Entra identities. Learn more at Create user. The "FROM EXTERNAL PROVIDER" clause in T-SQL distinguishes Entra users from SQL authentication users.

The most straightforward approach to adding Entra users is to use a managed identity for Azure SQL and grant the required three Graph API permissions. These permissions are necessary for Azure SQL to validate Entra users.
User.Read.All: Allows access to Microsoft Entra user information.
GroupMember.Read.All: Allows access to Microsoft Entra group information.
Application.Read.All: Allows access to Microsoft Entra service principal (application) information.

For creating Entra users with non-unique display names, use the Object ID in the CREATE USER T-SQL:

-- Retrieve the Object Id from the Entra blade in the Azure portal.
CREATE USER [myapp4466e] FROM EXTERNAL PROVIDER WITH OBJECT_ID = 'aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb'

For more information on finding the Entra Object ID: Find tenant ID, domain name, user object ID - Partner Center | Microsoft Learn

Alternatively, if granting these API permissions to SQL is undesirable, you may add Entra users directly using the T-SQL commands provided below. In these scenarios, Azure SQL bypasses Entra user validation.

Create SQL user for a managed identity or an application - This T-SQL snippet creates a SQL user for an application or managed identity. Substitute the @MSIname and @clientId variables (note: use the client ID, not the object ID) with the display name and client ID of your managed identity or application.

-- Replace the two variables with the managed identity display name and client ID
declare @MSIname sysname = '<Managed Identity/App Display Name>'
declare @clientId uniqueidentifier = '<Managed Identity/App Client ID>';

-- convert the guid to the right type and create the SQL user
declare @castClientId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @clientId), 1);

-- Construct command: CREATE USER [@MSIname] WITH SID = @castClientId, TYPE = E;
declare @cmd nvarchar(max) = N'CREATE USER [' + @MSIname + '] WITH SID = ' + @castClientId + ', TYPE = E;'
EXEC (@cmd)

For more information on finding the Entra Client ID: Register a client application in Microsoft Entra ID for the Azure Health Data Services | Microsoft Learn

Create SQL user for a Microsoft Entra user - Use this T-SQL to create a SQL user for a Microsoft Entra account. Enter your username and object ID:

-- Replace the two variables with the MS Entra user alias and object ID
declare @username sysname = '<MS Entra user alias>'; -- (e.g., username@contoso.com)
declare @objectId uniqueidentifier = '<User Object ID>';

-- convert the guid to the right type
declare @castObjectId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @objectId), 1);

-- Construct command: CREATE USER [@username] WITH SID = @castObjectId, TYPE = E;
declare @cmd nvarchar(max) = N'CREATE USER [' + @username + '] WITH SID = ' + @castObjectId + ', TYPE = E;'
EXEC (@cmd)

Create SQL user for a Microsoft Entra group - This T-SQL snippet creates a SQL user for a Microsoft Entra group. Set @groupName and @objectId to your values.
-- Replace the two variables with the MS Entra group display name and object ID
declare @groupName sysname = '<MS Entra group display name>'; -- (e.g., ContosoUsersGroup)
declare @objectId uniqueidentifier = '<Group Object ID>';

-- convert the guid to the right type and create the SQL user
declare @castObjectId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @objectId), 1);

-- Construct command: CREATE USER [@groupName] WITH SID = @castObjectId, TYPE = X;
declare @cmd nvarchar(max) = N'CREATE USER [' + @groupName + '] WITH SID = ' + @castObjectId + ', TYPE = X;'
EXEC (@cmd)

For more information on finding the Entra Object ID: Find tenant ID, domain name, user object ID - Partner Center | Microsoft Learn

Validate SQL user creation - When a user is created correctly, the EntraID column in this query shows the user's original MS Entra ID.

select CAST(sid as uniqueidentifier) AS EntraID, * from sys.database_principals

Assign permissions to Entra-based users - After creating Entra users, assign them SQL permissions to read or write, either by using GRANT statements or by adding them to roles like db_datareader. Refer to your documentation from Step 3, ensuring you include all necessary permissions for the new Entra SQL users and that security policies remain enforced (a short example follows after Step 8 below).

Step 5: Update Programmatic Connections

Change your application connection strings from SQL authentication to managed identities and test each app for Microsoft Entra compatibility. Upgrade your drivers to these versions or newer:

JDBC driver version 7.2.0 (Java)
ODBC driver version 17.3 (C/C++, COBOL, Perl, PHP, Python)
OLE DB driver version 18.3.0 (COM-based applications)
Microsoft.Data.SqlClient 5.2.2+ (ADO.NET)
Microsoft.EntityFramework.SqlServer 6.5.0 (Entity Framework)

System.Data.SqlClient (SDS) doesn't support managed identity; switch to Microsoft.Data.SqlClient (MDS). If you need to port your applications from SDS to MDS, the following cheat sheet will be helpful: https://github.com/dotnet/SqlClient/blob/main/porting-cheat-sheet.md. Microsoft.Data.SqlClient also takes a dependency on other packages, most notably MSAL for .NET (version 4.56.0+). Here is an example of an Azure web application connecting to Azure SQL using managed identity.

Step 6: Validate No Local Auth Traffic

Be sure to switch all your connections to managed identity before you redeploy your Azure SQL logical servers with Microsoft Entra-only authentication turned on. Repeat the use of SQL Audit, just as you did in Step 1, but now to confirm that every connection has moved away from SQL authentication. Once your server is running with only Entra authentication, any connections still based on SQL authentication will not work, which could disrupt services. Test your systems thoroughly to verify that everything operates correctly.

Step 7: Enable Microsoft Entra-only & disable local auth

Once all your connections and applications use managed identity, you can disable SQL authentication by turning on Entra-only authentication via the Azure portal or the APIs.

Step 8: Enforce at scale (Azure Policy)

Additionally, after successful migration and validation, it is recommended to deploy the built-in Azure Policy across your subscriptions to ensure that no SQL resources use local authentication. During resource creation, Azure SQL instances will be required to have Microsoft Entra-only authentication enabled. This requirement can be enforced through Azure policies.
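To close the loop on Step 4, here is a hedged sketch of re-granting a migrated principal's permissions to its new Entra-based user, based on whatever the Step 3 inventory showed. The role names are standard fixed database roles, the user names reuse examples from this guide, and the schema and object names are placeholders only.

-- Placeholders: substitute the principals and objects recorded in Step 3
ALTER ROLE db_datareader ADD MEMBER [myapp4466e];
ALTER ROLE db_datawriter ADD MEMBER [myapp4466e];

-- Object- and schema-level grants carried over from the old SQL user (names are illustrative)
GRANT EXECUTE ON SCHEMA::Sales TO [myapp4466e];
GRANT SELECT ON OBJECT::dbo.Orders TO [ContosoUsersGroup];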
Best Practices for Entra-Enabled Azure SQL Applications

Use exponential backoff with decorrelated jitter for retrying transient SQL errors, and set a max retry cap to avoid resource drain.
Separate retry logic for connection setup and query execution.
Cache and proactively refresh Entra tokens before expiration.
Use Microsoft.Data.SqlClient v3.0+ with Azure.Identity for secure token management.
Enable connection pooling and use consistent connection strings.
Set appropriate timeouts to prevent hanging operations.
Handle token/auth failures with targeted remediation, not blanket retries.
Apply least-privilege identity principles; avoid global/shared tokens.
Monitor retry counts, failures, and token refreshes via telemetry.
Maintain auditing for compliance and security.
Enforce TLS 1.2+ (Encrypt=True, TrustServerCertificate=False).
Prefer pooled over static connections.
Log SQL exception codes for precise error handling.
Keep libraries and drivers up to date for the latest features and resilience.

References

Use this resource to troubleshoot issues with Entra authentication (previously known as Azure AD authentication): Troubleshooting problems related to Azure AD authentication with Azure SQL DB and DW | Microsoft Community Hub

To add Entra users from an external tenant, invite them as guest users to the Azure SQL Database's Entra administrator tenant. For more information on adding Entra guest users: Quickstart: Add a guest user and send an invitation - Microsoft Entra External ID | Microsoft Learn

Conclusion

Migrating to Microsoft Entra password-less authentication for Azure SQL Database is a strategic investment in security, compliance, and operational efficiency. By following this guide and adopting best practices, organizations can reduce risk, improve resilience, and future-proof their data platform in alignment with Microsoft's Secure Future Initiative.
Ollama on HTTPS for SQL Server

Here is a quick procedure to deploy an Ubuntu container with Ollama and expose its API over HTTPS. The goal is to allow a fast deployment, even for those unfamiliar with Docker or language models, making it easy to set up an offline platform for generating embeddings and using small language models.

This is particularly useful when testing SQL Server 2025 for fully on-premises use cases, since SQL Server only allows access to HTTPS endpoints. However, HTTP remains open for testing purposes. Please note that this example is CPU-based, as deploying with (integrated) GPU support involves additional, less straightforward steps. This example is provided solely to illustrate the concept, is not intended for production use, and comes without any guarantee of performance or security.

Prerequisites

To continue, you need Docker Desktop, WSL, and SQL Server 2025 (currently Release Candidate 1):
Docker Desktop
Install WSL | Microsoft Learn
SQL Server 2025 Preview | Microsoft Evaluation Center

Create a Dockerfile

First, create a working directory. In this example, C:\Docker\Ollama will be used. Simply create a file named Dockerfile (without an extension) and paste the following content into it.

FROM ubuntu:25.10

RUN apt update && apt install -y curl gnupg2 ca-certificates lsb-release apt-transport-https software-properties-common unzip nano openssl net-tools
RUN curl -fsSL https://ollama.com/install.sh | bash
RUN curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
RUN curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
RUN apt update && apt install -y caddy
RUN mkdir -p /etc/caddy/certs

RUN cat > /etc/caddy/certs/san.cnf <<EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[dn]
CN = 127.0.0.1
[req_ext]
subjectAltName = @alt_names
[alt_names]
IP.1 = 127.0.0.1
DNS.1 = localhost
EOF

RUN openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/caddy/certs/localhost.key -out /etc/caddy/certs/localhost.crt -config /etc/caddy/certs/san.cnf -extensions req_ext

RUN echo "https://:443 {\n tls /etc/caddy/certs/localhost.crt /etc/caddy/certs/localhost.key\n reverse_proxy localhost:11434\n}" >> /etc/caddy/Caddyfile

RUN echo "#!/bin/bash" > /usr/local/bin/entrypoint.sh && \
    echo "set -e" >> /usr/local/bin/entrypoint.sh && \
    echo "OLLAMA_HOST=0.0.0.0 ollama serve >> /var/log/ollama.log 2>&1 &" >> /usr/local/bin/entrypoint.sh && \
    echo "caddy run --config /etc/caddy/Caddyfile --adapter caddyfile >> /var/log/caddy.log 2>&1 &" >> /usr/local/bin/entrypoint.sh && \
    echo "tail -f /var/log/ollama.log /var/log/caddy.log" >> /usr/local/bin/entrypoint.sh && \
    chmod 755 /usr/local/bin/entrypoint.sh

ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

For your information, this file builds an image based on Ubuntu 25.10 and includes:
Ollama, for running the models
Caddy, for the reverse proxy
Creation of a certificate for the HTTPS endpoint on localhost

Create the container

After opening a PowerShell terminal, execute the following commands:

cd C:\Docker\Ollama

#Build the image from the Dockerfile.
docker build -t ollama-https .

#Create a container based on the image ollama-https
docker run --name ollama-https -d -it -p 443:443 -p 11434:11434 ollama-https

#Copy the certificate created into the current Windows directory
docker cp ollama-https:/etc/caddy/certs/localhost.crt .
# Install the certificate in Trusted Root Certification Authorities
Import-Certificate -FilePath "localhost.crt" -CertStoreLocation "Cert:\LocalMachine\Root"

#Check Https
(wget https://localhost).Content

#Check Http
(wget http://localhost:11434).Content

Ollama is now running. With a browser, connect to https://localhost

Retrieve Models

No model is retrieved when the image is created, as this depends on each use case, and for some models the size can be substantial. Here is a quick example for pulling an embedding model, Nomic, and a small language model, Phi3. See Ollama Search for available models.

docker exec ollama-https ollama pull nomic-embed-text
docker exec ollama-https ollama pull phi3:mini

A quick example with SQL Server 2025

A quick demonstration using the WideWorldImporters database (Wide World Importers sample database):

use [master]
GO
ALTER DATABASE WideWorldImporters SET COMPATIBILITY_LEVEL = 170 WITH ROLLBACK IMMEDIATE
GO
DBCC TRACEON(466, 474, 13981, -1)
GO

Note: With RC1, you can use the PREVIEW_FEATURES database-scoped configuration instead.

Declare an external model for embeddings:

use [WideWorldImporters]
GO
CREATE EXTERNAL MODEL NomicLocal
AUTHORIZATION dbo
WITH (
    LOCATION = 'https://localhost/api/embed',
    API_FORMAT = 'ollama',
    MODEL_TYPE = EMBEDDINGS,
    MODEL = 'nomic-embed-text'
)

To enable semantic search capabilities on StockItems, we will create a dedicated table to store the embeddings (no chunking in this example) along with a vector index optimized for cosine similarity:

use [WideWorldImporters]
GO
CREATE TABLE [Warehouse].[StockItemsEmbedding](
    StockItemEmbeddingID int identity(1,1) PRIMARY KEY,
    StockItemId int,
    SearchDetails nvarchar(max),
    Embedding vector(768))
GO
INSERT INTO [Warehouse].[StockItemsEmbedding]
SELECT si.StockItemID,
       si.SearchDetails,
       AI_GENERATE_EMBEDDINGS(si.SearchDetails USE MODEL NomicLocal) /* Generate embeddings from declared external model */
FROM [Warehouse].[StockItems] si
GO
/* Check */
SELECT * FROM [Warehouse].[StockItemsEmbedding]
GO
CREATE VECTOR INDEX IXV_1 ON [Warehouse].[StockItemsEmbedding] (Embedding) WITH (METRIC = 'cosine', TYPE = 'DiskANN')
GO

/* User Input */
DECLARE @UserInput varchar(max) = 'Which product is best suited for shipping small items?'
/* and Generate embeddings for user input */
DECLARE @UserInputV vector(768) = AI_GENERATE_EMBEDDINGS(@UserInput USE MODEL NomicLocal)
DECLARE @ModelInput nvarchar(max)
DECLARE @Payload nvarchar(max)
DECLARE @Response nvarchar(max)

/* Similarity Search on StockItems and Model Input creation */
SELECT @ModelInput = STRING_AGG('ProductDetails: ' + sie.SearchDetails + 'UnitPrice: ' + CAST(si.UnitPrice AS nvarchar(max)), ' \n\n')
FROM VECTOR_SEARCH(
        TABLE = [Warehouse].[StockItemsEmbedding] as sie,
        COLUMN = Embedding,
        SIMILAR_TO = @UserInputV,
        METRIC = 'cosine',
        TOP_N = 10
     )
JOIN [Warehouse].[StockItems] si ON si.StockItemId = sie.StockItemId

/* Generate payload for response generation */
SELECT @Payload = '{"model": "phi3:mini", "stream": false, "prompt":"You are acting as a customer advisor responsible for recommending the most suitable products based on customer needs, providing clear and personalized suggestions. Question : ' + @UserInput + '\n\nList of Items : ' + @ModelInput + '"}';

EXECUTE sp_invoke_external_rest_endpoint
    @url = 'https://localhost/api/generate',
    @method = 'POST',
    @payload = @Payload,
    @timeout = 230,
    @response = @Response OUTPUT;

PRINT JSON_VALUE(@Response, '$.result.response')

LangChain

You can also have a try with LangChain. It is the same demo with one small difference: no vector index is created on the vector store table.
The table has been modified afterwards, but only for demonstration purposes. Reference: SQLServer | 🦜️🔗 LangChain

# PREREQ
# sudo apt-get update && sudo apt-get install -y unixodbc
# sudo apt-get update
# sudo apt-get install -y curl gnupg2
# curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
# curl https://packages.microsoft.com/config/debian/11/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list
# sudo apt-get update
# sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18
# pip3 install langchain langchain-sqlserver langchain-ollama langchain-community

import pyodbc
from langchain_sqlserver import SQLServer_VectorStore
from langchain_ollama import OllamaEmbeddings
from langchain_ollama import ChatOllama
from langchain.schema import Document
from langchain_community.vectorstores.utils import DistanceStrategy

# Prompt for testing
_USER_INPUT = 'Which product is best suited for shipping small items?'

############### Params ##########################################
print("\033[93mSetting up variables...\033[0m")
_SQL_DRIVER = "ODBC Driver 18 for SQL Server"
_SQL_SERVER = "localhost\\SQL2K25"
_SQL_DATABASE = "WideWorldImporters"
_SQL_USERNAME = "lc"
_SQL_PASSWORD = "lc"
_SQL_TRUST_CERT = "yes"
_SQL_VECTOR_STORE_TABLE = "StockItem_VectorStore"  # Table name for vector storage
_MODIFY_TABLE_TO_USE_SQL_VECTOR_INDEX = True  # Vector index not considered currently in LangChain and structure does not match vector index requirements
_CONNECTION_STRING = f"Driver={{{_SQL_DRIVER}}};Server={_SQL_SERVER};Database={_SQL_DATABASE};UID={_SQL_USERNAME};PWD={_SQL_PASSWORD};TrustServerCertificate={_SQL_TRUST_CERT}"
_OLLAMA_API_URL = "https://localhost"
_OLLAMA_EMBEDDING_MODEL = "nomic-embed-text:latest"
_OLLAMA_EMBEDDING_VECTOR_SIZE = 768
_OLLAMA_SLM_MODEL = "phi3:mini"  # Model for SLM queries
###################################################################

# Define Ollama embeddings
embeddings = OllamaEmbeddings(
    model=_OLLAMA_EMBEDDING_MODEL,
    base_url=_OLLAMA_API_URL
)

conn = pyodbc.connect(_CONNECTION_STRING)
cursor = conn.cursor()

# Drop embeddings table if it exists
print("\033[93mDropping existing vector store table if it exists...\033[0m")
cursor.execute(f"DROP TABLE IF EXISTS Warehouse.{_SQL_VECTOR_STORE_TABLE};")

print("\033[93mConnecting to SQL Server and fetching data...\033[0m")
cursor.execute("SELECT StockItemId, SearchDetails, UnitPrice FROM Warehouse.StockItems;")
rows = cursor.fetchall()
print(f"\033[93mFound {len(rows)} records to process\033[0m")

# Create documents from the fetched data
documents = [
    Document(
        page_content=row.SearchDetails,
        metadata={
            "StockItemId": row.StockItemId,
            "UnitPrice": float(row.UnitPrice)  # Convert Decimal to float
        }
    )
    for row in rows
]
conn.commit()

# Creating vector store
print("\033[93mCreating vector store...\033[0m")
vector_store = SQLServer_VectorStore(
    connection_string=_CONNECTION_STRING,
    distance_strategy=DistanceStrategy.COSINE,  # If not provided, defaults to COSINE
    embedding_function=embeddings,
    embedding_length=_OLLAMA_EMBEDDING_VECTOR_SIZE,
    db_schema="Warehouse",
    table_name=_SQL_VECTOR_STORE_TABLE
)

print("\033[93mAdding to vector store...\033[0m")
try:
    vector_store.add_documents(documents)
    print("\033[93mSuccessfully added to vector store!\033[0m")
except Exception as e:
    print(f"\033[91mError adding documents: {e}\033[0m")

# Vector index not yet integrated in SQL Server VectorStore (drop auto-created nonclustered PK and generate int clustered PK)
if (_MODIFY_TABLE_TO_USE_SQL_VECTOR_INDEX):
    print("\033[93mModifying structure to create vector index...\033[0m")
    cursor.execute("DECLARE @AutoCreatedPK sysname, @SQL nvarchar(max);"
                   f"SELECT @AutoCreatedPK = name FROM sys.key_constraints WHERE type = 'PK' AND parent_object_id = object_id('Warehouse.{_SQL_VECTOR_STORE_TABLE}');"
                   f"SELECT @SQL = 'ALTER TABLE Warehouse.{_SQL_VECTOR_STORE_TABLE} DROP CONSTRAINT ' + @AutoCreatedPK + ';'"
                   "EXEC sp_executesql @SQL;"
                   f"ALTER TABLE Warehouse.{_SQL_VECTOR_STORE_TABLE} ADD Alt_Id int identity(1,1);"
                   f"ALTER TABLE Warehouse.{_SQL_VECTOR_STORE_TABLE} ADD CONSTRAINT PK_{_SQL_VECTOR_STORE_TABLE} PRIMARY KEY (Alt_Id);")
    conn.commit()
    print("\033[93mCreating vector index...\033[0m")
    cursor.execute(f"CREATE VECTOR INDEX IV_{_SQL_VECTOR_STORE_TABLE} ON [Warehouse].[{_SQL_VECTOR_STORE_TABLE}] (embeddings) WITH (METRIC = 'cosine', TYPE = 'DiskANN');")
    conn.commit()

# Generate prompt, then answer
print(f"\033[92mUser Input: {_USER_INPUT}\033[0m")
context = [
    {
        "Item": doc.page_content,
        "UnitPrice": doc.metadata.get("UnitPrice", None)
    }
    for doc in vector_store.similarity_search(_USER_INPUT, k=3)
]

llm = ChatOllama(model=_OLLAMA_SLM_MODEL, base_url=_OLLAMA_API_URL)
prompt = (
    f"You are acting as a customer advisor responsible for recommending the most suitable products based on customer needs, providing clear and personalized suggestions"
    f"Context: {context}\n\nQuestion: {_USER_INPUT}\n\n")
response = llm.invoke(prompt)
print(f"\033[36m{response.content}\033[0m")

Note: If using a devcontainer with VS Code, add "runArgs": [ "--network=host" ] to devcontainer.json to allow connections to "localhost", then import and install the previously created certificate:

docker cp C:\Docker\Ollama\localhost.crt <devcontainer name>:/usr/local/share/ca-certificates/localhost.crt
docker exec <devcontainer name> "update-ca-certificates"

Disclaimer

The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
SQL Server - Always On High Availability Group Setup

We have an Always On availability group setup with three database servers: two on the same subnet and one on a different subnet. The application server interacts with these database servers through the AG listener. Currently, this setup is running application version X, and we have received a new application version Y for upgrade/migration, which includes database changes. To perform the application update, please clarify the recommended approach:

1. Should I remove the database from the availability group and point to the primary replica to continue with the application update? or
2. Can I proceed with the application update while the Always On configuration remains active, pointing to the AG listener?
Why is SQL Server only storing 4000 characters in an NVARCHAR(MAX) column?

Hi Guys,

I'm trying to insert a string with 10,000 plain characters (just repeated 'A's) into a column defined as NVARCHAR(MAX) in SQL Server. But LEN(Content) always returns 4000, not 10,000. I've verified that the column is NVARCHAR(MAX) and used the N prefix for Unicode. Still, the data seems to be truncated. What could be causing this? Is there something I'm missing in how SQL Server handles large strings?

Tried this:

CREATE TABLE LargeTextExample (
    Id INT PRIMARY KEY IDENTITY(1,1),
    Content NVARCHAR(MAX)
);

DECLARE @LongText NVARCHAR(MAX);
SET @LongText = REPLICATE(N'A', 10000);

INSERT INTO LargeTextExample (Content) VALUES (@LongText);

SELECT LEN(Content) AS CharacterCount FROM LargeTextExample;

Thanks,
Tushar
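For reference, the behavior described above usually comes from REPLICATE rather than the column: REPLICATE returns the data type of its string argument, and N'A' is a plain (non-MAX) nvarchar literal, so the result is capped at 4,000 characters before it ever reaches the NVARCHAR(MAX) variable. Casting the input to NVARCHAR(MAX) avoids the cap; a minimal sketch against the table above:

DECLARE @LongText NVARCHAR(MAX);
-- The input to REPLICATE is now NVARCHAR(MAX), so the result is not truncated at 4,000 characters
SET @LongText = REPLICATE(CONVERT(NVARCHAR(MAX), N'A'), 10000);

INSERT INTO LargeTextExample (Content) VALUES (@LongText);

SELECT Id, LEN(Content) AS CharacterCount FROM LargeTextExample;  -- the new row shows 10000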
Create PowerApps Application with On-premises SQL Server

PowerApps is one of Microsoft's powerful low-code platforms, helping you build applications in minutes and letting both technical and non-technical makers implement apps that meet their requirements. PowerApps can receive data from different data sources such as Excel, SQL Server, SharePoint, and cloud storage like Google Drive, OneDrive, etc.

For on-premises data sources, we traditionally write code to connect the application to the data source. But it is also possible to build a PowerApps app that receives data from an on-premises SQL Server. In this article I will show you how to connect on-premises SQL Server to PowerApps.

Pre-Requisites

1) On-premises SQL Server
2) PowerApps subscription
3) On-premises data gateway
4) Internet connection

On-Premises Data Gateway for Power Apps

The on-premises data gateway is a bridge that provides secure communication between local data sources and cloud applications. To learn more about the on-premises data gateway: https://learn.microsoft.com/en-us/power-apps/maker/canvas-apps/gateway-reference

SQL Server on a Windows 10 Machine

I have SQL Server installed on my Windows 10 machine, which is a local machine with an active internet connection. If you are new to SQL Server installation and configuration, I have shared other posts that show how to install and configure SQL Server 2022 in your test environment.

Download SQL Server 2022: https://info.microsoft.com/ww-landing-sql-server-2022.html?culture=en-us&country=us

To confirm your installed version:

SELECT @@VERSION

I have created a table named DemoData and inserted some sample data.

Let's configure the On-Premises Data Gateway

To download the on-premises data gateway for PowerApps, use the link below: https://powerapps.microsoft.com/en-us/downloads/. Once it's downloaded, install it on the local machine; it will take some time to install. When the installation succeeds, it will prompt a dialog box to enter the Microsoft account details. Use the same account which you used to create PowerApps. In the next step, create a gateway with a valid security and recovery key. Here I named my gateway DEMOPOWERAPPS. Click the Configure button to apply the configuration and close the dialog box; now it's done.

Now it's time to build the app in PowerApps

Go to the PowerApps portal to make the app: make.powerapps.com. I am going to create the PowerApps app with a SQL data source. Click SQL, add a New Connection, and select the connection source as SQL database (SQL Server). In the authentication type, select SQL Server Authentication from the drop-down list. In the source selection pane, select Connect using on-premises data gateway as the data source for SQL Server:

SQL Server Name - Windows 10 local PC name
SQL Database Name - POWERAPPSDEMO
Username - demouser
Password - [Your SQL User Password Here]

At the bottom you need to select the installed gateway; it will show the gateways you installed with the Microsoft account which you used to create PowerApps. Here my installed DEMOPOWERAPPS gateway is listed. If your gateway is not listed, please refresh the gateway list. Once you select the correct gateway, click Create to link the data source to PowerApps. It will show the available tables once the gateway connection succeeds; here my table DemoData is showing, and I am going to use the DemoData table to connect with PowerApps.
If you want to change or create a new table in the same database, you can either create it from here or choose a different one. Click Connect to link the on-premises database with PowerApps. It's done, and the app is automatically created now.

Without making any additional changes to the app, I will open the preview to check whether it is communicating correctly with the SQL database on my laptop. The data that I entered manually using SQL commands appears here, which means the app is connected correctly to the on-premises SQL database.

By adding an edit form in PowerApps, you can insert data from PowerApps into the on-premises SQL Server. Try this yourself and let me know if you have any issues connecting an on-premises SQL Server to PowerApps.

Happy coding!
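If you want to reproduce the SQL side of this walkthrough, here is a minimal sketch of the POWERAPPSDEMO database and a DemoData table with a few sample rows. The column names are assumptions for illustration only, since the article shows the table contents through screenshots; note that the PowerApps SQL connector needs a primary key on the table if you want to insert or edit rows from the app.

CREATE DATABASE POWERAPPSDEMO;
GO
USE POWERAPPSDEMO;
GO
-- Column names below are illustrative assumptions
CREATE TABLE dbo.DemoData (
    Id INT IDENTITY(1,1) PRIMARY KEY,   -- a primary key is required for editing from PowerApps
    ItemName NVARCHAR(100) NOT NULL,
    Quantity INT NOT NULL
);
GO
INSERT INTO dbo.DemoData (ItemName, Quantity)
VALUES (N'Keyboard', 10), (N'Mouse', 25), (N'Monitor', 5);
GO
SELECT * FROM dbo.DemoData;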