Creating a Contained Availability Group and Enabling Database Creation via CAG Listener
A Contained Availability Group (CAG) is designed to simplify high availability and disaster recovery by encapsulating system databases (master, msdb) within the availability group itself. This means that logins, jobs, credentials, and other metadata are automatically replicated across replicas, eliminating the need for manual synchronization and reducing operational complexity. Starting with SQL Server 2025 CU1, you can create or restore databases directly through the CAG listener - without connecting to the physical instance - by enabling a session context key.

Why Contained AGs Matter

- Self-contained HA unit: each CAG has its own copy of master and msdb, making it independent of the physical SQL instance.
- Simplified failover: when the AG fails over, all associated metadata moves with it, so applications continue to function without manual intervention.
- Improved automation: supports scenarios where direct access to the SQL instance is restricted, enabling operations through the AG listener.
- Enhanced security: reduces exposure by limiting instance-level access; operations can be scoped to the AG context.
- Streamlined management: eliminates the need for login and job replication scripts across replicas.

Step 1: Prepare the Database for the Availability Group

This example uses SQL Server on Linux; however, the steps are identical for SQL Server on Windows. In this walkthrough, an existing database named CAGDB is added to a Contained Availability Group (CAG). Before adding the database to the CAG, verify that it is configured for the FULL recovery model and perform a full database backup.

```sql
ALTER DATABASE CAGDB SET RECOVERY FULL;
GO
BACKUP DATABASE CAGDB
    TO DISK = N'/var/opt/mssql/backups/CAGDB.bak'
    WITH INIT, COMPRESSION;
GO
```

Note: On Linux, ensure that the backup target directory exists and is owned by the mssql user.
```bash
sudo mkdir -p /var/opt/mssql/backups
sudo chown mssql:mssql /var/opt/mssql/backups
```

Step 2: Create the Contained Availability Group

On the primary Linux SQL Server node, run:

```sql
CREATE AVAILABILITY GROUP [CAGDemo]
WITH (CLUSTER_TYPE = EXTERNAL, CONTAINED)
FOR DATABASE [CAGDB]
REPLICA ON
    N'node1' WITH (
        ENDPOINT_URL = N'tcp://node1:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = EXTERNAL,
        SEEDING_MODE = AUTOMATIC
    ),
    N'node2' WITH (
        ENDPOINT_URL = N'tcp://node2:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = EXTERNAL,
        SEEDING_MODE = AUTOMATIC
    );
GO
```

Connect to each secondary replica and join the AG:

```sql
ALTER AVAILABILITY GROUP [CAGDemo] JOIN WITH (CLUSTER_TYPE = EXTERNAL);
ALTER AVAILABILITY GROUP [CAGDemo] GRANT CREATE ANY DATABASE;
```

Step 3: Configure the Listener and Connect

Create a listener, then connect to it using SSMS or sqlcmd:

```sql
ALTER AVAILABILITY GROUP [CAGDemo]
ADD LISTENER N'CAGDemoListener' (
    WITH IP ((N'*.*.*.*', N'255.255.255.0')),
    PORT = 1453
);
GO
```

Step 4: Connect to CAGDemoListener and Attempt Database Creation (Failure)

When connected through the listener:

```sql
CREATE DATABASE TestCAGDB;
```

Result:

Msg 262, Level 14, State 1: CREATE DATABASE is not allowed in this context.

This happens because database creation is blocked by default in a contained AG session.

Step 5: Enable Database Creation in the CAG Session

In the master database of the CAG session, run:

```sql
EXEC sp_set_session_context @key = N'allow_cag_create_db', @value = 1;
```

This enables database creation for your current session.

Step 6: Retry Database Creation (Success)

```sql
CREATE DATABASE TestCAGDB;
```

Result: the database is created successfully within the contained AG context. Users with the dbcreator role in the CAG context can perform this action.

Step 7: Back Up the Database
```sql
ALTER DATABASE TestCAGDB SET RECOVERY FULL;
BACKUP DATABASE TestCAGDB TO DISK = N'/var/opt/mssql/data/backups/TestCAGDB.bak';
```

Step 8: Add the Database to the CAG

```sql
ALTER AVAILABILITY GROUP [CAGDemo] ADD DATABASE [TestCAGDB];
```

Optional: To automate Steps 4-8, you can create a stored procedure (for example, [dbo].[sp_cag_create_db]) and execute it while connected through the Contained Availability Group (CAG) listener context.

```sql
CREATE OR ALTER PROCEDURE [dbo].[sp_cag_create_db]
    @database_name sysname,
    @createdb_sql NVARCHAR(MAX) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @fIsContainedAGSession int;
    EXECUTE @fIsContainedAGSession = sys.sp_MSIsContainedAGSession;

    IF (@fIsContainedAGSession = 1)
    BEGIN
        DECLARE @SQL NVARCHAR(MAX);

        EXEC sp_set_session_context @key = N'allow_cag_create_db', @value = 1;

        IF @createdb_sql IS NULL
            SET @SQL = 'CREATE DATABASE ' + QUOTENAME(@database_name);
        ELSE
            SET @SQL = @createdb_sql;
        PRINT @SQL;
        EXEC sp_executesql @SQL;

        SET @SQL = 'ALTER DATABASE ' + QUOTENAME(@database_name) + ' SET RECOVERY FULL';
        PRINT @SQL;
        EXEC sp_executesql @SQL;

        SET @SQL = 'BACKUP DATABASE ' + QUOTENAME(@database_name) + ' TO DISK = N''NUL''';
        PRINT @SQL;
        EXEC sp_executesql @SQL;

        DECLARE @AG_Name sysname;
        SET @AG_Name = (SELECT name
                        FROM sys.availability_groups ags
                        INNER JOIN sys.dm_exec_sessions des
                            ON ags.group_id = des.contained_availability_group_id
                        WHERE @@SPID = des.session_id);

        SET @SQL = 'USE master; ALTER AVAILABILITY GROUP ' + QUOTENAME(@AG_Name)
                 + ' ADD DATABASE ' + QUOTENAME(@database_name);
        PRINT @SQL;
        EXEC sp_executesql @SQL;

        EXEC sp_set_session_context @key = N'allow_cag_create_db', @value = 0;
    END
    ELSE
    BEGIN
        RAISERROR('This can only be used with a contained availability group connection.', 16, 1);
    END
END
GO
```

At a high level, this procedure simplifies database onboarding into a CAG by orchestrating the following actions as a single workflow:

- Creating (or restoring) the database
- Setting the recovery model to FULL
- Taking the initial backup required for availability group
seeding
- Adding the database to the target Contained Availability Group

Example: Create a database

Execute the following command to create NewTestDB and add it to the target CAG.

```sql
EXEC [dbo].[sp_cag_create_db] @database_name = N'NewTestDB';
```

Example: Restore a database

The stored procedure also supports an optional parameter, @createdb_sql, which allows you to provide a custom SQL statement to create or restore a database (for example, restoring from a backup). Once the database backup file exists and is accessible to SQL Server, you can use this parameter to perform the operation.

Important: When @createdb_sql is used, the procedure executes that SQL statement directly. Ensure the SQL statement comes from a secured and trusted source.

```sql
DECLARE @restoreSql NVARCHAR(MAX);
SET @restoreSql = N'
RESTORE DATABASE AdventureWorks2022
FROM DISK = ''/var/opt/mssql/backups/AdventureWorks2022.bak''
WITH MOVE ''AdventureWorks2022'' TO ''/var/opt/mssql/data/AdventureWorks2022.mdf'',
     MOVE ''AdventureWorks2022_log'' TO ''/var/opt/mssql/data/AdventureWorks2022_log.ldf'',
     RECOVERY;
';
EXEC dbo.sp_cag_create_db
    @database_name = N'AdventureWorks2022',
    @createdb_sql = @restoreSql;
```

Conclusion

Contained Availability Groups represent a major step forward in simplifying high availability for SQL Server. By encapsulating system databases within the AG context, they eliminate the complexity of synchronizing logins, jobs, and credentials across replicas. With the new capability in SQL Server 2025 CU1 to create or restore databases directly through the AG listener using sp_set_session_context, organizations can streamline automation and reduce operational overhead.

References

- What Is a Contained Availability Group
- Deploy a Pacemaker Cluster for SQL Server on Linux

Writing a great session abstract for FabCon & SQLCon
Important Dates

- FabCon/SQLCon Europe in Barcelona runs September 28 - October 1, 2026
- Workshop Call for Content: open February 17, 2026 to March 23, 2026
- Breakout Session Call for Content: open February 17, 2026 to April 17, 2026

When submitting a session to a conference, consider the following:

Title: The title should answer the attendee's question: what's in it for me? Why should I attend this session? Is it going to make me better at my job? Will it save my company money? Will it make my reports more organized or my database faster, more secure, or more modern? The title needs to make sense; it needs to inform. It doesn't need to be funny or contain a dad joke. It CAN, but that's secondary. It shouldn't just be the name of a product or even "Learn [product name]", because you can't teach me everything about it in 60 minutes or less.

Abstract: The abstract should contain three things:

- It should define the problem you're trying to solve.
- It should introduce the solution.
- It should briefly describe what attendees will learn about the solution, in the time allotted.

The problem can be that the attendee doesn't know how to create great visuals, performance tune, connect to a Lakehouse, etc. If so, consider titles like "Creating Great Visuals Using...", "Cutting Costs by Optimizing Your...", or "Understanding Governance in...".

The abstract can introduce a new feature or concept. If so, then the problem is "there's a new thing that you don't know about yet." The solution is "this feature does X, Y & Z." Then tell them that they'll learn everything about X, or a little bit about X, Y & Z.

If you include an acronym, make sure you spell it out first. Not everyone is going to know what it means, and you don't want only a room full of attendees that already know everything. Avoid using too many buzzwords in your title or abstract.

Drop it into Copilot for a final check and use prompts to help improve your work. Use "make it more precise" if it's over the character count.
Try "make it more professional" if you think it's more casual than you intended.

Final check: Before you hit submit, run your abstract by a friend.

- Does it make sense to the technical and/or non-technical?
- Do the grammar & spelling check out? With allowances for English not being your first language, the abstract should show that you can effectively communicate the topic to an audience.
- Does the level assigned match the abstract?
- Is it in the right track? Most calls for content provide definitions of what's a good fit for each topic. You can find the topic definitions for FabCon/SQLCon Europe here.
- Can you do ALL THIS in 60 minutes?
- Does it fit within the character count?

Lastly, a few "don'ts":

- Don't include your name in the abstract if you know the review is done blind.
- Don't add it to every category or track. Be mindful.
- Don't email organizers to say you've submitted a session. They know.
- Don't demand feedback the very day you're declined. Well, never demand it, but waiting a week to politely ask never hurts. Respect the decision if they aren't able to offer you feedback. Remember that 800 people might be asking the same question.
- Don't be entitled. No one *stole* your spot. It was never yours to begin with.
- Don't use AI to write your entire abstract. Reviewers typically know when it's not written by a human, or there are tools that help them check. If you can't convey a concept on your own in 400 characters, how can we trust that you can speak on the concept for a full hour?

Join us at SQLCon 2026
SQLCon 2026, the premier Microsoft SQL Community Conference, is co-located with the Microsoft Fabric Community Conference (FabCon) from March 16-20, 2026! Register today!

This year, your Azure Data Community worlds collide, bringing the SQL and Fabric communities together for a brand-new experience. If you're passionate about SQL Server, Azure SQL, SQL in Fabric, SQL Tools, migration and modernization, database security, or building AI-powered apps with SQL, SQLCon 2026 is the place for you. We're bringing you an amazing keynote in the State Farm Arena, 50+ breakout sessions, and 4 expert-led workshops - on SQL alone - designed to help you optimize, innovate, and connect!

Join us in Atlanta, where we're going to renew the SQL Community spirit: rebuild and reconnect with your networks, friendships, and career-building opportunities. SQLCon is just one part of our renewed commitment to connect at conferences, user groups, online, and at regional events. In a few words, we miss you.

Why are SQLCon + FabCon better together?

- One registration, double the value: Register for either conference and get full access to both - mix and match sessions, keynotes, and community events to fit your interests.
- Shared spaces, shared energy: Enjoy the same expo hall, registration desk, conference app, and community lounge. Network with peers across the data platform spectrum.
- Unforgettable experiences: Join us for both keynotes at the State Farm Arena and celebrate at the legendary attendee party at the Georgia Aquarium.

Register today and save $200 with code SQLCMTY200. Register Now!

Let's build the future of data together. See you at SQLCon + FabCon!

SQL + AI: Get ready for a fun new experience, launching next week!
The SQL community is leveling up! We’re launching a lightweight, beginner-friendly SQL + AI contest designed to spark awareness of SQL Server 2025’s new AI capabilities - from connecting to an AI model, to generating embeddings, to exploring what’s possible when natural language intelligence meets your database.

Building on the momentum, each week in February unlocks a new chapter of the story - from your first steps connecting SQL to AI, to exploring embeddings, to experimenting with lightweight RAG patterns, and finally orchestrating full end-to-end workflows. This will be hands-on, guided, and fun!

You don’t need to be an AI expert to participate. Each challenge builds on the last, helping you discover what’s possible when SQL’s new capabilities meet real-world scenarios - all in a fun, approachable format designed to spark curiosity and get people talking.

Join us for some SQL fun starting next week, February 5th! Stay tuned to this post for more details! Get ready to build, discover, and play. SQL + AI is here.

Announcing GA of SQL Server 2025 on RHEL 10 and Ubuntu 24.04, with additional enhancements for Linux deployments
We are excited to announce the General Availability (GA) of SQL Server 2025 on Red Hat Enterprise Linux (RHEL) 10 and Ubuntu 24.04, starting with the CU1 release. This milestone empowers enterprises to deploy SQL Server 2025 on the latest Linux distributions, ensuring robust compatibility, enhanced security, and optimal performance for mission-critical workloads. The GA reinforces our commitment to delivering a modern, secure, and AI-ready database platform across diverse Linux environments.

CU1 Release Highlights

Production-Ready Support: GA support for RHEL 10 and Ubuntu 24.04, enabling production deployments on the newest Linux platforms.

Important: Update your SQL Server repository. To ensure you are installing the production-ready GA version of SQL Server 2025, it is essential to update your repository from mssql-server-preview.repo to mssql-server-2025.repo. Continuing to use the preview repository may result in installing pre-release builds that are not intended for production workloads. You can proceed with the installation steps as described in the official documentation:

- RHEL: Install SQL Server on Linux
- Ubuntu: Install SQL Server on Linux

Enable database creation or restoration in Contained Availability Group sessions

You can now use the session context key 'allow_cag_create_db', set via the existing stored procedure sp_set_session_context, to enable database creation and restoration directly within a contained availability group (CAG) session through the CAG listener. Only users with the dbcreator role can create databases in a CAG session, and only users with the db_owner or sysadmin role can restore databases.

Example:

```sql
EXEC sp_set_session_context @key = N'allow_cag_create_db', @value = 1;
```

This command enables the feature for your session. To disable it, set @value = 0.

Enhanced Observability

SQL Server 2025 on Linux introduces new Dynamic Management Views (DMVs) that provide comprehensive, system-level insights.
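The new OS-level views - sys.dm_os_linux_cpu_stats, sys.dm_os_linux_disk_stats, and sys.dm_os_linux_net_stats - can be queried like any other DMV. A quick host-level snapshot looks like this (a sketch; the exact column lists are not reproduced here, so SELECT * is used):

```sql
-- Quick host-level snapshot on SQL Server 2025 on Linux.
-- Column lists vary; SELECT * keeps this sketch safe on any CU1 build.
SELECT * FROM sys.dm_os_linux_cpu_stats;
SELECT * FROM sys.dm_os_linux_disk_stats;
SELECT * FROM sys.dm_os_linux_net_stats;
```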
These DMVs collect and expose metrics from the underlying operating system, allowing administrators to monitor and diagnose not only SQL Server’s performance, but also the impact of other processes running on the same host. This holistic visibility helps distinguish between issues caused by SQL Server and those originating from other workloads or infrastructure bottlenecks, enabling more effective troubleshooting and optimization for mission-critical deployments.

- sys.dm_os_linux_cpu_stats: Enables users to identify CPU saturation, investigate I/O waits, analyze system responsiveness, and correlate SQL Server performance with broader OS activity, helping to separate database-specific bottlenecks from infrastructure-wide challenges.
- sys.dm_os_linux_disk_stats: Facilitates detection of log flush slowness, assessment of checkpoint or read-heavy workloads, and identification of external I/O pressure from noisy neighbors on shared hosts, helping teams optimize storage tiers and queue settings.
- sys.dm_os_linux_net_stats: Provides real-time visibility into network interface statistics, enabling proactive monitoring and troubleshooting of connectivity and throughput issues.

Collectively, these DMVs equip database professionals with actionable, OS-level metrics to maintain stability, optimize performance, and ensure reliability in mission-critical Linux deployments.

References

- What’s New in SQL Server 2025
- Contained Availability Group

You can share your feedback using any of the following methods:

- Submit your ideas on Azure Ideas (use the SQL Server on Linux group on the left side of the page).
- Alternatively, you can open issues at Issues · microsoft/mssql-docker (github.com) on GitHub.

We hope you give SQL Server 2025 CU1 on RHEL 10 or Ubuntu 24.04 a try - and we look forward to hearing what you think!

Run a SQL Query with Azure Arc
Hi All,

In this article, you can find a way to retrieve database permissions from all your onboarded databases through Azure Arc. This idea was born from a customer request around maintaining a standard permission set in a very wide environment (about 1000 SQL Servers).

This solution is based on Azure Arc, so first you need to onboard your SQL Servers to Azure Arc and enable the SQL Server extension. If you want to test Azure Arc in a test environment, you can use Azure Jumpstart; in this repo you will find ready-to-deploy ARM templates that deploy demo environments. The other solution components are an Automation Account, Log Analytics, and a Data Collection Rule / Endpoint. Here is a short recap of the purpose of each component:

- Automation Account: with this resource you can run and schedule a PowerShell script, and you can also store credentials securely.
- Log Analytics workspace: here you will create a custom table and store all the data that comes from the script.
- Data Collection Endpoint / Data Collection Rule: enables you to open a public endpoint that allows you to ingest the collected data into the Log Analytics workspace.

In this section you will discover how I composed the six phases of the script.

Obtain the bearer token and authenticate on the portal: first of all you need to authenticate on the Azure portal to get all the SQL instances and to obtain the token used to send your assessment data to Log Analytics.

```powershell
$tenantId = "XXXXXXXXXXXXXXXXXXXXXXXXXXX"
$cred = Get-AutomationPSCredential -Name 'appreg'
Connect-AzAccount -ServicePrincipal -Tenant $tenantId -Credential $cred
$appId = $cred.UserName
$appSecret = $cred.GetNetworkCredential().Password
$endpoint_uri = "https://sampleazuremonitorworkspace-weu-a5x6.westeurope-1.ingest.monitor.azure.com" # Logs ingestion URI for the DCR
$dcrImmutableId = "dcr-sample2b9f0b27caf54b73bdbd8fa15908238799" # the immutableId property of the DCR object
$streamName = "Custom-MyTable"
$scope = [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default")
$body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials"
$headers = @{"Content-Type" = "application/x-www-form-urlencoded"}
$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
$bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token
```

Get all the SQL instances: in my example I took all the instances; you can also use a tag to filter some resources. For example, if you want to assess only the production environment, you can use a tag as a filter.

```powershell
$servers = Get-AzResource -ResourceType "Microsoft.AzureArcData/SQLServerInstances"
```

When you have all the SQL instances, you can run your T-SQL query to obtain all the permissions. Remember, here we are looking for permissions, but you can use this approach for any query you want, or in other situations where you need to run a command on a generic server.

```powershell
$SQLCmd = @'
Invoke-SQLcmd -ServerInstance . -Query "USE master;
BEGIN
IF LEFT(CAST(Serverproperty('ProductVersion') AS VARCHAR(1)),1) = '8'
begin
  IF EXISTS (SELECT TOP 1 * FROM tempdb.dbo.sysobjects (nolock) WHERE name LIKE '#TUser%')
  begin
    DROP TABLE #TUser
  end
end
ELSE
begin
  IF EXISTS (SELECT TOP 1 * FROM tempdb.sys.objects (nolock) WHERE name LIKE '#TUser%')
  begin
    DROP TABLE #TUser
  end
end
CREATE TABLE #TUser (DBName SYSNAME, [Name] SYSNAME, GroupName SYSNAME NULL, LoginName SYSNAME NULL, default_database_name SYSNAME NULL, default_schema_name VARCHAR(256) NULL, Principal_id INT);
IF LEFT(CAST(Serverproperty('ProductVersion') AS VARCHAR(1)),1) = '8'
  INSERT INTO #TUser
  EXEC sp_MSForEachdb '
    SELECT ''?'' as DBName, u.name As UserName,
           CASE WHEN (r.uid IS NULL) THEN ''public'' ELSE r.name END AS GroupName,
           l.name AS LoginName, NULL AS Default_db_Name, NULL as default_Schema_name, u.uid
    FROM [?].dbo.sysUsers u
    LEFT JOIN ([?].dbo.sysMembers m JOIN [?].dbo.sysUsers r ON m.groupuid = r.uid) ON m.memberuid = u.uid
    LEFT JOIN dbo.sysLogins l ON u.sid = l.sid
    WHERE (u.islogin = 1 OR u.isntname = 1 OR u.isntgroup = 1) and u.name not in (''public'',''dbo'',''guest'')
    ORDER BY u.name '
ELSE
  INSERT INTO #TUser
  EXEC sp_MSforeachdb '
    SELECT ''?'', u.name,
           CASE WHEN (r.principal_id IS NULL) THEN ''public'' ELSE r.name END GroupName,
           l.name LoginName, l.default_database_name, u.default_schema_name, u.principal_id
    FROM [?].sys.database_principals u
    LEFT JOIN ([?].sys.database_role_members m JOIN [?].sys.database_principals r ON m.role_principal_id = r.principal_id) ON m.member_principal_id = u.principal_id
    LEFT JOIN [?].sys.server_principals l ON u.sid = l.sid
    WHERE u.TYPE <> ''R'' and u.TYPE <> ''S'' and u.name not in (''public'',''dbo'',''guest'')
    order by u.name ';
SELECT DBName, Name, GroupName, LoginName
FROM #TUser
WHERE Name not in ('information_schema') and GroupName not in ('public')
ORDER BY DBName, [Name], GroupName;
DROP TABLE #TUser;
END"
'@

$command = New-AzConnectedMachineRunCommand -ResourceGroupName "test_query" -MachineName $server1 -Location "westeurope" -RunCommandName "RunCommandName" -SourceScript $SQLCmd
```

In a second, you will receive the output of the command, and you must send it to the Log Analytics workspace (LAW). In this phase, you can also review the output before sending it, for example removing some text or filtering some results. In my case, I'm adding the information about the server where the script runs to each record.

```powershell
$array = ($command.InstanceViewOutput -split '\r?\n' | Where-Object { $_.Trim() }) | ForEach-Object {
    $line = $_ -replace '\\', '\\\\'
    # ... additional per-record processing, such as appending the server name to each record
}
$array = $array | Where-Object { $_ -notmatch "DBName,Name,GroupName,LoginName" } | Where-Object { $_ -notmatch "------" }
```

The last phase is designed to send the output to the Log Analytics workspace using the DCE / DCR.

```powershell
$staticData = @"
[{
  "TimeGenerated": "$currentTime",
  "RawData": "$raw"
}]
"@
$body = $staticData
$headers = @{"Authorization" = "Bearer $bearerToken"; "Content-Type" = "application/json"}
$uri = "$endpoint_uri/dataCollectionRules/$dcrImmutableId/streams/$($streamName)?api-version=2023-01-01"
$rest = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers
```

When the data arrives in the Log Analytics workspace, you can query it, and you can create a dashboard or, why not, an alert.

Now you will see how you can implement this solution. For the Log Analytics workspace, DCE, and DCR, you can follow the official docs: Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Resource Manager templates) - Azure Monitor | Microsoft Learn

After you create the DCR and the Log Analytics workspace with its custom table, you can proceed with the Automation Account. Create an Automation Account using the creation wizard; you can proceed with the default parameters. When the Automation Account creation is completed, you can create a credential in the Automation Account.
This allows you to avoid exposing the credentials used to connect to Azure. You can insert the enterprise application ID and the key here.

Now you are ready to create the runbook (basically the script that we will schedule). You can give it any name you want and click Create. Then go to the Automation Account, open Runbooks, and choose Edit in Portal; you can copy your own script or the script in this link. Remember to replace your tenant ID (you will find it in the Entra ID section) and the enterprise application.

You can test it using the Test Pane function, and when you are ready you can publish it and link a schedule, for example daily at 5 AM.

Remember, today we talked about database permissions, but the scenarios are endless: checking a requirement, deploying a small fix, or removing/adding a configuration - at scale. In the end, as you can see, Azure Arc is not just another agent; it is a chance to empower every environment (and every other cloud provider 😉) with Azure technology.

See you in the next techie adventure.

**Disclaimer** The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

GitHub Copilot + SQL Server: Understanding the Security Analyzer
What problem is this trying to solve?

Many security issues in applications come from the database layer: poorly written queries, dynamic SQL, or code that exposes more data than it should. These problems are often hard to spot, especially in large or older codebases.

The MSSQL extension for VS Code (v1.37+) now integrates GitHub Copilot with a dedicated chat participant: mssql. One of its most useful capabilities is the Security Analyzer, which reviews your T-SQL code and highlights potential security weaknesses.

This is not just a generic AI model reading text. The tool connects to your SQL Server or Azure SQL database and uses the real context of your environment: your schema, tables, views, and stored procedures. That context allows it to give much more precise and relevant guidance.

Where does it work?

The Security Analyzer supports:

- SQL Server 2019, 2022, 2025 (Windows, Linux, containers)
- Azure SQL Database
- Azure SQL Managed Instance
- SQL database in Fabric

If you run a mix of on-premises, cloud, or older environments, you can still use the same tool and interface across them.

What can the Security Analyzer do?

Based on the official documentation and early testing, typical use cases include:

1️⃣ Detecting SQL injection risks: It reviews stored procedures and queries to find unsafe dynamic SQL, string concatenations used to build queries, or risky use of EXEC. These patterns are common entry points for SQL injection attacks.

2️⃣ Identifying data overexposure: It can point out views or queries that return sensitive columns (such as personal data or credentials) without masking or filtering them appropriately.

3️⃣ Recommending stronger protections: It suggests improvements such as encrypting connections, using Always Encrypted, applying Dynamic Data Masking, or preferring Entra ID authentication instead of storing credentials in code or configuration.
4️⃣ Illustrating how an attack might work: In some cases, it can generate realistic SQL injection payload examples based on your schema. This helps you understand the practical impact of a vulnerability, not just the theory.

How to try it

You will need:

- VS Code with the MSSQL extension (v1.37+)
- A GitHub Copilot subscription
- A connection to a SQL Server or Azure SQL database (a dev database is recommended)

Sample DB: https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v1.0

As a starting point, connect to a sample or development database (for example, AdventureWorks). Then open the Copilot chat and try prompts such as:

- "@mssql Review the stored procedure SalesLT.uspGetCustomerOrderHistory for potential SQL injection vulnerabilities"
- "@mssql What security best practices should I verify for the SalesLT schema?"

The tool will analyze the referenced objects and return recommendations based on the real structure of your database. It is possible that the AI-generated content is incorrect. You remain responsible for reviewing, validating, and approving it before any use. Do not rely on this output without thorough human verification. Not intended for production use.

Important limitations

The Security Analyzer is helpful, but it has boundaries you should be aware of:

- Conversational, not a batch scanner: There is no built-in "scan everything" button. To review many procedures, you need to guide it or script interactions.
- Depends on context: If it is not connected to your database, it falls back to more generic suggestions that may be less useful.
- Can be wrong: Like all large language models, it can occasionally refer to objects that do not exist or misinterpret a situation when the context is incomplete. Always review its advice before making changes.
- Not a formal security audit: It is designed to help in day-to-day development and maintenance, not to serve as a compliance or certification tool.
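To see the kind of pattern the analyzer is meant to flag, compare a concatenated dynamic SQL procedure with a parameterized rewrite. This is an illustrative sketch: the table and procedure names here are hypothetical, not taken from the extension's documentation.

```sql
-- Vulnerable pattern: user input concatenated straight into dynamic SQL.
-- An input such as  '; DROP TABLE Customers; --  changes the query's meaning.
CREATE OR ALTER PROCEDURE dbo.usp_FindCustomer_Unsafe
    @LastName NVARCHAR(100)
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT CustomerID, LastName FROM dbo.Customers WHERE LastName = ''' + @LastName + N'''';
    EXEC (@sql);  -- risky use of EXEC with concatenated input
END
GO

-- Safer rewrite: the input travels as a typed parameter, never as SQL text.
CREATE OR ALTER PROCEDURE dbo.usp_FindCustomer_Safe
    @LastName NVARCHAR(100)
AS
BEGIN
    EXEC sp_executesql
        N'SELECT CustomerID, LastName FROM dbo.Customers WHERE LastName = @p',
        N'@p NVARCHAR(100)',
        @p = @LastName;
END
GO
```

Pointing the chat participant at a procedure like the first one, in a throwaway dev database, is a reasonable smoke test for what the analyzer reports.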
Learn more

- Official documentation: Security Analyzer - GitHub Copilot for MSSQL: https://learn.microsoft.com/sql/tools/visual-studio-code-extensions/github-copilot/security-analyzer
- Extension overview: GitHub Copilot for MSSQL Extension: https://learn.microsoft.com/sql/tools/visual-studio-code-extensions/github-copilot/overview

Deprecation Announcement: SSRS/PBIRS and SSAS Management Packs for SCOM
Microsoft is announcing the deprecation of the System Center Operations Manager (SCOM) Management Packs (MPs) for:

- SQL Server Reporting Services (SSRS) and Power BI Report Server (PBIRS)
- SQL Server Analysis Services (SSAS)

These management packs will remain available through January 2027, after which they will no longer be supported.

Announcing SQLCon 2026: Better Together with FabCon!
We’re thrilled to unveil SQLCon 2026, the premier Microsoft SQL Community Conference, co-located with the Microsoft Fabric Community Conference (FabCon) from March 16-20, 2026! This year, we’re bringing the best of both worlds under one roof, uniting the vibrant SQL and Fabric communities for a truly next-level experience.

Whether you’re passionate about SQL Server, Azure SQL, SQL in Fabric, SQL Tools, migration and modernization, database security, or building AI-powered apps with SQL, SQLCon 2026 has you covered. Dive into 50+ breakout sessions and 4 expert-led workshops designed to help you optimize, innovate, and connect.

Why are SQLCon + FabCon better together?

- One registration, double the value: Register for either conference and get full access to both - mix and match sessions, keynotes, and community events to fit your interests.
- Shared spaces, shared energy: Enjoy the same expo hall, registration desk, conference app, and community lounge. Network with peers across the data platform spectrum.
- Unforgettable experiences: Join us for both keynotes at the State Farm Arena and celebrate at the legendary attendee party at the Georgia Aquarium.

Our goal is to reignite the SQL Community spirit, restoring the robust networks, friendships, and career-building opportunities that make this ecosystem so special. SQLCon is just the beginning of a renewed commitment to connect at conferences, user groups, online, and at regional events.

Early Access Pricing Extended! Register by November 14th and save $200 with code SQLCMTY200. Register Now!

Want to share your expertise? The Call for Content is open until November 20th for both conferences!

Let’s build the future of data together. See you at SQLCon + FabCon!

Resource governor - a new beginning
Executive summary

In addition to increased CPU and memory limits, we are making another change to benefit customers using the SQL Server Standard edition. Starting with SQL Server 2025, resource governor, previously an Enterprise edition feature, is also available in the Standard edition.

A powerful tool

Resource governor has been a part of SQL Server for more than 15 years. It’s a powerful, full-featured tool that improves reliability when multiple applications or users share resources on the same SQL Server instance.

"We rely on Resource Governor to isolate workloads on our SQL Server instances by controlling CPU and memory usage. It helps us ensure that the core of our trading systems remains stable and runs with predictable performance, even when other parts of the systems share the same servers." - Ola Hallengren, Chief Data Platforms Engineer at Saxo Bank and a Data Platform MVP

Yet when we ask customers if they use resource governor, a frequent answer is "We can't. We are on Standard edition." Indeed, ever since its initial release, resource governor has been an Enterprise-only feature. That made it less known and less commonly used than it deserves.

We heard and acted on your feedback. Starting with SQL Server 2025, resource governor is available in both Standard and Enterprise editions, with identical capabilities. This includes the respective Developer editions as well. In Azure SQL Managed Instance, resource governor is already available in all service tiers.

What can it do?

Here are just some of the things you can accomplish using resource governor:

- Limit or reserve CPU bandwidth and IOPS for a workload.
- Limit the size of query memory grants or reserve memory for query processing.
- Limit the total number of running requests (queries) for a workload.
- Abort query batches that exceed a specified amount of CPU time.
- Set a hard cap on the maximum degree of parallelism (MAXDOP) for queries.
- (New in SQL Server 2025) Limit the amount of tempdb space a workload can use (blog | doc).

Whether you are on Standard or Enterprise edition, if you haven't used resource governor yet, see what it can do for you. To help you learn, we have a new step-by-step guide with best practices and examples, from simple to more advanced. There is also a separate guide focusing on the new tempdb space governance feature.

Conclusion

With this change, customers on Standard edition get a powerful enterprise-grade tool to control resource contention and improve the reliability of their SQL Server environments. We encourage you to learn more about resource governor to understand the full range of its capabilities.

Have questions or feedback? Post a comment on this blog post, an idea at https://aka.ms/sqlfeedback, or email us at sql-rg-feedback@microsoft.com. Let’s make resource governor a standard (😊) tool in your toolset!
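As a taste of what a minimal configuration looks like, the sketch below caps CPU and MAXDOP for a hypothetical reporting workload. The pool, group, and login names are illustrative assumptions; the DDL follows the standard resource governor pattern (pool, workload group, classifier function in master, then reconfigure).

```sql
-- Run in master. Cap a reporting workload at 20% CPU and MAXDOP 2.
-- 'ReportingPool', 'ReportingGroup', and 'reporting_user' are illustrative names.
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 20);
GO

CREATE WORKLOAD GROUP ReportingGroup
    WITH (MAX_DOP = 2)
    USING ReportingPool;
GO

-- Classifier function: routes sessions from the hypothetical reporting login.
-- Must live in master and be schema-bound.
CREATE FUNCTION dbo.rg_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF SUSER_SNAME() = N'reporting_user'
        SET @grp = N'ReportingGroup';
    RETURN @grp;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```

Every new session then runs the classifier once at connect time; anything it does not match lands in the default group, so existing workloads are unaffected.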