SQL Agent Jobs in an AlwaysOn / Mirroring Environment
Running SQL Agent jobs in an AlwaysOn or Database Mirroring environment can be a challenge, as the database might not be accessible at the time the job starts. This is especially the case for all SAP collector and backup jobs. On the other hand, there might be jobs that should not run on the primary server (e.g. DBCC checks), but on a readable secondary or a read-only standby database (log shipping). In the worst case the jobs either fail or run on the wrong server. To prevent this, SAP created a small stored procedure to control the execution of these kinds of jobs.

To create the stored procedure, run this script on all nodes of the affected system (AlwaysOn, Database Mirroring or Log Shipping):

```sql
use master;
GO
IF (OBJECT_ID(N'dbo.CheckAccess') IS NOT NULL)
    DROP PROCEDURE [dbo].CheckAccess
GO
CREATE PROCEDURE dbo.CheckAccess
    @DBName         AS NVARCHAR(128) = N'master',
    @RunOnPrimary   AS TINYINT = 1,
    @RunOnSecondary AS TINYINT = 0,
    @RunOnReadonly  AS TINYINT = 0,
    @RC             AS TINYINT = 0 OUTPUT
AS
BEGIN
    DECLARE @IsReadOnly TINYINT;
    DECLARE @State TINYINT;
    DECLARE @StandBy TINYINT;
    DECLARE @UserAccess TINYINT;
    DECLARE @AOState TINYINT;
    DECLARE @AORole TINYINT;
    DECLARE @Message NVARCHAR(50);

    SET @RC = 0;
    SET @Message = N'Everything is OK.';

    IF EXISTS(SELECT TOP 1 database_id FROM sys.databases WHERE database_id = DB_ID(@DBName))
    BEGIN
        -- Check for the normal database state
        SELECT @IsReadOnly = is_read_only, @State = state, @StandBy = is_in_standby, @UserAccess = user_access
        FROM sys.databases
        WHERE name = @DBName;

        IF NOT ((@State + @UserAccess) = 0 AND (@IsReadOnly <= @RunOnReadonly) AND (@StandBy <= @RunOnReadonly))
        BEGIN
            SET @RC = 1;
            SET @Message = N'Database is NOT accessible.';
        END;
    END
    ELSE
    BEGIN
        SET @RC = 2;
        SET @Message = N'Database doesn''t exist.';
    END

    -- Check if we are running in an AlwaysOn configuration
    IF EXISTS(SELECT TOP 1 database_id FROM sys.dm_hadr_database_replica_states WHERE database_id = DB_ID(@DBName))
    BEGIN
        SELECT @AOState = db.database_state, @AORole = rs.role
        FROM sys.dm_hadr_database_replica_states AS db
        INNER JOIN sys.dm_hadr_availability_replica_states AS rs
            ON db.group_id = rs.group_id
        WHERE db.is_local = 1 AND rs.is_local = 1 AND db.database_id = DB_ID(@DBName)

        IF NOT (@AOState = 0 AND ((@RunOnPrimary != 0 AND @RunOnPrimary = @AORole)
                               OR (@RunOnSecondary != 0 AND @RunOnSecondary = @AORole - 1)))
        BEGIN
            SET @RC = 3;
            SET @Message = N'The database is not in the required state for Always-On access.';
        END;
    END;

    PRINT @Message
END;
GO
```

It creates a stored procedure CheckAccess in the master database. The procedure has five parameters:

| Parameter | Data Type | Default Value | Description |
| --- | --- | --- | --- |
| @DBName | NVARCHAR(128) | N'master' | Database to check. |
| @RunOnPrimary | TINYINT | 1 | Should the job run on the primary database? |
| @RunOnSecondary | TINYINT | 0 | Should the job run on the secondary database? |
| @RunOnReadonly | TINYINT | 0 | Should the job run on a read-only database (log shipping)? |
| @RC | TINYINT | 0 | Output return code (0 = success, any other value = failure). |

With the parameters @RunOnPrimary, @RunOnSecondary and @RunOnReadonly you control in which state of the AlwaysOn database the job is allowed to start. The parameters can be combined, so the job can run on primary and secondary databases if you set both to 1. Even if you do not use AlwaysOn or read-only databases, or if you set all the options to 0, the stored procedure still checks the availability of the database. It returns a failure if the database is absent, offline, suspect, in recovery or in any other state that prevents access to the database.
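If a job should run regardless of the replica role, for example on both the primary and a readable secondary, the two flags can simply be combined. A minimal sketch based on the procedure above (TST is a placeholder database name):

```sql
DECLARE @RC TINYINT;

-- Allow the job on the primary as well as on a readable secondary replica of TST
EXEC dbo.CheckAccess N'TST', 1, 1, 0, @RC OUTPUT;

SELECT @RC AS ReturnCode;   -- 0 = accessible in one of the allowed roles, anything else = do not run
```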
A valid call looks like this (check whether the TST database is accessible and whether it is the primary database in an AlwaysOn configuration):

```sql
DECLARE @RC TINYINT;
EXEC dbo.CheckAccess N'TST', 1, 0, 0, @RC OUTPUT;
```

To enable the CheckAccess stored procedure for jobs, we have to change the job a bit. In this example it is an SAP collector job (SAP_SCU_SAPMonitor_TST_MSSQL_COLLECTOR) for the TST SAP database. Open the properties of the job (right click -> Properties), switch to the Steps branch on the left and edit the first step with the name DBCOLL. Change the database to master (as the stored procedure lives there) and change the script from

```sql
EXECUTE sap_dbcoll
```

to

```sql
DECLARE @Msg NVARCHAR(128)
DECLARE @RC TINYINT
EXEC dbo.CheckAccess N'TST', 1, 0, 0, @RC OUTPUT
IF @RC = 0
    EXEC( 'use [TST] EXECUTE sap_dbcoll' )
```

Ensure that you change the database name (here TST) according to your system. On the Advanced tab change the 'On failure action' field to 'Quit the job reporting failure', then save and close the job definition. This ensures that the DBCOLL part only starts when the database is accessible and is the primary database in the AlwaysOn configuration.

You can use this stored procedure for any job when you follow this generic approach:

```sql
DECLARE @Msg NVARCHAR(128)
DECLARE @RC TINYINT
EXEC dbo.CheckAccess N'<Database in Question>', 1, 0, 0, @RC OUTPUT
IF @RC = 0
    EXEC( 'use [<Database in Question>] <Step definition>' )
```

Regards Clas
Always On - Synchronize SAP login, jobs and objects

SQL Server AlwaysOn is one of the high availability solutions available for an SAP system. It consists of two or more computers, each hosting a SQL Server with a copy of the SAP database. A listener points to the current primary copy and is used by the SAP system as the only connection point. For details on how to set up and configure an SAP system together with SQL Server AlwaysOn see this blog post and its referenced blog posts.

During the setup the SAP system is configured from the current primary node, and all non-database related objects such as SQL Server Agent jobs, logins etc. are created only on the current primary database. In case of an (automatic) failover to one of the secondary nodes of AlwaysOn, these objects are then missing. Jürgen has introduced a script (sap_helprevlogin) in his initial blog post about the database load after setting up AlwaysOn. This script transfers only the logins, but falls short on transferring jobs, server level permissions and other assignments.

One of the SAP developers working in our team has built a comprehensive PowerShell script (sap_synchronize_always_on.ps1) to perform all these tasks and to transfer all the SAP objects from the initial installation to all the other nodes of the AlwaysOn system. The script connects to the primary instance, reads the configuration of the secondary nodes and then synchronizes the objects and jobs with these nodes. The script must be executed by a domain administrator who has SQL Server sysadmin privileges on all AlwaysOn instances.

The script uses up to three input variables:

- The server name of the SQL Server instance or the listener name of the High-Availability group. The default is (local).
- The name of the SAP database, which must be in a High-Availability group on the given server.
- Single login (optional): only one login gets copied, along with the SAP CCMS jobs owned by that login. By default all logins mapped to the database are copied.

The script will execute:

- Create a procedure CheckAccess in the master database (see this blog for the details about it)
- Discover which logins are mapped to the database
- Discover which SAP CCMS jobs belong to those logins
- If a job does not use CheckAccess, change the job step to use CheckAccess and run the job step in master
- Open a connection to each secondary and:
  - Create the procedure CheckAccess in the master database
  - Create the logins if they don't exist already, using the same sid
  - Create the jobs if they don't exist already. If a job exists and does not use CheckAccess, change the job step to use CheckAccess and run in master

If new SAP CCMS jobs are added because of remote monitoring from a different SAP system using DBACOCKPIT, the script can be re-executed. It will then copy only the new objects which have not been copied before. You can find this useful script attached, which makes the synchronization of the SAP systems in an AlwaysOn environment so much easier. Please ensure that you test the execution in your test environment first, before you run it in production. Neither SAP nor Microsoft takes any responsibility for the use of this script; you run it at your own risk.

Update January 2017: New script version that copies the sid<adm> and SAPService<SID> logins from the SAP system as well.

Best regards | Bless! Clas & Guðmundur
Deadlock Analysis (Examples)

As promised in my former blog about Deadlock Analysis, we continue the deadlock series today with examples and their solutions.

Update Locks

The first example is a deadlock with two identical statements, doing deletes on table BC_MON_AGREGATES. The statements are:

```sql
DELETE FROM BC_MON_AGREGATES WHERE AGR_INTERVAL = @P0 AND START_TIME < @P1;
```

As you can see in the deadlock graph, both threads are holding (at least) an X lock on an index named PK__BC_MON_A__3214EC272B5F6B28, the primary key of this table, and both are requesting a U lock on the same index. As U locks are only used for checking whether a row matches the WHERE condition, this constellation is very often seen when the threads scan the table or index instead of seeking through an index to find the rows. In this case the solution was to create a fitting index for the WHERE clause, so that the two threads seek instead of scanning the index.

RID Locks

In the next case we see that both threads are requesting RID locks on table GENSETC. The statements were:

```sql
UPDATE GENSETC SET NAME = @P1, COMPONENT = @P2, GENSTATUS = @P3
WHERE PGMID = @P4 AND OBJECT = @P5 AND OBJNAME = @P6
```

RID locks are only used if there is no clustered index on the table; if there is a clustered index, we see KEY locks instead. The root cause for this deadlock was the missing clustered index as defined in the SAP dictionary (PGMID, OBJECT, OBJNAME).

PAGE Locks

Here we see PAGE locks on table /1OT/FDA12IT2001. Both statements were running a DELETE on that table:

```sql
DELETE FROM /1OT/FDA12IT2001 WHERE CLIENT = @P0 AND ORDID IN ( @P1, ....., @P101 )
```

PAGE locks are used when the SQL Server optimizer expects to process a lot of rows. The PAGE locks are used from the beginning, which means this is not a lock escalation, as lock escalation only goes from PAGE to TABLE and from ROW to TABLE, but never from ROW to PAGE. In this case the solution can be to disallow PAGE locks on the table (a short sketch of this option follows below), to reduce the IN clause, or to create a fitting index for the WHERE clause (an index on ORDID).

Deadlock with a SELECT

In the next deadlock we have an UPDATE and a SELECT deadlocking each other. The SELECT is easy to identify by the S locks it is using; the one on the right of the graph is the UPDATE, which uses a fitting index (no scanning). How can a SELECT be part of a deadlock, when SAP reads most of the data in READ UNCOMMITTED? The answer is that this SELECT runs in READ COMMITTED mode, which means it only reads data that is already committed, and it will be blocked by U and X locks. The application that issues the UPDATE and SELECT statements has to analyze this deadlock and avoid the parallel execution of these two statements, e.g. by using semaphores.

Multiple threads

A deadlock does not only happen between two threads; SQL Server can detect deadlocks of any depth. In a deadlock between three threads, SQL Server will kill the thread with the least amount of log used, in this case the left one in the graph. In some cases the remaining threads will deadlock again as a two-node deadlock, but most of the time the deadlock is resolved by killing one thread out of the chain. The most impressive deadlock I saw was one with 103 threads: the victim was an UPDATE on BC_JMSQUEUE and all the other waiting threads were DELETEs on the same table. The root cause was that there was no fitting index for the DELETE statement, which caused the DELETE to scan the table instead of using an index. After creating the index, the problem was resolved.
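As referenced in the PAGE locks example above, one possible mitigation is to disallow page locks on the affected index. A minimal sketch; the index name here is hypothetical and the real name has to be taken from the deadlock graph or from sys.indexes:

```sql
-- Hypothetical index name; look up the real index on the affected table in sys.indexes.
-- Disallowing page locks forces SQL Server to take row (or table) locks on this index.
ALTER INDEX [/1OT/FDA12IT2001~0] ON [/1OT/FDA12IT2001]
    SET (ALLOW_PAGE_LOCKS = OFF);
```

Note that an index with ALLOW_PAGE_LOCKS = OFF can no longer be reorganized (only rebuilt), so the option should be switched back once a better-fitting index is in place.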
Different tables

A deadlock cannot only happen on the same table, it can happen between different tables as well. Here we see a deadlock between the tables CBS_ACCESSQUEUE and CBS_LOCK; the statements are UPDATE statements on each table. These kinds of deadlocks are not easy to understand, as the deadlock graph only shows the end of the deadlock. The two statements that led to the two initial locks (the KEY lock on CBS_ACCESSQUEUE of the victim and the KEY lock on CBS_LOCK of the winner) are not part of the deadlock graph, but might be part of the problem. How to analyse such deadlocks will be part of the third post of this deadlock series.
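For completeness: on SQL Server 2008 and later, deadlock graphs like the ones discussed above can also be pulled from the built-in system_health Extended Events session, without setting up any additional trace. This is a generic sketch and not part of the SAP tooling described in these posts:

```sql
-- Read xml_deadlock_report events from the system_health ring buffer (SQL Server 2008 and later)
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT XEvent.query('.') AS XEvent
    FROM (
        SELECT CAST(target_data AS XML) AS TargetData
        FROM sys.dm_xe_session_targets AS st
        INNER JOIN sys.dm_xe_sessions AS s
            ON s.address = st.event_session_address
        WHERE s.name = N'system_health'
          AND st.target_name = N'ring_buffer'
    ) AS Data
    CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent)
) AS DeadlockEvents;
```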
Deadlock Analysis (Advanced)

With this blog my small deadlock series is completed. We started with the basics, then we discussed several examples, and today we close up with a more advanced approach to solving deadlocks. The intention behind this solution was to get the complete transactions that led to the deadlock, as in some cases the first part of the deadlock (acquiring the first set of locks) is important for understanding and solving the deadlock. There are three phases in this approach: Collect, Prepare and Analyse, as also described in SAP note 1774085.

Collect

In the Collect phase we have to set up a SQL Server Profiler trace to get all the statements that belong to the participating threads. Download the attached documents to any directory. To set up and start the profiler trace, proceed as follows:

- Log on to your SQL Server with the <sid>adm user or any other local administrator.
- Open the registry and navigate to [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL] and find your SQL Server instance name in the list (MSSQLSERVER = default instance). Keep the Data value of your instance (e.g. MSSQL.1).
- Change the registry key "NonAlertableErrors" at this location (replace <XXX> with the value found): [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\<XXX>\SQLServerAgent]. Change "NonAlertableErrors"="1204,4002" to "NonAlertableErrors"="4002".
- Restart the SQL Server Agent service.
- Open a new query window and switch to the master database.
- Load the script from the file Setup Delayed Profilerstop at Deadlock.docx and change the line that starts with set @DirectoryName = N'C:.....' to a file location with enough free space. The default is the directory c:\temp. You need at least 2.5 GB of free space in this directory.
- Save and run the changed script. It will return the ID of the created trace (usually the number 2) and a list of active traces (a query like the sketch shown a bit further down lists the active traces at any time). The trace with ID = 1 is the built-in default trace in SQL Server 2005 and higher.

The script will create and start a SQL Profiler trace, a SQL Agent job and an alert for SQL error 1204 (deadlock). The trace will write 250 MB trace files. In SQL 2005 and higher the number of files is limited to 10; under SQL 2000 there is no such limit on the number of files created. Please watch the directory to avoid filling up the disk. You can delete older trace files, but you should keep at least the five newest files. As soon as error 1204 occurs in that database, the alert gets fired and invokes the SQL Server Agent job. The job will stop the trace after 20 seconds and will clean up all the created stored procedures and the job itself. If you want to stop the trace manually, just start the job with the name 'Stop Profiler at Deadlock'. This will stop the trace and remove all the objects that the script has created. When done, revert NonAlertableErrors in the registry to the original value and restart the SQL Server Agent service.
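As referenced in the Collect steps above, the traces currently running on the instance can be listed at any time with a query against sys.traces; a minimal sketch:

```sql
-- List all traces on the instance; id 1 is the default trace, the trace created by the script is usually id 2
SELECT id, status, path, max_size, max_files, is_rollover, is_default, start_time, event_count
FROM sys.traces;
```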
Prepare

In this step the trace is imported into a new database to analyze it. Open SQL Server Management Studio and connect to the SQL Server instance where you want to perform the analysis. This does not have to be the instance where the deadlocks occur. It is recommended NOT to run the analysis on the productive SQL Server instance, even when the deadlock occurs there.

Open the script 'Step 1 - Read Into a Table.docx' in the query editor and make the following changes:

- Ensure that the name [DeadlockBlogAdv] is not used on your system as a database name, or replace all [DeadlockBlogAdv] entries with the name of a temporary database.
- Change the path and file name in the line that looks like this: FROM fn_trace_gettable ('C:\Temp\SAP-SQLTrace-2012.10.09-14.11.21', default); Change it to the path and file name of the first file of the profiler trace collected in the Collect step. SQL Server will then read this file and all subsequent files of the same trace that have the same name and trailing, increasing numbers (e.g. _1, _2 etc.).
- Execute the script. It creates the database and a trace table, and reads the given trace file and all subsequent existing files into this table. Furthermore, it creates some helpful indexes on the table to speed up the identification of the deadlock transactions. If full-text search is installed, it creates a full-text index on the TextData column to enable fuzzy searches. It returns the start and end time of the trace and the number of rows imported. As a last step it shrinks the log of the database, as it is no longer needed.
- Open the script 'Step 2 - Create Data View.sql' and execute it in the same database as the first one. It creates a view over the trace table to get more readable information from the trace.

Analysis

As everything is now prepared, please open and execute the script 'Step 3 - Get Winner and Victims.docx'. It reads the deadlock graph(s) from the trace (the same graph(s) that you get when you follow SAP note 32129) and extracts the winner and victims of the deadlock. The result is separated into four sets:

- The first one gives you a clickable link to open the deadlock in an XML view and an overview of the transactions and their last statements. When you save the XML as an .xdl file you get the same output as in my last blog posts.
- The second set lists the exact time of the deadlock (when the lock timeout occurs) and the row numbers in the trace file.
- The third and fourth sets give you the complete transactions of the winner (top) and victim (bottom).

The victim transaction always ends with a 'Lock_Timeout' followed by an exception in the EventClassDesc column, which you find close to the bottom. The statement that caused the deadlock is shown before the 'Lock_Timeout' as 'Statement Starting' with no matching 'Statement Completed'. The winner also shows a 'Lock_Timeout' at the same time, but with the 'Statement Completed' of the deadlock statement right after the 'Lock_Timeout'. I have seen multiple victims against one winner. This happens when the winner has already generated a lot of log (see the basics) and multiple, very fast victims lose against the winner in a short period of time.

If the trace contains multiple deadlocks, the different deadlocks can be separated by the column DL (deadlock) and the matching DL number. In this trace we captured three deadlocks. As an example, in deadlock number 1 the winner and victim deadlocked twice against each other (at 17:22:44.410 and 17:22:44.680); the same is true for deadlock number 2. With this information you can identify when (column StartTime) each session (SPID) acquired which locks (TextData) on the given tables (column TableName).
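To dig further than the Step 3 output, the imported trace can be queried directly for the sessions involved. The following is a minimal sketch; the view name, SPIDs and time window are placeholders, and the actual view name comes from the 'Step 2 - Create Data View' script:

```sql
-- Hypothetical object name and filter values; adjust them to the view created by Step 2
-- and to the SPIDs / deadlock time reported by Step 3.
SELECT StartTime, SPID, EventClassDesc, TableName, TextData
FROM dbo.TraceDataView
WHERE SPID IN (75, 104)                                              -- winner and victim SPIDs
  AND StartTime BETWEEN '2012-10-09 14:10:00' AND '2012-10-09 14:12:00'
ORDER BY StartTime, SPID;
```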
This information can help you to understand the deadlock and then either modify the application or create or modify indexes to avoid the deadlock. Have a nice and deadlock-free day.
SQL Server AG Failover - Automatic Failover

Hello, I am looking for a straight and definitive answer that I was hoping someone could give me. I want to trust what Copilot says, but I really need to hear it from Microsoft, and I can't find any documentation from Microsoft confirming my question.

My environment:

- 2 replicas in datacenter 1
- 1 replica in datacenter 2
- All three (3) replicas are set to synchronous-commit mode with automatic failover.

I tested the failover manually between all three (3) replicas without issue. When I test the automatic failover, I take down both replicas in datacenter 1 at the same time to simulate a datacenter outage. I look at the replica in datacenter 2 and it just says (Resolving...) next to the replica name. The replica does not come online and the DB is not moved. When I was searching I couldn't find out why, so I turned to Copilot, not to solve the issue, but to see if it could point me in the right direction.

I told Copilot my setup and what happened. Copilot responded stating that, by design from Microsoft, you cannot have more than two (2) replicas set to synchronous-commit mode with automatic failover in a SQL Server AG instance, and that if more than two (2) are set for automatic failover, the SQL Server AG will use the first two (2) replicas it sees in its metadata and ignore the rest. Copilot went into detail about why this is designed this way, but the amount of information would make this post longer than it already is. If this is true, then when I took down both replicas in datacenter 1, SQL Server AG only saw those two (2) replicas in datacenter 1 as the available replicas to use for an automatic failover, and that is why the replica in datacenter 2 did not come online and the DB was not moved.

So let's do a test. I brought back up the two (2) replicas in datacenter 1. Then I made a change in the AG properties: I set the 2nd replica in datacenter 1 to manual. So 1 replica is set to automatic failover and 1 replica is set to manual failover in datacenter 1, and the replica in datacenter 2 is set to automatic failover. I then take down both replicas in datacenter 1 again to simulate the "outage", and the replica in datacenter 2 comes online and the DB is moved.

So is Copilot right? Can there only be two (2) replicas allowed to have/use automatic failover? I cannot find a definitive answer confirming this. Or is my configuration wrong/missing something, and if it is, could you please point me in the right direction on how to get this resolved?
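For reference, the availability and failover mode actually configured for each replica can be confirmed with the availability group catalog views; a minimal sketch using standard system views:

```sql
-- Show the configured availability and failover mode of every replica in each availability group
SELECT ag.name                  AS ag_name,
       ar.replica_server_name,
       ar.availability_mode_desc,   -- SYNCHRONOUS_COMMIT / ASYNCHRONOUS_COMMIT
       ar.failover_mode_desc        -- AUTOMATIC / MANUAL
FROM sys.availability_groups AS ag
JOIN sys.availability_replicas AS ar
    ON ar.group_id = ag.group_id
ORDER BY ag.name, ar.replica_server_name;
```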
Run a SQL Query with Azure Arc

Hi All, in this article you can find a way to retrieve database permissions from all your onboarded databases through Azure Arc. The idea was born from a customer request around maintaining a standard permission set in a very wide environment (about 1,000 SQL Servers).

This solution is based on Azure Arc, so first you need to onboard your SQL Servers to Azure Arc and enable the SQL Server extension. If you want to test Azure Arc in a test environment, you can use Azure Jumpstart; in that repo you will find ready-to-deploy ARM templates that deploy demo environments. The other solution components are an Automation account, a Log Analytics workspace and a data collection rule / endpoint. Here is a little recap of the purpose of each component:

- Automation account: with this resource you can run and schedule a PowerShell script, and you can also store the credentials securely
- Log Analytics workspace: here you will create a custom table and store all the data that comes from the script
- Data collection endpoint / data collection rule: these enable you to open a public endpoint so the collected data can be ingested into the Log Analytics workspace

In this section you will discover how I composed the six phases of the script.

Obtain the bearer token and authenticate on the portal: first of all you need to authenticate against Azure to get all the SQL instances and to obtain the token used to send your assessment data to Log Analytics.

```powershell
$tenantId = "XXXXXXXXXXXXXXXXXXXXXXXXXXX"
$cred = Get-AutomationPSCredential -Name 'appreg'
Connect-AzAccount -ServicePrincipal -Tenant $tenantId -Credential $cred
$appId = $cred.UserName
$appSecret = $cred.GetNetworkCredential().Password

$endpoint_uri = "https://sampleazuremonitorworkspace-weu-a5x6.westeurope-1.ingest.monitor.azure.com" # Logs ingestion URI for the DCR
$dcrImmutableId = "dcr-sample2b9f0b27caf54b73bdbd8fa15908238799"                                    # the immutableId property of the DCR object
$streamName = "Custom-MyTable"

Add-Type -AssemblyName System.Web   # make sure HttpUtility is available
$scope = [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default")
$body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials"
$headers = @{ "Content-Type" = "application/x-www-form-urlencoded" }
$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
$bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token
```

Get all the SQL instances: in my example I take all the instances, but you can also use a tag to filter some resources. For example, if you want to assess only the production environment you can use a tag as a filter.

```powershell
$servers = Get-AzResource -ResourceType "Microsoft.AzureArcData/SQLServerInstances"
```

Once you have all the SQL instances, you can run your T-SQL query to obtain all the permissions. Remember, here we are looking for permissions, but you can use this for any query you want, or in any other situation where you need to run a command on a generic server:
```powershell
$SQLCmd = @'
Invoke-SQLcmd -ServerInstance . -Query "USE master;
BEGIN
  IF LEFT(CAST(Serverproperty('ProductVersion') AS VARCHAR(1)),1) = '8'
  begin
    IF EXISTS (SELECT TOP 1 * FROM tempdb.dbo.sysobjects (nolock) WHERE name LIKE '#TUser%')
    begin
      DROP TABLE #TUser
    end
  end
  ELSE
  begin
    IF EXISTS (SELECT TOP 1 * FROM tempdb.sys.objects (nolock) WHERE name LIKE '#TUser%')
    begin
      DROP TABLE #TUser
    end
  end

  CREATE TABLE #TUser (DBName SYSNAME, [Name] SYSNAME, GroupName SYSNAME NULL, LoginName SYSNAME NULL,
                       default_database_name SYSNAME NULL, default_schema_name VARCHAR(256) NULL, Principal_id INT);

  IF LEFT(CAST(Serverproperty('ProductVersion') AS VARCHAR(1)),1) = '8'
    INSERT INTO #TUser
    EXEC sp_MSForEachdb '
      SELECT ''?'' as DBName, u.name As UserName,
             CASE WHEN (r.uid IS NULL) THEN ''public'' ELSE r.name END AS GroupName,
             l.name AS LoginName, NULL AS Default_db_Name, NULL as default_Schema_name, u.uid
      FROM [?].dbo.sysUsers u
      LEFT JOIN ([?].dbo.sysMembers m JOIN [?].dbo.sysUsers r ON m.groupuid = r.uid) ON m.memberuid = u.uid
      LEFT JOIN dbo.sysLogins l ON u.sid = l.sid
      WHERE (u.islogin = 1 OR u.isntname = 1 OR u.isntgroup = 1) and u.name not in (''public'',''dbo'',''guest'')
      ORDER BY u.name '
  ELSE
    INSERT INTO #TUser
    EXEC sp_MSforeachdb '
      SELECT ''?'', u.name,
             CASE WHEN (r.principal_id IS NULL) THEN ''public'' ELSE r.name END GroupName,
             l.name LoginName, l.default_database_name, u.default_schema_name, u.principal_id
      FROM [?].sys.database_principals u
      LEFT JOIN ([?].sys.database_role_members m JOIN [?].sys.database_principals r ON m.role_principal_id = r.principal_id) ON m.member_principal_id = u.principal_id
      LEFT JOIN [?].sys.server_principals l ON u.sid = l.sid
      WHERE u.TYPE <> ''R'' and u.TYPE <> ''S'' and u.name not in (''public'',''dbo'',''guest'')
      order by u.name ';

  SELECT DBName, Name, GroupName, LoginName
  FROM #TUser
  where Name not in ('information_schema') and GroupName not in ('public')
  ORDER BY DBName, [Name], GroupName;

  DROP TABLE #TUser;
END"
'@

# $server1 holds the name of one Arc-enabled machine, e.g. taken from the $servers collection retrieved above
$command = New-AzConnectedMachineRunCommand -ResourceGroupName "test_query" -MachineName $server1 `
    -Location "westeurope" -RunCommandName "RunCommandName" -SourceScript $SQLCmd
```

(A simplified, single-database version of this permission query is sketched at the end of this section.)

In a second you will receive the output of the command, and you must send it to the Log Analytics workspace (aka LAW). In this phase you can also review the output before sending it to LAW, for example removing some text or filtering some results. In my case, I am adding the information about the server where the script runs to each record.

```powershell
# Split the command output into lines, drop empty lines and escape backslashes for the JSON payload
$array = ($command.InstanceViewOutput -split "`r?`n" | Where-Object { $_.Trim() }) |
    ForEach-Object { $_.Replace('\', '\\') }

# Remove the CSV header and the separator line from the result set
$array = $array | Where-Object { $_ -notmatch "DBName,Name,GroupName,LoginName" } |
    Where-Object { $_ -notmatch "------" }
```

The last phase is designed to send the output to the Log Analytics workspace using the DCE/DCR.

```powershell
# $currentTime and $raw (timestamp and collected row) are built from $array in the full script;
# this excerpt only shows the upload call against the logs ingestion endpoint
$staticData = @"
[{
  "TimeGenerated": "$currentTime",
  "RawData": "$raw"
}]
"@
$body = $staticData
$headers = @{ "Authorization" = "Bearer $bearerToken"; "Content-Type" = "application/json" }
$uri = "$endpoint_uri/dataCollectionRules/$dcrImmutableId/streams/$($streamName)?api-version=2023-01-01"
$rest = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers
```

When the data arrives in the Log Analytics workspace you can query it, and you can create a dashboard or, why not, an alert.
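For readability, the following is a simplified, single-database sketch of what the permission query above collects on SQL Server 2005 and later. It is not the exact statement used by the runbook, just the same idea without the sp_MSforeachdb wrapper and without the SQL Server 2000 branch:

```sql
-- Users in the current database, the roles they belong to and the login they map to
SELECT DB_NAME()                AS DBName,
       u.name                   AS UserName,
       ISNULL(r.name, 'public') AS GroupName,
       l.name                   AS LoginName
FROM sys.database_principals AS u
LEFT JOIN sys.database_role_members AS m
       ON m.member_principal_id = u.principal_id
LEFT JOIN sys.database_principals AS r
       ON r.principal_id = m.role_principal_id
LEFT JOIN sys.server_principals AS l
       ON l.sid = u.sid
WHERE u.type <> 'R'                                     -- skip the roles themselves
  AND u.name NOT IN ('public', 'dbo', 'guest', 'INFORMATION_SCHEMA', 'sys')
ORDER BY u.name, GroupName;
```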
Now you will see how you can implement this solution. For the Log Analytics workspace, DCE and DCR you can follow the official docs: Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Resource Manager templates) - Azure Monitor | Microsoft Learn.

After you have created the DCR and the Log Analytics workspace with its custom table, you can proceed with the Automation account:

- Create an Automation account using the creation wizard; you can proceed with the default parameters.
- When the Automation account creation is completed, create a credential in the Automation account. This allows you to avoid exposing the credential used to connect to Azure. Insert the enterprise application and the key here.
- Now you are ready to create the runbook (basically the script that we will schedule). Give it any name you want and click Create.
- Go to the Automation account, then Runbooks and Edit in Portal, and copy in your script or the script in this link. Remember to replace the tenant ID (you will find it in the Entra ID section) and the enterprise application.
- Test it using the Test pane function, and when you are ready, publish it and link a schedule, for example daily at 5 AM.

Remember, today we talked about database permissions, but the scenarios are endless: checking a requirement, deploying a small fix, or removing/adding a configuration, all at scale. In the end, as you can see, Azure Arc is not just another agent; it is a chance to empower every environment (and every other cloud provider 😉) with Azure technology. See you in the next techie adventure.

**Disclaimer** The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
SQL Timestamp Incremental Load Issue

An incremental load is implemented based on a timestamp, using a view to update a fact table.

- The fact table contains multiple KPIs coming from different source tables.
- Two main KPI tables are included directly in the view and drive the incremental logic (their timestamps change).
- Additional KPIs come from other source tables, which are LEFT JOINed to the view.

During an incremental run:

- Main source tables are updated → timestamp changes
- Other source tables are NOT updated → timestamp unchanged
- Because of the LEFT JOIN, missing values from these tables are returned as NULL
- These NULLs overwrite existing values in the fact table

Example

Fact table (before load)

| id | app_number | score |
| --- | --- | --- |
| 1 | 333 | 5 |

Source tables

- source1, source2 → timestamps updated (drive incremental load)
- source3 → timestamp unchanged

Stored procedure logic

```sql
MERGE fact_table tgt
USING (
    SELECT app_number, ISNULL(score, 0) AS score
    FROM vw_main_kpis v
    LEFT JOIN source3 s3 ON v.app_number = s3.app_number
) src
ON tgt.app_number = src.app_number
WHEN MATCHED THEN
    UPDATE SET tgt.score = src.score;
```

Result (incorrect)

| id | app_number | score |
| --- | --- | --- |
| 1 | 333 | 0 |

Existing data is lost.
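One common way to avoid this pattern (not necessarily the right fix for this specific model, just a sketch) is to stop coalescing the missing value to 0 and to only overwrite the target when the LEFT JOINed source actually delivered a value:

```sql
MERGE fact_table AS tgt
USING (
    SELECT v.app_number, s3.score          -- keep NULL when source3 has no matching row
    FROM vw_main_kpis AS v
    LEFT JOIN source3 AS s3 ON v.app_number = s3.app_number
) AS src
ON tgt.app_number = src.app_number
WHEN MATCHED AND src.score IS NOT NULL     -- only update when a real value was delivered
    THEN UPDATE SET tgt.score = src.score;
```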
"Invalid Managed Identity" error in vulnerability assessments for SQL Managed Instance

In a recent case, a customer received the error "Invalid Managed Identity" when trying to scan a database. The error suggests that there is an issue with the managed identity or the required RBAC permissions, but the same error occurs when there is a connectivity issue between the SQL Managed Instance and the storage account selected.

Checking RBAC permissions

The RBAC permissions can be checked manually in the Azure portal, or you can use the script below in Azure CLI, providing the resource details.

```powershell
#SQL Managed Instance Details
$serverResourceGroup = ''
$serverName = ''

#Storage Account Details
$storageAccountResourceGroup = ''
$storageAccount = ''

##############################################################################################################################

$sqlIdentity = ((az sql mi show -g $serverResourceGroup -n $serverName | ConvertFrom-Json).Identity).principalId
$storageId = (az storage account show -g $storageAccountResourceGroup -n $storageAccount | ConvertFrom-Json).id

$permissions = $NULL
$permissions = az role assignment list --all --assignee $sqlIdentity | ConvertFrom-Json |
    Where-Object { $_.scope -eq $storageId -and $_.roleDefinitionName -eq 'Storage Blob Data Contributor' }

if ($permissions -eq $NULL) { Write-Host "RBAC permissions do not exist" }
else { Write-Host "RBAC Permissions exist" }
```

It will return a simple message confirming whether the permissions exist.

Connectivity issues

If the permissions do exist, the error may be due to connectivity issues between the SQL Managed Instance and the storage account. Listed below are ways to check this.

Storage account networking configuration

The storage account can be configured to allow the following access:

- Public - All
- Public - Selected networks
- Private

If the access is set to Selected Networks, make sure the SQL Managed Instance subnet is in the list. If the access is private only, then the SQL Managed Instance needs to be able to resolve the private IP in DNS.

NSG/Firewall rules and routing

Check that there are no rules blocking connections between the two resources and that the routing is configured correctly.

DNS settings

If the DNS settings are custom or a private endpoint is being used, the DNS configuration for the SQL Managed Instance virtual network may need to be adjusted, for example by adding a private DNS zone.

Network peering

If the connectivity is through a private endpoint in a different virtual network, check that there is communication between them, such as network peering.

Disclaimer

Please note that products and options presented in this article are subject to change. This article reflects the documentation in January 2026. I hope this article was helpful for you; please feel free to share your feedback in the comments section.
SQL Server 2005 (compatibility level 90)

Hello, I’m testing the behavior described in the SQL Server documentation for **compatibility level 90** regarding the special attributes `xsi:nil` and `xsi:type`:

> "The special attributes `xsi:nil` and `xsi:type` can't be queried or modified by data manipulation language statements. This means that `/e/@xsi:nil` fails while `/e/@*` ignores the `xsi:nil` and `xsi:type` attributes. However, `/e` returns the `xsi:nil` and `xsi:type` attributes for consistency with `SELECT xmlCol`, even if `xsi:nil = "false"`."

But on **SQL Server 2005**, I can successfully query `@xsi:nil` and it returns the expected value. I’m trying to reproduce the documented "`/e/@xsi:nil` fails" behavior, but I can’t.

### Environment

- Product: **Microsoft SQL Server 2005**
- Database compatibility level: **90**

---

## ✅ Repro script

```sql
IF EXISTS (SELECT * FROM sys.xml_schema_collections WHERE name = 'MyTestSchema')
    DROP XML SCHEMA COLLECTION MyTestSchema;
GO

CREATE XML SCHEMA COLLECTION MyTestSchema AS N'
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="root">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="element" nillable="true" type="xsd:string" />
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>';
GO

DECLARE @xmlData XML(MyTestSchema) = N'
<root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <element xsi:nil="true" />
</root>';

;WITH XMLNAMESPACES ('http://www.w3.org/2001/XMLSchema-instance' as xsi)
SELECT @xmlData.query('<result> { /root/element/@xsi:nil } </result>') AS Typed_Result;
```

### Actual result

`Typed_Result` contains `xsi:nil="true"` under `<result>...`.

### Expected result (based on docs)

I expected `/root/element/@xsi:nil` to fail, or not return `xsi:nil`.

---

## Questions

1. In the documentation, does "data manipulation language statements" mean only **XML DML** (i.e., `.modify()`), not XQuery used in `SELECT` with `.query()` / `.value()`?
2. Does the "`/e/@xsi:nil` fails" behavior apply only when the XML is stored in a **table column**, not when using an **XML variable**?
3. Is the behavior different between **typed XML** (with an XML schema collection) vs **untyped XML**?
4. Can someone provide a minimal reproduction in SQL Server 2005 where `/e/@xsi:nil` fails as described?

Thank you.