Azure SQL DB
What would be the expected behavior for an NSP?
I'm using a network security perimeter (NSP) in Azure. Two resources are assigned to the perimeter: a storage account and an Azure SQL database. I run BULK INSERT dbo.YourTable FROM 'sample_data.csv' to load data from the storage account. The NSP is enforced for both resources, so public connectivity is denied for resources outside the perimeter. I have experienced this behavior: Azure SQL CANNOT access the storage account when I run the command. I resolved it by doing the following:

1. I added an outbound rule in the NSP to reach the storage FQDN.
2. I added an inbound rule in the NSP to allow the public IP of the Azure SQL server.

When I do 1 and 2, Azure SQL is able to pull data from the storage account. IMHO this is not the expected behavior for two resources in the same NSP; I would expect that, since they are in the same perimeter, they can communicate with each other. I have experienced different behavior when using a key vault in the same NSP. I use the key vault to get the encryption keys for the same storage account, and I didn't have to create any rule for it to communicate with the storage, since they are in the same NSP. I know Azure SQL support for NSP is in preview while Key Vault support is GA, but I want to ask whether the behavior I'm seeing (SQL CANNOT connect to the storage even though both are in the same NSP) is due to an unstable or unimplemented feature, or whether I'm missing something. What is the expected behavior? Thank you, community!!
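For context, here is a minimal sketch of the kind of load described in the question, using an external data source backed by the storage account. All object names (the credential, the data source, the container URL, and the target table) are hypothetical, and the credential assumes the SQL server's managed identity has been granted access to the blob container:

-- Requires a database master key (CREATE MASTER KEY) if one does not already exist.
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'Managed Identity';

CREATE EXTERNAL DATA SOURCE MyBlobStorage
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://yourstorageaccount.blob.core.windows.net/yourcontainer',
    CREDENTIAL = BlobCredential
);

-- Load the CSV from the storage account into the target table.
BULK INSERT dbo.YourTable
FROM 'sample_data.csv'
WITH (
    DATA_SOURCE = 'MyBlobStorage',
    FORMAT = 'CSV',
    FIRSTROW = 2
);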
Removing barriers to migrating databases to Azure with Striim's Unlimited Database Migration program

Alok Pareek, co-founder and Executive Vice President of Product and Engineering at Striim
Shireesh Thota, Corporate Vice President of Databases at Microsoft

Every modernization strategy starts with data. It's what enables advanced analytics and AI agents today and prepares enterprises for what's to come in the future. But before services like Microsoft Fabric, Azure AI Foundry, or Copilot can create that value, the underlying data needs to move into Microsoft's cloud platforms. It's within that first step, database migration, where the real complexity often lies. To simplify the process, Microsoft has expanded its investment in the Striim partnership. Striim continuously replicates data from existing databases into Azure in real time, enabling online migrations with zero downtime. Through this partnership, we have collaborated to enable modernization and migration into Azure at no additional cost to our customers. We've designed this Unlimited Database Migration program to accelerate adoption by making migrations easier to start, easier to scale, and easier to complete, all without disrupting business operations. Since launch, this joint program has already driven significant growth in customer adoption, underscoring the demand for faster, more seamless modernization. And with Microsoft's continued investment in this partnership, enterprises now have a proven, repeatable path to modernize their databases and prepare their data for the AI era. Watch or listen to our recent podcast episode (Apple Podcasts, Spotify, YouTube) to learn more.

Striim's Unlimited Migration Program

Striim's Unlimited Database Migration Program was designed to make modernization as straightforward as possible for Microsoft customers. Through this initiative, enterprises gain unlimited Striim licenses to migrate as many databases as they need at no additional cost. Highlights and benefits of the program include:

- Zero-downtime, zero-data-loss migrations.
- Supported sources: SQL Server, MongoDB, Oracle, MySQL, PostgreSQL, and Sybase.
- Supported targets: Azure Database for MySQL, Azure Database for PostgreSQL, Azure Cosmos DB, and Azure SQL.
- Support for mission-critical, heterogeneous workloads across SQL, Oracle, NoSQL, and OSS databases.
- Faster AI adoption: once migrated, data is ready for analytics and AI.

Access is streamlined through Microsoft's Cloud Factory Accelerator team, which manages program enrollment and coordinates the distribution of licenses. Once onboarded, customers receive installation walkthroughs, an enablement kit, and direct support from Striim architects. Cutover support, hands-on labs, and escalation paths are all built in to help migrations run smoothly from start to finish. Enterprises can start migrations quickly, scale across business units, and keep projects moving without slowing down for procurement hurdles. Migrations can begin when the business is ready, not when budgets or contracts catch up.

How Striim Powers Online Migrations

Within Striim's database migrations, schema changes and metadata evolution are automatically detected and applied, preserving data accuracy and referential integrity. As the migration progresses, Striim automatically coordinates both the initial bulk load of historical data and the ongoing synchronization of live transactions. This ongoing synchronization keeps source and target systems in sync for as long as needed to actively test the target applications with real data before doing the cutover, thereby minimizing risk.
The foundation of Striim's approach is log-based Change Data Capture (CDC), which streams database changes in real time from source to target with sub-second latency. Migrations therefore do more than move a static snapshot of a database: they continuously replicate every update as it happens, so both environments remain aligned with minimal impact on operational systems throughout the process. While the snapshot (initial load) is being applied to the target system, Striim captures all the changes that occur at the source. Once the initial load is complete, Striim applies those changes using CDC, and from that point on the source and target systems stay in sync. This eliminates the need to shut down the source system during the initial load and enables customers to complete their migrations without any downtime of the source database. Striim is also designed to work across hybrid and multi-cloud architectures. It can seamlessly move workloads from on-premises databases, SaaS applications, or other clouds into Microsoft databases. By maintaining exactly-once delivery and keeping downstream systems in sync, Striim reduces risk and accelerates the path to modernization. Striim is available in the Azure Marketplace, giving customers a native, supported way to integrate it directly into their Azure environment. This means migrations can be deployed quickly, governed centrally, and scaled as business needs evolve, all while aligning with Azure's security and compliance standards.

From Migration to Value

With workloads fully landed in Azure, enterprises can immediately take advantage of the broader Microsoft data ecosystem. Fabric, Azure AI Foundry, and Copilot become available as extensions of the database foundation, allowing teams to analyze, visualize, and enrich data without delay. Enterprises can begin adopting Microsoft AI services with data that is current, trusted, and governed. Instead of treating migration as an isolated project, customers gain an integrated pathway to analytics and AI, creating value as soon as databases go live in Azure.

How Enterprises Are Using the Program Today

Across industries, we're already seeing how this program changes the way enterprises approach modernization.

Financial Services: Moving from Oracle to Azure SQL, one global bank used Striim to keep systems in sync throughout the migration. With transactions flowing in real time, they stood up a modern fraud detection pipeline on Azure that identifies risks as they happen.

Logistics: For a logistics provider, shifting package-tracking data from MongoDB to Azure Cosmos DB meant customers could monitor shipments in real time. Striim's continuous replication kept data consistent throughout the cutover, so the company didn't have to trade accuracy for speed.

Healthcare: A provider modernizing electronic medical records from Sybase to Azure SQL relied on Striim to ensure clinicians never lost access. With data now in Azure, they can meet compliance requirements while building analytics that improve patient care.

Technology: InfoCert, a leading provider of digital trust services specializing in secure digital identity solutions, opted to migrate its critical Legalmail Enterprise application from Oracle to Azure Database for PostgreSQL. Using Striim and Microsoft, they successfully migrated 2 TB of data across 12 databases and completed the project within a six-month timeframe, lowering licensing costs, enhancing scalability, and improving security.
What unites these stories is a common thread: once data is in Azure, it becomes part of a foundation that's ready for analytics and AI.

Accelerate Your Path to Azure

Instead of database migration being the bottleneck for modernization, it is now the starting point for what comes next. With the Unlimited Database Migration Program, Microsoft and Striim have created a path that removes friction and clears the way for innovation. Most customers can simply reach out to their Microsoft account team or seller to begin the process. Your Microsoft representative will validate that your migration scenario is supported by Striim, and Striim will allocate the licenses, provide installation guidance, and deliver ongoing support. If you're unsure who your Microsoft contact is, you can connect directly with Striim, and we'll coordinate with Microsoft on your behalf. There's no lengthy procurement cycle or complex setup to navigate. With Microsoft and Striim jointly coordinating the program, enterprises can begin migrations as soon as they're ready, with confidence that support is in place from start to finish. Simplify your migration and move forward with confidence. Talk to your Microsoft representative or book a call with the Striim team today to take advantage of the Unlimited Database Migration Program and start realizing the value of Azure sooner. Or, if you're attending Microsoft Ignite, visit Striim at booth 6244 to learn more, ask questions, and see how Striim and Microsoft can help accelerate your modernization journey together.
General Availability - DMS's PowerShell, Azure CLI, and Python SDK
We're excited to announce the General Availability (GA) of the DMS client tools - PowerShell, Azure CLI, Python SDK, and more. This milestone unlocks efficient, stable, and scalable automation options for database migration workflows, making it easier than ever to integrate DMS into your DevOps pipelines and enterprise migration strategies.

💡 Introduction

With the general availability of the DMS client tools - PowerShell, Azure CLI, and the SDKs - users can now use stable releases of:

- PowerShell module 1.0.0 (https://www.powershellgallery.com/packages/Az.DataMigration/1.0.0)
- Azure CLI extension 1.0.0 (https://learn.microsoft.com/en-us/cli/azure/datamigration?view=azure-cli-latest)
- DMS V2 APIs (version 2025-06-30)
- SDKs for multiple languages (listed below)

SDK releases by language:

- .NET: https://www.nuget.org/packages/Azure.ResourceManager.DataMigration/1.0.0
- Java: https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-datamigration/1.1.0
- Go: https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/datamigration/armdatamigration/v2
- Python: azure-mgmt-datamigration · PyPI
- JavaScript: https://www.npmjs.com/package/@azure/arm-datamigration/v/3.0.0

🔧 What's New?

Three new commands have been introduced in the latest releases of the SDK, PowerShell module, and CLI extension, as outlined below.

New CLI commands:

- az datamigration sql-db retry - Retry the failed SQL DB migrations.
- az datamigration sql-managed-instance delete - Delete Azure SQL MI's Database Migration resource.
- az datamigration sql-vm delete - Delete Azure SQL VM's Database Migration resource.

New PowerShell commands:

- Invoke-AzDataMigrationRetryToSqlDb - Retry the failed SQL DB migrations.
- Remove-AzDataMigrationToSqlManagedInstance - Delete Azure SQL MI's Database Migration resource.
- Remove-AzDataMigrationToSqlVM - Delete Azure SQL VM's Database Migration resource.

🚀 Conclusion

With this GA/stable release, users can now:

- Configure and execute migrations with full control.
- Automate migrations: DevOps teams can embed migration steps into CI/CD pipelines.
- Integrate DMS into custom applications and orchestration tools.

These tools support all the DMS migration scenarios, from simple lift-and-shift operations to complex logical migrations, while ensuring stability and repeatability. For more details, refer to:

- Documentation: Migrate databases at scale using Azure PowerShell / CLI
- PowerShell: Az.DataMigration Module
- Azure CLI: az datamigration
- Python SDK: azure-mgmt-datamigration · PyPI
Lessons Learned #535: BACPAC Import Failures in Azure SQL Database Due to Incompatible Users

We recently worked on a support case where a customer was trying to import a BACPAC file, generated on a different server and subscription, into their Azure SQL Database. The process kept failing with the following error:

"Could not import package. Error SQL72014: Framework Microsoft SqlClient Data Provider: Msg 33159 - Only connections established with Active Directory accounts can create other Active Directory users"

At first glance this looked like a permissions issue, but digging deeper we realized that the error was triggered when the import process tried to create Entra ID (Azure AD) users while the connection was being made with a SQL login. We checked several things in the BACPAC:

- The BACPAC contained references to external Active Directory users that were valid in the source environment but not in the target.
- Both the Azure portal and SQL Server Management Studio (SSMS) failed with the same error.
- Since BACPAC files include both schema and user objects, incompatible users were being carried over and breaking the import.

After a thorough investigation, the following resolution path was established:

1. We created a dummy copy of the source database.
2. We removed the external AD/Entra users from that copy.
3. We generated a new BACPAC from this cleaned database.
4. We imported it into the target Azure SQL Database, and this time it worked.

Key takeaways: BACPAC files include both schema and security objects, including users. External Active Directory users that are not present in the target environment can cause import failures. Before exporting, review and remove or adjust user objects to avoid this issue, particularly when migrating across subscriptions, servers, or organizations with different Azure AD tenants.
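Before exporting, one quick way to see which Entra ID (external) users and groups would be carried into the BACPAC is to query sys.database_principals in the source database; a minimal sketch (the DROP USER name is only an illustration):

-- List external (Entra ID / Azure AD) users and groups in the source database.
SELECT name, type_desc, authentication_type_desc
FROM sys.database_principals
WHERE type IN ('E', 'X');   -- E = external user, X = external group

-- For each principal that will not exist in the target tenant (hypothetical name):
-- DROP USER [user@contoso.com];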
Lesson Learned #531: Scalar UDF vs Parallelism

Last week I worked on a support case where our customer reported that the exact same query, executed against two identical databases with the same resources, was taking significantly longer on one of them. Both databases had the same number of rows, up-to-date statistics, and identical indexes. We started by collecting the execution plans, and I'd like to share what we found.

Comparing both execution plans, in the XML of the plan that takes longer we found the following line:

<QueryPlan DegreeOfParallelism="0" NonParallelPlanReason="TSQLUserDefinedFunctionsNotParallelizable">

However, in the XML of the plan that takes less time we found:

<QueryPlan DegreeOfParallelism="1" ContainsInlineScalarTsqlUdfs="true">

Based on this difference, it is clear that the query uses a scalar UDF, but in one of the databases the definition of that scalar UDF makes it impossible to run the query in parallel, while in the other database, even with a scalar UDF, parallelism is possible. As both databases use compatibility level 160, we analyzed what is different in the two function definitions that leads to this behavior. Here is an example you can reproduce:

DROP TABLE IF EXISTS dbo.TestData;
GO
CREATE TABLE dbo.TestData
(
    ID INT IDENTITY(1,1) PRIMARY KEY,
    Value1 INT,
    Value2 INT
);

INSERT INTO dbo.TestData (Value1, Value2)
SELECT ABS(CHECKSUM(NEWID()) % 10000),
       ABS(CHECKSUM(NEWID()) % 10000)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
WHERE a.object_id < 150 AND b.object_id < 150;

Let's create the scalar function that blocks parallel execution:

CREATE OR ALTER FUNCTION dbo.fn_BlockParallel (@v1 INT)
RETURNS INT
AS
BEGIN
    DECLARE @x INT;
    SELECT @x = DATEDIFF(MILLISECOND, GETDATE(), SYSDATETIME());
    RETURN ISNULL(@x, 0);
END;

When I executed the following query:

SELECT ID, dbo.fn_BlockParallel(Value1)
FROM dbo.TestData
WHERE Value1 > 100
OPTION (MAXDOP 4);
GO

the execution plan XML contained:

<QueryPlan DegreeOfParallelism="0" NonParallelPlanReason="TSQLUserDefinedFunctionsNotParallelizable" CachedPlanSize="16" CompileTime="1" CompileCPU="1" CompileMemory="216">

If I modify the function as follows (this version does not use time-dependent intrinsic functions such as GETDATE, so it can be inlined), the same query produces:

CREATE OR ALTER FUNCTION dbo.fn_BlockParallel (@v1 INT)
RETURNS INT
AS
BEGIN
    DECLARE @x INT;
    SELECT @x = @v1 * 2;
    RETURN @x;
END;

<QueryPlan DegreeOfParallelism="1" CachedPlanSize="16" CompileTime="1" CompileCPU="1" CompileMemory="272" ContainsInlineScalarTsqlUdfs="true">

So, even when using compatibility level 160, certain constructs inside scalar UDFs can prevent inlining, which in turn blocks query parallelism. When performance varies between environments, one of the things to check is whether scalar UDFs are involved and whether they are eligible for inlining. To detect the issue quickly, look at the execution plan XML and check the attributes DegreeOfParallelism, ContainsInlineScalarTsqlUdfs, and NonParallelPlanReason.
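As a complement to reading the plan XML, you can also check up front whether a scalar UDF is eligible for inlining by looking at the is_inlineable column of sys.sql_modules (available on compatibility level 150 and above); a minimal sketch:

-- List scalar UDFs and whether the engine considers them inlineable.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS function_name,
       m.is_inlineable
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE o.type = 'FN';   -- SQL scalar functions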
Lesson Learned #533: Intermittent Azure SQL Database Connectivity and Authentication Issues

While working on a recent service request, we helped a customer troubleshoot intermittent connection and authentication failures when accessing Azure SQL Database using Entra ID (Azure AD) authentication from a Java-based application that uses HikariCP with JDBC/ODBC. They got the following errors:

com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Failed to authenticate.. Request was throttled according to instructions from STS. Retry in 29701 ms.
java.sql.SQLTransientConnectionException: HikariPool-application1 - Connection is not available, request timed out after

The first insight came from focusing on the error message: "Request was throttled according to instructions from STS. Retry in 29701 ms." This message is returned by the Entra ID Security Token Service (STS) when the client sends too many token requests in a short period of time, exceeding the allowed threshold. We didn't have all the details, but in high-concurrency environments (for example, many threads and a large connection pool), each thread can independently request a new token, and if the connection pool retries frequently or authentication fails, the number of token requests can spike until the service limit is reached. This is why HikariCP, which tries to initialize or refresh connections quickly, can trigger STS throttling: many threads attempt to connect at once and all request tokens simultaneously.

There are several ways to avoid this situation: ensure the application caches tokens and reuses them across threads, use a managed identity, increase the retry-after delay, or, depending on the HikariCP configuration, pre-warm connections gradually. Discussing the limits with your Entra ID administrator is another option.
Lessons Learned #534: Azure SQL Database Connections with Managed Identity and Python ODBC

We worked on a service request where our customer was trying to enable their Python application, hosted on Azure App Service, to connect securely to Azure SQL Database using a user-assigned managed identity. They attempted to use the Microsoft ODBC Driver for SQL Server with the managed identity for authentication. During our troubleshooting we found several issues and error messages caused by incorrect settings in the connection string:

- The initial connection string used the 'ActiveDirectoryInteractive' authentication method, which is not compatible with managed identities. The correct approach is to use 'ActiveDirectoryMsi' or 'ActiveDirectoryManagedIdentity' for system- or user-assigned managed identities.
- Switching to 'ActiveDirectoryMsi' led to a pyodbc error: pyodbc.Error: (FA005, [FA005] [Microsoft][ODBC Driver 18 for SQL Server]Cannot use Access Token with any of the following options: Authentication, Integrated Security, User, Password. (0) (SQLDriverConnect)).
- The FA005 error indicated a mismatch between the use of an access token and the connection string properties. Specifically, when passing an access token, the connection string must not include conflicting authentication parameters such as User, Password, or Integrated Security.
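For completeness, the managed identity also needs a contained user in the target database before any of these authentication modes will work; a minimal sketch, assuming the user-assigned identity's display name is app-identity (replace it with your own) and that read/write access is sufficient:

-- Run while connected to the target database with an Entra ID admin account.
CREATE USER [app-identity] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [app-identity];
ALTER ROLE db_datawriter ADD MEMBER [app-identity];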
Lesson Learned #532: Power BI Refresh Failure Due to Connection Pool Exhaustion in Azure SQL Database

We've been working on a service request where a customer experienced frequent failures when refreshing Power BI reports connected to an Azure SQL Database. The error message indicated a problem with acquiring connections from the data source pool, leading to unsuccessful report refreshes. We found the following error message:

A request for a connection from the data source pool could not be granted. Retrying the evaluation may solve the issue. The exception was raised by the IDbCommand interface.

Analyzing the details of the case, we found that the issue occurred regardless of whether Entra ID or SQL authentication was used. There was no issue at the Azure SQL Database level and no login errors, but we identified a high number of simultaneous connection attempts from Power BI to Azure SQL Database. We also reviewed the configuration of Power BI Desktop and noted that it loads multiple tables in parallel during refresh operations. This behavior triggers a surge of concurrent connections to the database, which in this scenario resulted in exhaustion of the connection pool at the application layer.

We suggested reducing the parallel table loading setting in Power BI Desktop, under File > Options and settings > Options > Data Load > Parallel loading of tables. The customer later adjusted the setting to a higher value to find a balance between performance and stability.
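To confirm this kind of connection surge from the database side, a simple count of user sessions grouped by application can help; a minimal sketch using sys.dm_exec_sessions (run it during a report refresh and look for the entries coming from the Power BI host):

-- Count active user sessions per application and host.
SELECT s.program_name,
       s.host_name,
       COUNT(*) AS session_count
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
GROUP BY s.program_name, s.host_name
ORDER BY session_count DESC;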
Lesson Learned #530: Comparing Execution Plans to Expose a Hidden Performance Anti-Pattern

One of the most powerful features of SSMS Copilot is how it lets you compare execution plans and immediately shows you performance issues. In this case, I would like to share my lesson learned from comparing two queries and seeing how differently they behave inside the engine. We have the following queries, which use a table _x_y_z_MS_HighCPU that contains 4 million rows. The column TextToSearch is a varchar(200) data type.

-- Query 1
SELECT COUNT(*) FROM [MSxyzTest].[_x_y_z_MS_HighCPU] WHERE TextToSearch = N'Value: 9';

-- Query 2
SELECT COUNT(*) FROM [MSxyzTest].[_x_y_z_MS_HighCPU] WHERE TextToSearch = 'Value: 9';

Since the query texts are different, each will have a different query ID in Query Store. By running the following T-SQL, I can identify the query IDs:

SELECT qsqt.query_sql_text,
       qsq.query_id,
       qsp.plan_id,
       qsp.query_plan_hash,
       qsp.last_execution_time
FROM sys.query_store_query_text qsqt
JOIN sys.query_store_query qsq ON qsqt.query_text_id = qsq.query_text_id
JOIN sys.query_store_plan qsp ON qsq.query_id = qsp.query_id
WHERE qsqt.query_sql_text LIKE '%SELECT COUNT(*)%' -- FROM [MSxyzTest].[_x_y_z_MS_HighCPU]
ORDER BY qsp.last_execution_time DESC;

Queries 1 and 2 can then be compared directly. Using Copilot, I ran the following prompt:

Compare the execution plans for the two queries (query id 1 and query id 2) using Query Store. Highlight any differences in operators, estimated vs actual row counts, or implicit conversions.

Then I ran a second prompt for CPU usage:

Please show the top resource-consuming queries in the current database using Query Store data. Include query text, execution count, duration, CPU time, and logical reads.

The comparison clearly showed the impact of the anti-pattern: Query 1 compares the varchar column against an nvarchar literal (N'Value: 9'), which introduces an implicit conversion, while Query 2 avoids it.
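If you prefer to see the same difference in raw numbers rather than through a Copilot prompt, the Query Store runtime statistics views expose it directly; a minimal sketch (the query IDs 1 and 2 come from the lookup above and will differ in your database; Query Store reports duration and CPU time in microseconds):

SELECT qsq.query_id,
       qsp.plan_id,
       SUM(rs.count_executions)     AS total_executions,
       AVG(rs.avg_cpu_time)         AS avg_cpu_time_us,
       AVG(rs.avg_duration)         AS avg_duration_us,
       AVG(rs.avg_logical_io_reads) AS avg_logical_reads
FROM sys.query_store_query qsq
JOIN sys.query_store_plan qsp
    ON qsq.query_id = qsp.query_id
JOIN sys.query_store_runtime_stats rs
    ON qsp.plan_id = rs.plan_id
WHERE qsq.query_id IN (1, 2)   -- replace with the query IDs returned by the lookup
GROUP BY qsq.query_id, qsp.plan_id
ORDER BY avg_cpu_time_us DESC;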
Lesson Learned #528: Arithmetic overflow error converting IDENTITY to data type int

Some days ago, we were working on a service request where our customer was experiencing an issue with their application while reading a CSV file from Azure Blob Storage and writing the data into an Azure SQL Database table. They discovered that thousands of rows were being skipped. While analyzing the issue, we identified the following error message: Arithmetic overflow error converting IDENTITY to data type int.

After an investigation, we found the root cause:

- The target table in production had an INT-typed IDENTITY column.
- The identity value had reached the maximum value of an INT (2,147,483,647).
- The application was configured with SET IDENTITY_INSERT ON, so it was trying to explicitly insert identity values from the CSV file.
- When it attempted to insert a value above the INT limit, it failed with an arithmetic overflow.

How to reproduce the issue in SQL:

CREATE TABLE dbo.TestDemo
(
    ID INT IDENTITY(1,1) PRIMARY KEY,
    Name NVARCHAR(100)
);

-- Reseed so the next generated identity value is the INT maximum (2,147,483,647).
-- Only for testing...
DBCC CHECKIDENT ('dbo.TestDemo', RESEED, 2147483646);

SET IDENTITY_INSERT dbo.TestDemo ON;

INSERT INTO dbo.TestDemo (ID, Name) VALUES (2147483647, 'Last Row');
INSERT INTO dbo.TestDemo (ID, Name) VALUES (2147483648, 'Overflow');

SET IDENTITY_INSERT dbo.TestDemo OFF;
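To catch this before rows start being skipped, it helps to monitor how close each INT identity column is to the type's maximum; a minimal sketch using sys.identity_columns (the long-term fix is usually altering the column to BIGINT during a maintenance window):

-- Report INT identity columns and how much of the INT range they have consumed.
SELECT OBJECT_SCHEMA_NAME(ic.object_id) AS schema_name,
       OBJECT_NAME(ic.object_id)        AS table_name,
       ic.name                          AS column_name,
       CONVERT(bigint, ic.last_value)   AS last_value,
       CONVERT(decimal(5, 2),
               CONVERT(bigint, ic.last_value) * 100.0 / 2147483647) AS pct_of_int_max
FROM sys.identity_columns AS ic
JOIN sys.types AS t
    ON t.user_type_id = ic.user_type_id
WHERE t.name = 'int'
ORDER BY pct_of_int_max DESC;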