Windows Authentication for Cloud-Native Identities: Modernizing Azure SQL Managed Instance (Preview)
Organizations moving to the cloud often face a critical challenge: maintaining seamless authentication for legacy applications without compromising security or user experience. Today, we're excited to announce support for Windows Authentication for Microsoft Entra principals on Azure SQL Managed Instance, enabling cloud-native identities to authenticate using familiar Windows credentials.

Why This Matters

Traditionally, Windows Authentication relied on on-premises Active Directory, making it difficult for businesses adopting a cloud-only strategy to preserve existing authentication models. With this new capability:

- Hybrid Identity Support: Users synchronized between on-premises AD DS and Microsoft Entra ID can continue using a single set of credentials for both environments.
- Cloud-Only Identity (Preview): Identities that exist only in Microsoft Entra ID can now leverage Kerberos-based Windows Authentication for workloads like Azure SQL Managed Instance - without requiring domain controllers.

This means organizations can modernize infrastructure while maintaining compatibility with legacy apps, reducing friction during migration.

Key Benefits

- Seamless Migration: Move legacy applications to Azure SQL Managed Instance without rewriting authentication logic.
- Passwordless Security: Combine Windows Authentication with modern credentials like Windows Hello for Business or FIDO2 keys, enabling MFA and reducing password-related risks.
- Cloud-Native Integration: Microsoft Entra Kerberos acts as a cloud-based Key Distribution Center (KDC), issuing Kerberos tickets for cloud resources such as Azure SQL Managed Instance and Azure Files.

Breaking Barriers to Cloud Migration

Many enterprises hesitate to migrate legacy apps because they depend on Windows Authentication. By extending this capability to cloud-native identities, we remove a major barrier - allowing customers to modernize at their own pace while leveraging familiar authentication models.

Learn More

- https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/winauth-azuread-overview?view=azuresql
- Microsoft Entra Kerberos Overview
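On the instance side, the Microsoft Entra identity still needs a login before Entra-based authentication (including Windows Authentication for Entra principals) can map to it. A minimal sketch, assuming you are connected as the instance's Microsoft Entra admin and that alice@contoso.com is a placeholder principal:

```sql
-- Create a server login for a (placeholder) Microsoft Entra user.
CREATE LOGIN [alice@contoso.com] FROM EXTERNAL PROVIDER;

-- In the target user database: grant access and a role,
-- as with any other login.
CREATE USER [alice@contoso.com] FROM LOGIN [alice@contoso.com];
ALTER ROLE db_datareader ADD MEMBER [alice@contoso.com];
```

With the login in place, clients can connect with integrated authentication exactly as they did against on-premises SQL Server, while Microsoft Entra Kerberos issues the ticket.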
Generally Available: Azure SQL Managed Instance Next-gen General Purpose

Overview

Next-gen General Purpose is the evolution of the General Purpose service tier that brings significantly improved performance and scalability to power up your existing Azure SQL Managed Instance fleet and helps you bring more mission-critical SQL workloads to Azure. We are happy to announce that Next-gen General Purpose is now Generally Available (GA), delivering even more scalability, flexibility, and value for organizations looking to modernize their data platform in a cost-effective way.

The new #SQLMINextGen General Purpose tier delivers a built-in performance upgrade available to all customers at no extra cost. If you are an existing SQL MI General Purpose user, you get faster I/O, higher database density, and expanded storage - automatically.

Summary Table: Key Improvements

| Capability | Current GP | Next-gen GP | Improvement |
| --- | --- | --- | --- |
| Average I/O latency | 5-10 ms | 3-4 ms | 2x lower |
| Max data IOPS | 30-50K | 80K | 60% better |
| Max storage | 16 TB | 32 TB | 2x better |
| Max databases per instance | 100 | 500 | 5x better |
| Max vCores | 80 | 128 | 60% better |

But that's just the beginning. The new configuration sliders for additional IOPS and memory provide enhanced flexibility to tailor performance to your requirements. Whether you need more resources for your application or want to optimize resource utilization, you can adjust your instance settings to maximize efficiency and output. This release isn't just about speed - it's about giving you improved performance where it matters, and mechanisms to go further when you need them.

Customer story: A recent customer case highlights how Hexure reduced processing time by up to 97.2% using Azure SQL Managed Instance on Next-gen General Purpose.

What's new in Next-gen General Purpose (Nov 2025)?

1. Improved baseline performance with the latest storage tech

Azure SQL Managed Instance is built on Intel® Xeon® processors, ensuring a strong foundation for enterprise workloads. With the next-generation General Purpose tier, we've paired Intel's proven compute power with advanced storage technology to deliver faster performance, greater scalability, and enhanced flexibility - helping you run more efficiently and adapt to growing business needs.

The SQL Managed Instance General Purpose tier is designed with full separation of the compute and storage layers. The classic GP version uses premium page blobs for the storage layer, while the next-generation GP tier has transitioned to Azure's latest storage solution, Elastic SAN. Azure Elastic SAN is a cloud-native storage service that offers high performance and excellent scalability, making it a perfect fit for the storage layer of a data-intensive PaaS service like Azure SQL Managed Instance.

Simplified Performance Management

With ESAN as the storage layer, performance quotas for the Next-gen General Purpose tier are no longer enforced per database file. The entire performance quota for the instance is shared across all database files, making performance management much easier (one fewer thing to worry about). This adjustment brings the General Purpose tier into alignment with the Business Critical service tier experience.

2. Resource flexibility and cost optimization

The GA of Next-gen General Purpose comes together with the GA of a transformative memory slider, enabling up to 49 memory configurations per instance. This lets you right-size workloads for both performance and cost. Memory is billed only for the additional amount beyond the default allocation.
Users can independently configure vCores, memory, and IOPS for optimal efficiency. To learn more about the new option for configuring additional memory, see the article Unlocking More Power with Flexible Memory in Azure SQL Managed Instance.

3. Enhanced resource elasticity through decoupled compute and storage scaling operations

With Next-gen GP, both storage and IOPS can be resized independently of the compute infrastructure, and these changes now typically finish within five minutes - a process known as an in-place upgrade. There are three distinct types of storage upgrade experiences, depending on the kind of storage upgrade performed and whether failover occurs:

- In-place update: same storage (no data copy), same compute (no failover)
- Storage re-attach: same storage (no data copy), changed compute (with failover)
- Data copy: changed storage (data copy), changed compute (with failover)

The following matrix describes the user experience with management operations:

| Operation | Data copying | Failover | Storage upgrade type |
| --- | --- | --- | --- |
| IOPS scaling | No | No | In-place |
| Storage scaling* | No* | No | In-place |
| vCores scaling | No | Yes** | Re-attach |
| Memory scaling | No | Yes** | Re-attach |
| Maintenance window change | No | Yes** | Re-attach |
| Hardware change | No | Yes** | Re-attach |
| Update policy change | Yes | Yes | Data copy |

\* If scaling storage down by more than 5.5 TB, seeding (data copy) is performed.

\*\* For update operations that do not require seeding and are not completed in place (examples are scaling vCores, scaling memory, changing hardware or the maintenance window), the failover duration of databases on the Next-gen General Purpose service tier scales with the number of databases, up to 10 minutes. While the instance becomes available after 2 minutes, some databases might become available after a delay. Failover duration is measured from the moment the first database goes offline until the moment the last database comes online.

Furthermore, resizing vCores and memory is now 50% faster following the Faster scaling operations release. Whether you have end-of-month peak periods, or usage rises and falls across weekdays and weekends, fast and reliable management operations let you run multiple configurations over your instance and respond to peak usage periods in a cost-effective way.

4. Reserved instance (RI) pricing

With Azure Reservations, you can commit to using Azure SQL resources for either one or three years, which lets you benefit from substantial discounts on compute costs. When purchasing a reservation, you choose the Azure region, deployment type, performance tier, and reservation term. Reservations are only available for products that have reached general availability (GA), and with this update, next-generation GP instances now qualify as well. What's even better is that classic and next-gen GP share the same SKU, just with different remote storage types. This means any reservations you've purchased automatically apply to Next-gen GP, whether you're upgrading an existing classic GP instance or creating a new one.

What's Next?

The product group has received considerable positive feedback and welcomes continued input. The initial release does not include zonal redundancy; however, efforts are underway to address this limitation. Next-generation General Purpose (GP) represents the future of the service tier, and all existing classic GP instances will be upgraded accordingly. Once upgrade plans are finalized, we will provide timely communication regarding the announcement.
Conclusion

Now in GA, Next-gen General Purpose sets a new standard for cloud database performance and flexibility. Whether you're modernizing legacy applications, consolidating workloads, or building for the future, these enhancements put more power, scalability, and control in your hands - without breaking the bank.

If you haven't already, try out the Next-gen General Purpose capabilities with the Azure SQL Managed Instance free offer. If you run SQL Managed Instance on the General Purpose tier, consider upgrading your existing instances to take advantage of the next-gen upgrade - for free.

Welcome to #SQLMINextGen. Boosted by default. Tuned by you.

Learn more

- What is Azure SQL Managed Instance
- Try Azure SQL Managed Instance for free
- Next-gen General Purpose – official documentation
- Analyzing the Economic Benefits of Microsoft Azure SQL Managed Instance
- How 3 customers are driving change with migration to Azure SQL
- Accelerate SQL Server Migration to Azure with Azure Arc

Is practice Labs Enough for the AZ-305 Exam?
Hello everyone, just a quick question - how should I best prepare for the AZ-305 exam? Is retaking the Learning Path quizzes enough, or should I also practice with other types of tests? Any advice would be greatly appreciated. Thanks in advance!

Step-by-Step Guide: Route Azure SQL Audit Logs to Multiple Log Analytics Workspaces
Scenario: Many organizations need to route audit logs from Azure SQL Database to more than one Log Analytics workspace. For example, your security team may use Microsoft Sentinel in one workspace, while your application team analyzes logs in another. Azure now makes this possible - here's how to set it up, and what to watch out for.

Why Send Audit Logs to Multiple Workspaces?

- Separation of Duties: Security and application teams can access the logs they need, independently.
- Integration with Different Tools: Sentinel may use one workspace for SIEM, while app teams use another for analytics.
- Compliance and Regional Needs: Some organizations must store logs in different regions or workspaces for regulatory reasons.

Step-by-Step Guide

1. Enable Auditing to a Log Analytics Workspace
   - Go to your Azure SQL server in the Azure portal.
   - Under Security, select Auditing.
   - Set the audit destination to your primary Log Analytics workspace, then click Save.
   - Tip: Enabling auditing here automatically creates a diagnostic setting for the selected workspace.

2. Add Diagnostic Settings for Additional Workspaces
   - In the Azure portal, search for Diagnostic settings.
   - Select your subscription and the master database of your SQL server to create the diagnostic setting at the server level.
   - Click + Add diagnostic setting.
   - Name your setting (e.g., "AuditToAppWorkspace").
   - Under Logs, select SQLSecurityAuditEvents (uncheck DevOpsAudit if not needed).
   - Choose an additional Log Analytics workspace as the destination.
   - Click Save.
   - Note: You can repeat this step to send audit logs to as many workspaces as needed.

Example Use Case

A customer uses:
- Workspace A for Microsoft Sentinel (security monitoring)
- Workspace B for application analytics

By configuring multiple diagnostic settings, both teams receive the audit data they need - no manual exports required.

Summary

Configuring multiple diagnostic settings allows you to send Azure SQL Database audit logs to several Log Analytics workspaces. This is essential for organizations with different teams or compliance needs. Remember:
- Enable auditing first
- Add diagnostic settings for each workspace
- Monitor cost and avoid duplicate logs

References:
- https://learn.microsoft.com/en-us/azure/azure-sql/database/auditing
- https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/diagnostic-settings

ABORT_QUERY_EXECUTION query hint - public preview
We are pleased to announce the public preview of a new query hint, ABORT_QUERY_EXECUTION. The hint is intended to be used as a Query Store hint to let administrators block future execution of known problematic queries, for example non-essential queries causing high resource consumption and affecting application workloads.

The hint is now available in Azure SQL Database for all databases without restrictions. The hint will later be available in Azure SQL Managed Instance with the always-up-to-date update policy, as well as in a future version of SQL Server. For more information, see Block future execution of problematic queries in the documentation.

Update 2025-10-06: The ABORT_QUERY_EXECUTION hint is now generally available.

Frequently Asked Questions

Is this supported by Microsoft Support during public preview?
Yes, just like other query hints.

How do I use this?
Use Query Store catalog views or the Query Store UI in SSMS to find the query ID of the query you want to block, and execute sys.sp_query_store_set_hints specifying that query ID as a parameter. For example:

```sql
EXEC sys.sp_query_store_set_hints
    @query_id = 17,
    @query_hints = N'OPTION (USE HINT (''ABORT_QUERY_EXECUTION''))';
```

What happens when a query with this hint is executed?
This hint is intended to be used as a Query Store hint but can be specified directly as well. In either case, the query fails immediately with error 8778, severity 16: "Query execution has been aborted because the ABORT_QUERY_EXECUTION hint was specified."

How do I unblock a query?
Remove the hint by executing sys.sp_query_store_clear_hints with the query ID value of the query you want to unblock passed via the @query_id parameter.

Can I block a query that is not in Query Store?
No. At least one execution of the query must be recorded in Query Store. That query execution does not have to be successful, which means that a query that started executing but was canceled or timed out can be blocked too.

When I add the hint, does it abort any currently executing queries?
No. The hint only aborts future query executions. You can use KILL to abort currently executing queries.

What permissions are required to use this?
As with all other Query Store hints, the ALTER permission on the database is required to set and clear the hint.

Can I block all queries matching a query hash?
Not directly. As with all other Query Store hints, you must use a query ID to set and clear a hint. However, you can create automation that periodically finds all new query IDs matching a given query hash and blocks them (see the sketch at the end of this post).

Can I find all blocked queries in Query Store?
Yes, by executing the following query:

```sql
SELECT qsh.query_id, q.query_hash, qt.query_sql_text
FROM sys.query_store_query_hints AS qsh
INNER JOIN sys.query_store_query AS q
    ON qsh.query_id = q.query_id
INNER JOIN sys.query_store_query_text AS qt
    ON q.query_text_id = qt.query_text_id
WHERE UPPER(qsh.query_hint_text) LIKE '%ABORT[_]QUERY[_]EXECUTION%';
```

Where do I send feedback about this hint?
The preferred feedback channel is via https://aka.ms/sqlfeedback. Feedback sent that way is public and can be voted and commented on by other SQL community members. You can also leave comments on this blog post or email us at intelligentqp@microsoft.com.
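Following up on the query-hash FAQ above: a minimal sketch of such automation, assuming a known offending query hash (the binary literal below is a placeholder; read the real value from sys.query_store_query). It finds matching query IDs that do not yet have a hint and applies ABORT_QUERY_EXECUTION to each:

```sql
-- Hypothetical automation: block every Query Store query whose
-- query_hash matches a known "bad" hash.
DECLARE @bad_hash binary(8) = 0x8B4C5A2F9D1E3C77;  -- placeholder value
DECLARE @query_id bigint;

DECLARE hash_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT q.query_id
    FROM sys.query_store_query AS q
    WHERE q.query_hash = @bad_hash
      -- Skip queries that already have a hint set (a simplification;
      -- adjust this filter if you use other Query Store hints).
      AND NOT EXISTS (SELECT 1
                      FROM sys.query_store_query_hints AS h
                      WHERE h.query_id = q.query_id);

OPEN hash_cursor;
FETCH NEXT FROM hash_cursor INTO @query_id;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sys.sp_query_store_set_hints
        @query_id = @query_id,
        @query_hints = N'OPTION (USE HINT (''ABORT_QUERY_EXECUTION''))';
    FETCH NEXT FROM hash_cursor INTO @query_id;
END
CLOSE hash_cursor;
DEALLOCATE hash_cursor;
```

A scheduler of your choice (for example, an Elastic Job in Azure SQL Database) could run this periodically to catch new query IDs as they appear.

Passed AZ-500 Exam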
I have passed my exam. Initially, I found AZ-500 exam preparation difficult, but later I got https://www.p2pexams.com/products/az-500 material from P2PExams, and it was a great help for me. The AZ-500 study material was well designed by professionals and helped me understand all the key concepts relevant to the actual exam. On exam day, I easily attempted all the questions and got a brilliant result. I especially thank P2PExams - it was really a great resource.

Stream data in near real time from SQL to Azure Event Hubs - Public preview
If near-real-time integration is something you are looking to implement, and you were looking for a simpler way to get the data out of SQL, keep reading. SQL is making it easier to integrate, and Change Event Streaming is a feature continuing this trend.

Modern applications and analytics platforms increasingly rely on event-driven architectures and real-time data pipelines. As businesses speed up, real-time decisioning is becoming especially important. Traditionally, capturing changes from a relational database requires complex ETL jobs, periodic polling, or third-party tools. These approaches often consume significant cycles on the data source, introduce operational overhead, and pose challenges with scalability, especially if you need one data source to feed into multiple destinations.

In this context, we are happy to release the Change Event Streaming ("CES") feature into public preview for Azure SQL Database. This feature enables you to stream row-level changes - inserts, updates, and deletes - from your database directly to Azure Event Hubs in near real time.

Change Event Streaming addresses the above challenges by:

- Reducing latency: Changes are streamed (pushed by SQL) as they happen. This is in contrast with traditional CDC (change data capture) or CT (change tracking) based approaches, where an external component needs to poll SQL at regular intervals. Traditional approaches allow you to increase polling frequency, but it gets difficult to find a sweet spot between minimal latency and minimal overhead due to too-frequent polls.
- Simplifying architecture: No need for Change Data Capture (CDC), Change Tracking, custom polling, or external connectors - SQL streams directly to the configured destination. This means a simpler security profile (fewer authentication points), fewer failure points, easier monitoring, and a lower skill bar to deploy and run the service. There is no need to worry about cleanup jobs: SQL keeps track of which changes are successfully received by the destination, handles the retry logic, and releases the log truncation point. Finally, with CES you have fewer components to procure and get approved for production use.
- Decoupling: The integration is done at the database level. This eliminates the problem of dual writes - the changes are streamed at transaction boundaries, once your source of truth (the database) has saved the changes. You do not need to modify your app workloads to get the data streamed - you tap right into the data layer. This is useful if your apps are dated and do not possess real-time integration capabilities; with some third-party apps, you may not even have an option to do anything other than database-level integration, and CES makes it simpler. Also, the publishing database does not concern itself with the final destination for the data: stream the data once to the common message bus, and it can be consumed by multiple downstream systems, irrespective of their number or capacity - the number of consumers does not affect publishing load on the SQL side. Serving consumers is handled by the message bus, Azure Event Hubs, which is purpose-built for high-throughput data transfers.

[Figure: conceptual data flow from SQL Server, with an arrow towards Azure Event Hubs, from which arrows point to different final destinations.]

Key Scenarios for CES

- Event-driven microservices: Microservices need to exchange data, typically through a common message bus. With CES, you can have automated data publishing from each of the microservices.
This allows you to trigger business processes immediately when data changes.
- Real-time analytics: Stream operational data into platforms like Fabric Real-Time Intelligence or Azure Stream Analytics for quick insights.
- Breaking down the monoliths: Typical monolithic systems with complex schemas, sitting on top of a single database, can be broken down one piece at a time: create a new component (typically a microservice), set up streaming from the relevant tables in the monolith database, and tap into the stream from the new component. You can then test-run the components, validate the results against the original monolith, and cut over once you build confidence that the new component is stable.
- Cache and search index updates: Keep distributed caches and search indexes in sync without custom triggers.
- Data lake ingestion: Capture changes continuously into storage for incremental processing.
- Data availability: This is not a scenario per se, but the amount of data you can tap into for business process mining, or intelligence in general, goes up whenever you plug another database into the message bus. For example, you plug your eCommerce system into the message bus to integrate with shipping providers, and consequently the same data stream is immediately available for any other system to tap into.

How It Works

CES uses transaction log-based capture to stream changes with minimal impact on your workload. Events are published in a structured JSON format following the CloudEvents standard, including operation type, primary key, and before/after values (a rough illustration appears at the end of this post). You can configure CES to target Azure Event Hubs via AMQP or Kafka protocols. For details on configuration, message format, and FAQs, see the official documentation:

- Feature Overview
- CES: Frequently Asked Questions

Get Started

Public preview: CES is available today in public preview for Azure SQL Database and as a preview feature in SQL Server 2025.

Private preview: CES is also available as a private preview for Azure SQL Managed Instance and Fabric SQL database; you can request to join the private preview by signing up here: https://aka.ms/sql-ces-signup

We encourage you to try the feature out and start building real-time integrations on top of your existing data. We welcome your feedback - please share your experience through the Azure Feedback portal or support channels. The comments below on this blog post will also be monitored, if you want to engage with us. Finally, the CES team can be reached via email: sqlcesfeedback [at] microsoft [dot] com.

Useful resources

- Free Azure SQL Database
- Free Azure SQL Managed Instance
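As referenced under How It Works, here is a purely illustrative sketch of what a CES change event might look like. The envelope attributes (specversion, id, source, type, time, datacontenttype) come from the CloudEvents standard, but every value and the payload field names below are hypothetical; consult the CES message-format documentation for the authoritative schema.

```json
{
  "specversion": "1.0",
  "id": "00000000-0000-0000-0000-000000000000",
  "source": "<identifies the publishing server and database>",
  "type": "<event type string assigned by CES for an update>",
  "time": "2025-11-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "operation": "UPDATE",
    "table": "dbo.Orders",
    "primarykey": { "OrderId": 42 },
    "before": { "Status": "Pending" },
    "after": { "Status": "Shipped" }
  }
}
```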
Azure SQL Database LTR Backup Immutability is now Generally Available

Azure SQL Database is a fully managed, always-up-to-date relational database service built for mission-critical apps. It delivers built-in high availability, automated backups, and elastic scale, with strong security and compliance capabilities. Today, I am very excited to announce the general availability of immutability for Azure SQL DB LTR backups!

Azure SQL Database now supports immutable long-term retention (LTR) backups, stored in a write-once, read-many (WORM) state for a fixed, customer-configured period. That means your LTR backups cannot be modified or deleted during the lock window - even by highly privileged identities - helping you preserve clean restore points after a cyberattack and strengthen your compliance posture.

Why this matters: ransomware targets backups

Modern ransomware playbooks don't stop at encrypting production data - they also attempt to alter or delete backups to block recovery. With backup immutability, Azure SQL Database LTR backups are written to immutable storage and locked for the duration you specify, providing a resilient, tamper-proof recovery layer so you can restore from a known-good copy when it matters most.

What we're announcing

General availability of backup immutability for long-term retention (LTR) backups in Azure SQL Database. This GA applies to Azure SQL Database LTR backups.

What immutability does (and doesn't) do

- Prevents changes and deletion of LTR backup artifacts for a defined, locked period (WORM). This protection applies even to highly privileged identities, reducing the risk from compromised admin accounts or insider misuse.
- Helps address regulatory WORM expectations, supporting customers who must retain non-erasable, non-rewritable records (for example, requirements under SEC Rule 17a-4(f), FINRA Rule 4511(c), and CFTC Rule 1.31(c)-(d)). Always consult your legal/compliance team for your specific obligations.
- Complements a defense-in-depth strategy - it's not a replacement for identity hygiene, network controls, threat detection, and recovery drills. See Microsoft's broader ransomware guidance for Azure.

How it works (at a glance)

When you enable immutability on an LTR policy, Azure SQL Database stores those LTR backups on Azure immutable storage in a WORM state. During the lock window, the backup cannot be modified or deleted; after the lock expires, normal retention and deletion apply per your policy.

Key benefits

- Ransomware-resilient recovery: Preserve clean restore points that attackers can't tamper with during the lock period.
- Compliance-ready retention: Use WORM-style retention to help meet industry and regulatory expectations for non-erasable, non-rewritable storage.
- Operational simplicity: Manage immutability alongside your existing Azure SQL Database long-term retention policies.

Get started

1. Choose databases that require immutable LTR backups.
2. Enable immutability on the LTR backup policy and set the retention/lock period aligned to your regulatory and risk requirements.
3. Validate recovery by restoring from an immutable LTR backup.

Documentation: Learn more about backup immutability for LTR backups in Azure SQL Database on Microsoft Learn.

Tell us what you think

We'd love your feedback on scenarios, guidance, and tooling that would make immutable backups even easier to adopt.
Share your experiences and suggestions in the Azure SQL community forums and let us know how immutability is helping your organization raise its cyber-resilience.

General Availability announcement of Backup/Restore capabilities in SQL Server 2025
Over the past several months, we've heard from countless customers who are eager for more robust options to protect, compress, and safeguard their SQL Server data. Since introducing these features in public preview, organizations of all sizes have validated their value in real-world workloads and provided invaluable feedback. Today, we are thrilled to announce the general availability (GA) of three powerful SQL Server 2025 features: Backups on Secondary for Always On Availability Groups, ZSTD Compression, and Immutable Backups for Ransomware Protection. These advancements are now ready for production use, built around the needs and requests of SQL Server customers.

Backups on Secondary for SQL Server Always On Availability Groups

Previously in preview, this feature now reaches GA, allowing you to offload backup operations to secondary replicas in Always On Availability Groups. This enhancement optimizes resource utilization and minimizes overhead on primary replicas, ensuring better performance for mission-critical workloads.

What's New in GA: Improved reliability and support for production environments.

Key Benefits:
- Comprehensive support: Full, differential, and transaction log backups are now fully supported on secondary replicas - not just COPY_ONLY and transaction logs.
- Reduced impact on primary replica performance.
- Simplified high-availability strategies.

Learn more from the original announcement: Introducing Backups on Secondary for SQL Server Always On Availability Groups.

ZSTD Compression in SQL Server 2025

The GA of ZSTD compression brings modern, efficient data compression to SQL Server, with industry-leading performance and efficiency that let you save storage and speed up workloads.

What's New in GA: Full production support for ZSTD across key workloads.

Key Benefits:
- Faster compression and decompression compared to legacy algorithms.
- Lower storage footprint without sacrificing performance.
- Choose your algorithm: ZSTD is now a standard option right alongside MS_XPRESS for row, page, and backup compression.
- Tunable compression levels: Administrators can select from LOW, MEDIUM, or HIGH to balance resource use and savings.
- Ideal for large-scale data environments.

Explore the preview details: ZSTD Compression in SQL Server 2025.

Backups to immutable storage: A Powerful Shield Against Ransomware

Your backups are now safer than ever. Thanks to native support for immutability with Azure Blob Storage, backup files can be rendered tamper-proof, protecting them from ransomware or even accidental deletion.

Key Benefits:
- Strong defense against ransomware and malicious tampering.
- Compliance with regulatory requirements for data integrity.
- Peace of mind for critical backup strategies.

Read the detailed use case and how-to: Immutability: A Powerful Shield Against Ransomware in SQL Environments.

These features collectively empower organizations to:
- Optimize performance and resource utilization.
- Reduce operational costs through efficient compression.
- Strengthen security posture against evolving threats.

Get Started Today

These features are available to all SQL Server 2025 customers. Ready to elevate your data protection, efficiency, and compliance posture?
- Access the official SQL Server 2025 documentation for step-by-step guides on Microsoft Learn.
- Review upgrade guidance and best practices.
- Explore real-world configurations and FAQs.

Upgrade now and unlock the next level of resilience, efficiency, and security for your SQL Server workloads!
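To make the backup-side pieces concrete, here is a minimal T-SQL sketch combining ZSTD compression with a backup to immutable Azure Blob Storage. It assumes the COMPRESSION (ALGORITHM/LEVEL) clause as described in the ZSTD announcement linked above (verify the exact syntax against current documentation); all names, URLs, and the SAS secret are placeholders:

```sql
-- One-time setup: a credential for the target container (placeholder values).
CREATE CREDENTIAL [https://youraccount.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<sas-token>';

-- Back up with ZSTD compression to Azure Blob Storage.
-- If the container has an immutability (WORM) policy, existing blobs
-- cannot be overwritten, so each backup must target a new blob name.
BACKUP DATABASE [YourDatabase]
TO URL = 'https://youraccount.blob.core.windows.net/sqlbackups/YourDatabase_20251101.bak'
WITH COMPRESSION (ALGORITHM = ZSTD, LEVEL = HIGH),
     CHECKSUM,
     STATS = 10;
```

With the GA of backups on secondary, the same BACKUP DATABASE statement (full, differential, or log) can now be run against a secondary replica of an availability group, keeping this work off the primary.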