Azure Data Lake Storage
TLS 1.0 and 1.1 support will be removed for new & existing Azure storage accounts starting Feb 2026
To meet evolving technology and regulatory needs and to align with security best practices, we are removing support for Transport Layer Security (TLS) 1.0 and 1.1 for both existing and new storage accounts in all clouds. TLS 1.2 will be the minimum supported TLS version for Azure Storage starting February 2026.

Azure Storage currently supports TLS 1.0 and 1.1 (for backward compatibility) as well as TLS 1.2 on public HTTPS endpoints. TLS 1.2 is more secure and faster than the older versions, which do not support modern cryptographic algorithms and cipher suites. Many Azure Storage customers are already using TLS 1.2, and we are sharing this guidance to expedite the transition for customers still on TLS 1.0 and 1.1. Customers must secure their infrastructure by using TLS 1.2 or higher with Azure Storage by January 31, 2026. The older TLS versions (1.0 and 1.1) are being deprecated and removed to meet evolving standards (FedRAMP, NIST) and to provide improved security for our customers.

This change will impact both existing and new storage accounts that use TLS 1.0 and 1.1. To avoid disruptions to your applications connecting to Azure Storage, you must migrate to TLS 1.2 and remove dependencies on TLS 1.0 and 1.1 by January 31, 2026. Learn more about how to migrate to TLS 1.2. As a best practice, we also recommend using Azure Policy to enforce a minimum TLS version; learn more here about how to enforce a minimum TLS version for all incoming requests. If you already use Azure Policy to enforce the TLS version, the minimum supported version after this change rolls out will be TLS 1.2.
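If you prefer to set the account-level floor directly rather than through policy, the sketch below uses the Azure SDK for Python (azure-mgmt-storage) to raise an account's minimum TLS version and read it back. The subscription, resource group, and account names are placeholders, not values from this announcement.

```python
# pip install azure-identity azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountUpdateParameters

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder values
RESOURCE_GROUP = "my-resource-group"
ACCOUNT_NAME = "mystorageaccount"

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Make TLS 1.2 the floor; requests negotiating TLS 1.0/1.1 will be rejected.
client.storage_accounts.update(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    StorageAccountUpdateParameters(minimum_tls_version="TLS1_2"),
)

# Read the property back to confirm the change.
account = client.storage_accounts.get_properties(RESOURCE_GROUP, ACCOUNT_NAME)
print(f"{ACCOUNT_NAME}: minimum TLS version = {account.minimum_tls_version}")
```

Note that this controls what the service accepts; client applications must also run on a TLS 1.2-capable stack.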
Help and Support

If you have questions, get answers from community experts in Microsoft Q&A. If you have a support plan and you need technical help, create a support request:
- For Issue type, select Technical.
- For Subscription, select your subscription.
- For Service, select My services.
- For Service type, select Blob Storage.
- For Resource, select the Azure resource you are creating a support request for.
- For Summary, type a description of your issue.
- For Problem type, select Connectivity.
- For Problem subtype, select Issues using TLS.

Protect Azure Data Lake Storage with Vaulted Backups

We are thrilled to announce a limited public preview of vaulted backups for Azure Data Lake Storage. The preview is available now for test workloads, and we'd like to get your feedback. Vaulted backups are secure, encrypted copies of your data that enable restoration to an alternate location in cases of accidental or malicious deletion. They are fully isolated from the source data, ensuring continuity for your business operations even in scenarios where the source data is compromised. This fully managed solution leverages the Azure Backup service, with automated retention and scheduling: you create a backup policy that defines a backup schedule and retention period, and based on this policy the Azure Backup service generates recovery points and manages the lifecycle of backups seamlessly.

Ways vaulted backups protect your data:
- Isolation from production data – Vaulted backups are stored in a separate, Microsoft-managed tenant, preventing attackers from accessing both primary and backup data.
- Strict access controls – Backup management requires distinct permissions, ensuring segregation of duties and reducing insider threats.
- Advanced security features – With features like soft delete, immutability, and encryption, vaulted backups safeguard data against unauthorized modifications and premature deletions. Even if attackers compromise the primary storage account, backups remain secure within the vault, preserving data integrity and ensuring compliance.
- Alternate location recovery – Vaulted backups enable restoration to an alternate storage account, ensuring business continuity even when the original account is inaccessible. This capability also lets organizations create separate data copies for purposes such as testing, development, or analytics, without disrupting production environments.
- Granular recovery – You can restore the entire storage account or specific containers based on your needs, and use prefix matching to recover select blobs.

With the growing frequency and sophistication of cyberattacks, protecting your data against loss or corruption is more critical than ever. Consider the following example use cases where vaulted backups can save the day.

Enhanced Protection Against Ransomware Attacks

Ransomware attacks can encrypt critical data, complicating recovery unless a ransom is paid. Vaulted backups offer an independent and secure recovery path, allowing you to restore data without succumbing to attackers' demands.

Accidental or Malicious Storage Account Deletion

Human errors, insider threats, or compromised credentials can result in the deletion of entire storage accounts. Vaulted backups provide a crucial layer of protection by storing backups in Microsoft-managed storage, independent of your primary storage account. This ensures that an additional copy of your data remains intact even if the original storage account is accidentally or maliciously deleted.

Compliance Regulations

Certain industries mandate offsite backups and long-term data retention to meet regulatory standards. Vaulted backups help organizations comply by offering offsite backup storage within the same Azure region as the primary storage account, with retention of up to 10 years.

Getting started

To enroll in the preview, fill out this form. For more details, refer to this article.
Vaulted backups can be configured for block blobs within HNS-enabled, standard general-purpose v2 ADLS storage accounts in the regions specified here; support for additional regions will be added incrementally. Currently, this preview is recommended exclusively for testing purposes. The Azure Backup protected instance fee and the vaulted backup storage fees are not currently charged, so now is a great time to give vaulted backups a try!
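Once backups are flowing, it can be useful to confirm that scheduled recovery points are actually being produced. The following is a minimal sketch, assuming the azure-mgmt-dataprotection package and hypothetical vault and resource group names; check the current SDK reference before relying on exact property names.

```python
# pip install azure-identity azure-mgmt-dataprotection
from azure.identity import DefaultAzureCredential
from azure.mgmt.dataprotection import DataProtectionMgmtClient

SUBSCRIPTION_ID = "<subscription-id>"   # hypothetical values
RESOURCE_GROUP = "backup-rg"
VAULT_NAME = "adls-backup-vault"

client = DataProtectionMgmtClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Enumerate the datasources protected by the vault...
for instance in client.backup_instances.list(RESOURCE_GROUP, VAULT_NAME):
    print(f"{instance.name}: {instance.properties.current_protection_state}")
    # ...and the recovery points Azure Backup has generated for each one.
    for rp in client.recovery_points.list(RESOURCE_GROUP, VAULT_NAME, instance.name):
        print("  recovery point:", rp.name)
```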
Contact us

If you have questions or feedback, please reach out to us at AskAzureBackupTeam@microsoft.com.

Microsoft Purview Protection Policies for Azure Data Lake & Blob Storage Available in All Regions

Organizations today face a critical challenge: ensuring consistent and automated data governance across rapidly expanding data estates. Driven by the growth of AI and the increasing reliance on vast data volumes for model training, Chief Data Officers (CDOs) and Chief Information Security Officers (CISOs) must prevent unintentional exposure of sensitive data (PII, credit card information) while adhering to data and legal regulations.

Many organizations rely on Azure Blob Storage and ADLS for storing vast amounts of data, as they offer scalable, secure, and highly available cloud storage. While solutions like RBAC (role-based access control), ABAC (attribute-based access control), and ACLs (access control lists) offer secure ways to manage data access, they operate on metadata such as file paths, tags, or container names. These mechanisms are effective for implementing restrictive data governance by controlling who can access specific files or containers. However, there are scenarios where automatic access controls based on the sensitivity of the content itself are necessary. For example, identifying and protecting sensitive information like credit card numbers within a blob requires more granular control. Ensuring that sensitive content is restricted to specific roles and applications across the organization is crucial, especially as enterprises focus on building new applications and infusing AI into current solutions.

This is where integrated solutions like Microsoft Information Protection (MIP) come into play. MIP protection policies enable organizations to scan and label data based on the content stored in the blob, so that access controls tied directly to the data asset's content can be applied across storage accounts. By eliminating the need for in-house scanning and labeling, MIP streamlines compliance and helps apply consistent data governance from a centralized solution.

The Solution: Microsoft Purview Information Protection (MIP) Protection Policies for Governance & Compliance

Microsoft Purview Information Protection (MIP) provides an efficient and centralized approach to data protection by automatically restricting access to storage data assets based on sensitivity labels discovered through automated scanning, leveraging protection policies (learn more). This feature builds upon Microsoft Purview's existing capability (learn more) to scan and label sensitive data assets. It not only enhances data governance but also ensures that data is managed in a way that protects sensitive information, reducing the risk of unauthorized access and maintaining the security and trust of customers.

Enhancing Data Governance with MIP Protection Policies

Contoso, a multinational corporation, handles large volumes of data stored in Azure Storage (Blob/ADLS). Different users, such as financial auditors, legal advisors, compliance officers, and data analysts, need access to different blobs in the storage account. These blobs are updated daily with new content, and sensitive data can appear in any of them. Given the diverse nature of the stored data, Contoso needed an access control method that could restrict access based on data asset sensitivity. For instance, data analysts access the blob named "logs" where log files are uploaded.
If these files come to contain PII or financial data that should only be accessed by financial officers, the access permissions need to be updated dynamically as the sensitivity of the stored data changes. MIP protection policies address this challenge by automatically limiting access to data based on sensitivity labels found through automated scanning.

Key benefits:
- Auto-labelling: Automatically apply sensitivity labels to Azure Storage based on detection of sensitive information types.
- Automated protection: Automatically restrict access to data with specific sensitivity labels, ensuring consistent data protection. Storage data owners can selectively enable specific storage accounts for policy enforcement, providing flexibility and control. For example, a protection policy can restrict access to data labeled "Highly Confidential" to specific groups or users; in Contoso's case, blobs labeled "logs" were accessible only to data analysts. With MIP, labels are updated as content changes, and the protection policy can deny access whenever "Highly Confidential" data is identified in the content.
- Enterprise-level control: Information protection policies are applied to blobs and resource sets, ensuring that only authorized Microsoft Entra ID users or M365 user groups can access sensitive data. Unauthorized users are prevented from reading the blob or resource set.
- Centralized policy management: Create, manage, and enforce protection policies across Azure Storage from a single, unified interface in Microsoft Purview. Enterprise admins have granular control over which storage accounts enforce protection coverage based on the account's sensitivity label.

By using Microsoft Purview Information Protection (MIP) protection policies, Contoso was able to achieve secure and consistent data governance and centralized policy management, effectively addressing its data security challenges.

Prerequisites

Microsoft 365 E5 licenses and setup of the pay-as-you-go billing model are required. To understand pay-as-you-go billing by assets protected, see the pay-as-you-go billing model. For information about the specific licenses required, see this information on sensitivity labels. Microsoft 365 E5 trial licenses can be obtained for your tenant by navigating here from your environment.

Getting Started

The public preview of protection policies supports the following Azure Storage services:
- Azure Blob Storage
- Azure Data Lake Storage

To enable protection policies for your Azure Storage accounts:
1. Navigate to the Microsoft Purview portal > Information Protection card > Policies.
2. Configure or use an existing sensitivity label in Microsoft Purview Information Protection that is scoped to "Files & other data assets".
3. Create an auto-labelling policy to apply a specific sensitivity label to scoped assets in Azure Storage based on the Microsoft out-of-the-box sensitive info types detected.
4. Run scans on the assets for auto-labelling to apply.
5. Create a protection policy and associate it with your desired sensitivity labels.
6. Apply the policy to your Azure Blob Storage or ADLS Gen2 accounts.

A client-side sketch of what enforcement looks like follows the steps above.
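From an application's perspective, a policy denial surfaces as an authorization error on the read. The snippet below is an illustrative sketch using the azure-storage-blob SDK; the account, container, and blob names are hypothetical, and the exact error details can vary.

```python
# pip install azure-identity azure-storage-blob
from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

# Hypothetical names for illustration only.
blob = BlobClient(
    account_url="https://contosodata.blob.core.windows.net",
    container_name="finance",
    blob_name="logs/app.log",
    credential=DefaultAzureCredential(),
)

try:
    content = blob.download_blob().readall()
    print(f"Read {len(content)} bytes; the caller is authorized for this label.")
except HttpResponseError as err:
    # A protection policy denial is returned as an authorization failure (HTTP 403).
    print(f"Access denied: {err.status_code} {err.reason}")
```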
Limitations

During the public preview, please note the following limitations:
- Currently, a maximum of 10 storage accounts are supported in one protection policy, and they must be selected under Edit to be enabled.
- Changing pattern rules will re-apply labels on all storage accounts.
- There might be delays in label synchronization, which could prevent MIP policies from functioning effectively.
- If a storage account has customer-managed keys (CMK) enabled, the MIP policy will not work for that account.

Next Steps

With the public preview, MIP protection policies are now available in all regions, and any storage account registered in the Microsoft Purview Data Map can create and apply protection policies to implement a consistent data governance strategy across data in Azure Storage. We encourage you to try out this feature and provide feedback. Your input is crucial in shaping this feature as we work towards general availability.
Control geo failover for ADLS and SFTP with unplanned failover

We are excited to announce the general availability of customer-managed unplanned failover for Azure Data Lake Storage and for storage accounts with SSH File Transfer Protocol (SFTP) enabled.

What is Unplanned Failover?

With customer-managed unplanned failover, you are in control of initiating your failover. Unplanned failover allows you to switch your storage endpoints from the primary region to the secondary region. During an unplanned failover, write requests are redirected to the secondary region, which then becomes the new primary region. Because an unplanned failover is designed for scenarios where the primary region is experiencing an availability issue, it happens without the primary region fully completing replication to the secondary region. As a result, an unplanned failover can incur data loss, depending on how much data has yet to be replicated from the primary region to the secondary region. Each storage account has a 'last sync time' property, which indicates the last time a full synchronization between the primary and the secondary region was completed. Any data written between the last sync time and the current time may be only partially replicated to the secondary region, which is why unplanned failover may incur data loss.

Unplanned failover is intended to be used during a true disaster where the primary region is unavailable. Therefore, once it completes, the data in the original primary region is erased, the account is changed to locally redundant storage (LRS), and your applications can resume writing data to the storage account. If the previous primary region becomes available again, you can convert your account back to geo-redundant storage (GRS). Migrating your account from LRS to GRS initiates a full data replication from the new primary region to the secondary, which incurs geo-bandwidth costs.

If your scenario involves failing over while the primary region is still available, consider planned failover. Planned failover can be used in scenarios including planned disaster recovery testing or recovering from non-storage-related outages. Unlike unplanned failover, the storage service endpoints must be available in both the primary and secondary regions before a planned failover can be initiated. This is because planned failover is a three-step process: (1) the current primary is made read-only, (2) all data is synced to the secondary (ensuring no data loss), and (3) the primary and secondary regions are swapped so that writes go to the new region. In contrast with unplanned failover, planned failover maintains the geo-redundancy of the account, so planned failback does not require a full data copy. To learn more about planned failover and how it works, view Public Preview: Customer Managed Planned Failover for Azure Storage | Microsoft Community Hub. To learn more about each failover option and the primary use case for each, view Azure storage disaster recovery planning and failover - Azure Storage | Microsoft Learn.
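Before initiating an unplanned failover, checking the account's last sync time tells you how large the potential data-loss window is. A sketch with the azure-mgmt-storage Python SDK (resource names are placeholders) might look like this:

```python
# pip install azure-identity azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder values
RESOURCE_GROUP = "my-resource-group"
ACCOUNT_NAME = "mygrsaccount"

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Expand geoReplicationStats to retrieve the 'last sync time' property.
account = client.storage_accounts.get_properties(
    RESOURCE_GROUP, ACCOUNT_NAME, expand="geoReplicationStats"
)
stats = account.geo_replication_stats
print(f"Replication status: {stats.status}; last sync time: {stats.last_sync_time}")

# Writes made after last_sync_time may be lost. Initiate the failover only
# during a true outage, once that window is acceptable.
poller = client.storage_accounts.begin_failover(RESOURCE_GROUP, ACCOUNT_NAME)
poller.result()  # blocks until the secondary has become the new primary
print("Failover complete; the account is now LRS in the former secondary region.")
```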
How to get started?

Getting started is simple. To learn more about the step-by-step process to initiate an unplanned failover, review the documentation: Initiate a storage account failover - Azure Storage | Microsoft Learn

Feedback

If you have questions or feedback, reach out at storagefailover@service.microsoft.com.

Dremio Cloud on Microsoft Azure enables customers to drive value from their data more easily

Dremio Cloud on Microsoft Azure helps customers overcome the challenges of organically grown data lake and database landscapes. Particularly in hybrid environments, it allows organizations to shield the business from change while at the same time tightening security and easing application integration.
Copy Dataverse data from ADLS Gen2 to Azure SQL DB leveraging Azure Synapse Link

A new template has been added to the ADF and Azure Synapse Pipelines template gallery. This template allows you to copy data from an ADLS (Azure Data Lake Storage) Gen2 account to an Azure SQL Database.
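Once the template's linked services are configured, the resulting pipeline can be triggered and monitored programmatically. Below is a hedged sketch using the azure-mgmt-datafactory SDK; the factory and pipeline names are assumptions, not the template's actual values.

```python
# pip install azure-identity azure-mgmt-datafactory
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"    # placeholder values
RESOURCE_GROUP = "my-resource-group"
FACTORY_NAME = "my-data-factory"
PIPELINE_NAME = "CopyDataverseToSqlDb"   # hypothetical pipeline name

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a run of the pipeline created from the template.
run = client.pipelines.create_run(RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME)

# Poll the run until it reaches a terminal state.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
    if status.status not in ("Queued", "InProgress"):
        break
    time.sleep(15)

print("Pipeline run finished with status:", status.status)
```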
Unable to load large delta table in azure ml studio

I am writing to report an issue that I am currently experiencing while trying to read a delta table from Azure ML. I have already created data assets to register the delta table, which is located at an ADLS location. However, when attempting to load the data, I have noticed that for large data sizes it takes an exceedingly long time to load. I have confirmed that for small data sizes the data is returned within a few seconds, which leads me to believe that there may be an issue with the scalability of the data loading process. I would greatly appreciate it if you could investigate this issue and provide me with any recommendations or solutions to resolve it. I can provide additional details such as the size of the data, the steps I am taking to load the data, and any error messages if required.

I'm following this document: https://learn.microsoft.com/en-us/python/api/mltable/mltable.mltable?view=azure-ml-py#mltable-mltable-from-delta-lake

I am using this command to read the delta table via the data asset URI:

```python
from mltable import from_delta_lake

mltable_ts = from_delta_lake(
    delta_table_uri="<DATA ASSET URI>",  # URI of the registered data asset
    timestamp_as_of="2999-08-26T00:00:00Z",
    include_path_column=True,
)
```
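One diagnostic worth trying while this is investigated: mltable definitions are evaluated lazily, so most of the cost is paid at materialization. Trimming the table before converting it can help separate "the table is huge" from "the load path is slow". A minimal sketch, assuming the standard mltable transformations and hypothetical column names:

```python
from mltable import from_delta_lake

tbl = from_delta_lake(
    delta_table_uri="<DATA ASSET URI>",
    timestamp_as_of="2999-08-26T00:00:00Z",
    include_path_column=True,
)

# Keep only the needed columns and a small row sample before materializing;
# if this is still slow, the bottleneck is likely the load path, not data volume.
sample = tbl.keep_columns(["id", "value"]).take(1000)  # hypothetical columns
df = sample.to_pandas_dataframe()
print(df.shape)
```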