Beyond Basics: Practical scenarios with Azure Storage Actions
If you are new to Azure Storage Actions, check out our GA announcement blog for an introduction. This post is for cloud architects, data engineers, and IT admins who want to automate and optimize data governance at scale.

The Challenge: Modern Data Management at Scale

As organizations generate more data than ever, managing that data efficiently and securely is a growing challenge. Manual scripts, periodic audits, and ad-hoc cleanups can't keep up with the scale, complexity, and compliance demands of today's cloud workloads. Teams need automation that's reliable, scalable, and easy to maintain. Azure Storage Actions delivers on this need by enabling policy-driven automation for your storage accounts. With Storage Actions, you can:

- Automate compliance (e.g., legal holds, retention)
- Optimize storage costs (e.g., auto-tiering, expiry)
- Reduce operational overhead (no more custom cleanup scripts)
- Improve data discoverability (tagging, labeling)

Real-World Scenarios: Unlocking the Power of Storage Actions

Let's explore three practical scenarios where Storage Actions can transform your data management approach. For each, we'll look at the business problem, the traditional approach, and how Storage Actions makes it easier, including the exact conditions and operations that can be used.

Scenario 1: Content Lifecycle for Brand Teams

Business Problem: Brand and marketing teams manage large volumes of creative assets - videos, design files, campaign materials - that evolve through multiple stages and often carry licensing restrictions. These assets need to be retained, frozen, or archived based on their lifecycle and usage rights. Traditionally, teams rely on scripts or manual workflows to manage this, which can be error-prone, slow, and difficult to scale.

How Storage Actions Helps: Azure Storage Actions enables brand teams to automate content lifecycle management using blob metadata and/or index tags.
With a single task definition using an IF and ELSE structure, teams can apply different operations to blobs based on their stage, licensing status, and age, without writing or maintaining scripts.

Example in Practice: Let's say a brand team manages thousands of creative assets - videos, design files, campaign materials - each tagged with blob metadata that reflects its lifecycle stage and licensing status. For instance:

- Assets that are ready for public use are tagged with asset-stage = final
- Licensed or restricted-use content is tagged with usage-rights = restricted

Over time, these assets accumulate in your storage account, and you need a way to:

- Ensure that licensed content is protected from accidental deletion or modification
- Archive older final assets to reduce storage costs
- Apply these rules automatically, without relying on scripts or manual reviews

With Azure Storage Actions, the team can define a single task that evaluates each blob and applies the appropriate operation using a simple IF and ELSE structure:

IF:
- Metadata.Value["asset-stage"] equals "final"
- AND Metadata.Value["usage-rights"] equals "restricted"
- AND creationTime < 60d
THEN:
- SetBlobLegalHold: Locks the blob to prevent deletion or modification, ensuring compliance with licensing agreements.
- SetBlobTier to Archive: Moves the blob to the Archive tier, significantly reducing storage costs for older content that is rarely accessed.
ELSE:
- SetBlobTier to Cool: If the blob does not meet the above criteria - whether it's a draft, unlicensed, or recently created - it is moved to the Cool tier.

Once this Storage Action is created and assigned to a storage account, it is scheduled to run automatically every week. During each scheduled run, the task evaluates every blob in the target container or account. For each blob, it checks whether the asset is marked as final, tagged with usage-rights = restricted, and older than 60 days.
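As an illustration only - this is not the Storage Actions engine, which evaluates conditions serverlessly in Azure, and the function and field names here are hypothetical - the Scenario 1 IF/ELSE logic can be sketched in Python:

```python
from datetime import datetime, timedelta, timezone

def choose_operations(metadata: dict, creation_time: datetime) -> list:
    """Illustrative sketch of the Scenario 1 task logic (not an Azure API)."""
    age = datetime.now(timezone.utc) - creation_time
    if (metadata.get("asset-stage") == "final"
            and metadata.get("usage-rights") == "restricted"
            and age > timedelta(days=60)):
        # Lock the blob first, then archive it to cut storage costs.
        return ["SetBlobLegalHold", "SetBlobTier:Archive"]
    # Drafts, unlicensed assets, or recently created blobs go to the Cool tier.
    return ["SetBlobTier:Cool"]

old_final = datetime.now(timezone.utc) - timedelta(days=90)
print(choose_operations({"asset-stage": "final", "usage-rights": "restricted"}, old_final))
# -> ['SetBlobLegalHold', 'SetBlobTier:Archive']
print(choose_operations({"asset-stage": "draft"}, old_final))
# -> ['SetBlobTier:Cool']
```

In the real service, the same decision is expressed declaratively in the task's condition editor rather than in code.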
If all these conditions are met, the blob is locked with a legal hold to prevent accidental deletion and then archived to optimize storage costs. If the blob does not meet all of these criteria, it is moved to the Cool tier, ensuring it remains accessible but stored more economically. This weekly automation ensures that every asset is managed appropriately based on its metadata, without requiring manual intervention or custom scripts.

Scenario 2: Audit-Proof Model Training

Business Problem: In machine learning workflows, ensuring the integrity and reproducibility of training data is critical - especially when models influence regulated decisions in sectors like automotive, finance, healthcare, or legal compliance. Months or even years after a model is deployed, auditors or regulators may request proof that the training data used has not been altered since the model was built. Traditionally, teams try to preserve training datasets by duplicating them into backup storage, applying naming conventions, and manually restricting access. These methods are error-prone, hard to enforce at scale, and lack auditability.

How Storage Actions Helps: Storage Actions enables teams to automate the preservation of validated training datasets using blob tags and immutability policies. Once a dataset is marked as clean and ready for training, Storage Actions can automatically:

- Lock the dataset using a time-based immutability policy
- Apply a tag to indicate it is a snapshot version

This ensures that the dataset cannot be modified or deleted for the duration of the lock, and that it is easily discoverable for future audits.

Example in Practice: Let's say an ML data pipeline tags a dataset with stage = clean after it passes validation and is ready for training. Storage Actions detects this tag and springs into action. It enforces a one-year immutability policy, which means the dataset is locked and cannot be modified or deleted for the next 12 months.
It also applies a tag snapshot=true, making the dataset easy to locate and reference in future audits or investigations. The following conditions and operations define the task logic:

IF:
- Tags.Value[stage] equals 'clean'
THEN:
- SetBlobImmutabilityPolicy for 1 year: Adds a write once, read many (WORM) immutability policy to the blob to prevent deletion or modification, ensuring compliance.
- SetBlobTags with snapshot=true: Adds a blob index tag with the name "snapshot" and the value "true".

Whenever this task runs on its scheduled interval - such as daily or weekly - and detects that a blob has the tag stage = 'clean', it automatically initiates the configured operations. In this case, Storage Actions applies a SetBlobImmutabilityPolicy on the blob for one year and adds a snapshot=true tag for easy identification. This means that without any manual intervention:

- The blob is made immutable for 12 months, preventing any modifications or deletions during that period.
- A snapshot=true tag is applied, making it easy to locate and audit later.
- No scripts, manual tagging, or access restrictions are needed to enforce data integrity.

This ensures that validated training datasets are preserved in a tamper-proof state, satisfying audit and compliance requirements. It also reduces operational overhead by automating what would otherwise be a complex and error-prone manual process.

Scenario 3: Embedding Management in AI Workflows

Business Problem: Modern AI systems, especially those using Retrieval-Augmented Generation (RAG), rely heavily on vector embeddings to represent and retrieve relevant context from large document stores. These embeddings are often generated in real time, chunked into small files, and stored in vector databases or blob storage. As usage scales, these systems generate millions of small embedding files, many of which become obsolete quickly due to frequent updates, re-indexing, or model version changes.
This silent accumulation of stale embeddings leads to:

- Increased storage costs
- Slower retrieval performance
- Operational complexity in managing cleanup timing

Traditionally, teams write scripts to purge old embeddings based on timestamps, run scheduled jobs, and manually monitor usage. This approach is brittle and does not scale well.

How Storage Actions Helps: Storage Actions enables customers to automate the management of embeddings using blob tags and metadata. With blobs identified by tags and metadata such as embeddings=true and modelVersion=latest, customers can define conditions that automatically delete stale embeddings without writing custom scripts.

Example in Practice: In production RAG systems, embeddings are frequently regenerated to reflect updated content, new model versions, or refined chunking strategies. For example, a customer support chatbot may re-index its knowledge base daily to ensure responses are grounded in the latest documentation. To avoid bloating storage with outdated vector embeddings, Storage Actions can automate cleanup with task conditions and operations such as:

IF:
- Tags.Value[embeddings] equals 'true'
- AND NOT Tags.Value[version] equals 'latest'
- AND creation time < 12 days ago
THEN:
- DeleteBlob: Deletes all blobs that match the IF condition criteria.

Whenever this Storage Action runs on its scheduled interval - such as daily - it scans for blobs that have the tag embeddings = 'true', are not the latest version, and are more than 12 days old, and it automatically initiates the configured operation. In this case, Storage Actions performs a DeleteBlob operation on the blob. This means that without any manual intervention:

- Stale embeddings are deleted
- No scripts or scheduled jobs are needed to track them

This ensures that only the most recent model's embeddings are retained, keeping the vector store lean and performant.
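Purely as an illustration - again, not the Storage Actions engine or an Azure SDK call, and the blob inventory below is invented - the Scenario 3 condition can be sketched as a filter in Python:

```python
from datetime import datetime, timedelta, timezone

def is_stale_embedding(tags: dict, creation_time: datetime) -> bool:
    """Illustrative check mirroring the Scenario 3 IF condition (not an Azure API)."""
    age = datetime.now(timezone.utc) - creation_time
    return (tags.get("embeddings") == "true"
            and tags.get("version") != "latest"
            and age > timedelta(days=12))

now = datetime.now(timezone.utc)
blobs = [
    {"name": "chunk-001", "tags": {"embeddings": "true", "version": "v1"},     "created": now - timedelta(days=30)},
    {"name": "chunk-002", "tags": {"embeddings": "true", "version": "latest"}, "created": now - timedelta(days=30)},
    {"name": "doc.pdf",   "tags": {},                                          "created": now - timedelta(days=400)},
]
# Only old, non-latest embedding blobs are selected; other blobs are untouched.
to_delete = [b["name"] for b in blobs if is_stale_embedding(b["tags"], b["created"])]
print(to_delete)  # -> ['chunk-001']
```

The service applies this kind of predicate to every blob in scope on each scheduled run, so no cleanup job of your own is needed.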
It also reduces storage costs by eliminating obsolete data and helps maintain retrieval accuracy by ensuring outdated embeddings do not interfere with current queries.

Applying Storage Actions to Storage Accounts

To apply any of these scenarios, customers create an assignment during storage task resource creation. In the assignment creation flow, they select the appropriate role and configure filters and trigger details. For example, a compliance cleanup scenario might run across the entire storage account with a recurring schedule every seven days to remove non-compliant blobs. A cost optimization scenario could target a specific container using a blob prefix and run as a one-time task to archive older blobs. A bulk tag update scenario would typically apply to all blobs without filtering and use a recurring schedule to keep tags consistent. After setting start and end dates, specifying the export container, and enabling the task, clicking Add queues the action to run on the account.

Learn More

If you are interested in exploring Storage Actions further, there are several resources to help you get started and deepen your understanding:

- Getting started documentation: https://learn.microsoft.com/en-us/azure/storage-actions/storage-tasks/storage-task-quickstart-portal
- Create a Storage Action from the Azure portal: https://portal.azure.com/#create/Microsoft.StorageTask
- Azure Storage Actions pricing: https://azure.microsoft.com/en-us/pricing/details/storage-actions/#pricing
- Azure blog with the GA announcement: https://azure.microsoft.com/en-us/blog/unlock-seamless-data-management-with-azure-storage-actions-now-generally-available/
- Azure Skilling video with a walkthrough of Storage Actions: https://www.youtube.com/watch?v=CNdMFhdiNo8

Have questions, feedback, or a scenario to share? Drop a comment below or reach out to us at storageactions@microsoft.com.
We would love to hear how you are using Storage Actions and what scenarios you would like to see next!

Introducing Cross Resource Metrics and Alerts Support for Azure Storage
Aggregate and analyze storage metrics from multiple storage accounts in a single chart.

We're thrilled to announce a highly requested feature: Cross Resource Metrics and Alerts support for Azure Storage! With this new capability, you can now monitor and visualize metrics across multiple storage accounts in a single chart and configure alerts across multiple accounts, within the same subscription and region. This makes managing large fleets of storage accounts significantly easier and more powerful.

What's New

- Cross Resource Metrics Support: Visualize aggregated metric data across multiple storage accounts, or break down metrics by individual resources in a sorted, ordered way.
- Cross Resource Alerting Support: Create a single alert rule that monitors a metric across many storage accounts and triggers an action when thresholds are crossed on any resource.
- Full Metric Namespace Support: Works across the Blob, File, Table, and Queue metric namespaces. All existing storage metrics are supported for cross resource visualization and alerting.

Why This Matters

- Centralized Monitoring for Large Environments: Manage and monitor dozens (or hundreds) of storage accounts at once with a unified view.
- Fleet-wide Alerting: Set up a single alert that covers your whole storage fleet, ensuring you are quickly notified if any account experiences performance degradation or other issues.
- Operational Efficiency: Helps operations teams scale monitoring efforts without needing to configure and manage separate dashboards and alerts for each account individually.

How To Get Started

Step 1: Create a Cross Resource Metrics Chart

- Go to Azure Monitor -> Metrics.
- Scope Selection: Under Select a scope, select the same Metric Namespace (blob/file/queue/table) for multiple storage accounts from the same subscription and region. Click Apply. In the example below, two storage accounts have been selected for metrics in the blob metric namespace.
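Conceptually, the aggregated view and the per-resource breakdown differ as sketched below. This is a plain-Python illustration of the idea with made-up account names and numbers, not an Azure Monitor API:

```python
# Per-minute transaction counts for three hypothetical storage accounts.
series = {
    "storageacct1": [120, 140, 130],
    "storageacct2": [80, 95, 90],
    "storageacct3": [200, 210, 190],
}

# Aggregated view: one line summing all accounts per time bucket.
aggregated = [sum(vals) for vals in zip(*series.values())]
print(aggregated)  # -> [400, 445, 410]

# "Split by ResourceId" view: one line per account, ordered by total (descending).
split_by_resource = sorted(series.items(), key=lambda kv: sum(kv[1]), reverse=True)
print([name for name, _ in split_by_resource])
# -> ['storageacct3', 'storageacct1', 'storageacct2']
```

In the portal, toggling the Split by clause on ResourceId switches between these two presentations of the same underlying data.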
- Configure Metric Chart: Select a metric (e.g., Blob Capacity, Transactions, Ingress). Aggregation: By default, a Split by clause on ResourceId is applied to view individual account breakdowns. Alternatively, view aggregated data across all selected accounts by removing the Split by clause.

Example

As another example, let's monitor total transactions across storage accounts on the Hot tier to view an aggregate or per-account breakdown in a single graph.

- From the same view, select the Transactions metric instead.
- Select 5 storage accounts by using the Add Filter clause and filtering by the ResourceId property.
- Add another filter and select a specific tier, say Hot. This will show aggregated transactions per minute on data in the Hot tier across all selected storage accounts.
- Select Apply Splitting and select ResourceId to view an ordered breakdown of transactions per minute for all the storage accounts in scope. In this specific example, only 4 storage accounts are shown, since 1 storage account is excluded by the Tier filter.

Step 2: Set Up Cross Resource Alert Rules

- Click New alert rule from the chart view shown above to create an alert that spans the 5 storage accounts and fires when any account breaches a certain transactions limit over a 5-minute period.
- Configure the required Threshold, Unit, and Value is fields. This defines when the alert should fire (e.g., Transactions > 5000).
- Under the Split by dimensions section, ensure that the Microsoft.ResourceId dimension is not included.
- Under Actions, attach an Action Group (Email, Webhook, Logic App, etc.).
- Review and create.

Final Thoughts

Cross Resource Metrics and Alerts for Azure Storage makes monitoring and management at scale much more intuitive and efficient. Whether you're overseeing 5 storage accounts or 500, you can now visualize performance and respond to issues faster than ever. And you can do it for metrics across multiple storage services, including blobs, queues, files, and tables!
We can't wait to hear how you use this feature! Let us know your feedback by commenting below or visiting Azure Feedback.

TLS 1.0 and 1.1 support will be removed for new and existing Azure storage accounts starting February 2026
This post was edited in September 2025 to reflect the retirement date change from November 1, 2025 to February 3, 2026.

To meet evolving technology and regulatory needs and align with security best practices, we are removing support for Transport Layer Security (TLS) 1.0 and 1.1 for both existing and new storage accounts in all clouds. TLS 1.2 will be the minimum supported TLS version for Azure Storage starting February 3, 2026.

Azure Storage currently supports TLS 1.0 and 1.1 (for backward compatibility) and TLS 1.2 on public HTTPS endpoints. TLS 1.2 is more secure and faster than older TLS versions, which do not support modern cryptographic algorithms and cipher suites. Many Azure Storage customers are already using TLS 1.2, and we are sharing this guidance to expedite the transition for customers currently on TLS 1.0 and 1.1. Customers must secure their infrastructure by using TLS 1.2+ with Azure Storage by February 2, 2026. The older TLS versions (1.0 and 1.1) are being deprecated and removed to meet evolving standards (FedRAMP, NIST) and to provide improved security for our customers.

This change will impact both existing and new storage accounts using TLS 1.0 and 1.1. To avoid disruptions to your applications connecting to Azure Storage, you must migrate to TLS 1.2 and remove dependencies on TLS versions 1.0 and 1.1 by February 2, 2026. Learn more about how to migrate to TLS 1.2. As a best practice, we also recommend using Azure Policy to enforce a minimum TLS version. Learn more here about how to enforce a minimum TLS version for all incoming requests. If you already use Azure Policy to enforce the TLS version, the minimum supported version after this change rolls out will be TLS 1.2.

Help and Support

If you have questions, get answers from community experts in Microsoft Q&A. If you have a support plan and you need technical help, create a support request:

- For Issue type, select Technical.
- For Subscription, select your subscription.
- For Service, select My services.
- For Service type, select Blob Storage.
- For Resource, select the Azure resource you are creating a support request for.
- For Summary, type a description of your issue.
- For Problem type, select Connectivity.
- For Problem subtype, select Issues using TLS.

Protect your Storage accounts using network security perimeter - now generally available
We are excited to announce the general availability of network security perimeter support for Azure Storage accounts. A network security perimeter allows organizations to define a logical network isolation boundary for Platform-as-a-Service (PaaS) resources like Azure Storage accounts that are deployed outside your organization's virtual networks. It restricts public network access to PaaS resources by default and provides secure communication between resources within the perimeter, while explicit inbound and outbound rules allow access to authorized resources.

Securing data within storage accounts requires a multi-layered approach, encompassing network access controls, authentication, and authorization mechanisms. Network access controls for storage accounts fall into two broad categories: access from PaaS resources and access from all other resources. For access from PaaS resources, organizations can leverage either broad controls through Azure "trusted services" or granular access using resource instance rules. For other resources, access control may involve IP-based firewall rules, allowing virtual network access, or enabling private endpoints. However, the complexity of managing all of these can pose significant challenges when scaled across large enterprises. Misconfigured firewalls, public network exposure on storage accounts, or excessively permissive policies heighten the risk of data exfiltration. It is often challenging to audit these risks at the application or storage account level, making it difficult to identify open exfiltration paths across all PaaS resources in an environment.

Network security perimeters offer an effective solution to these concerns. First, by grouping assets such as Azure Key Vault and Azure Monitor in the same perimeter as your storage accounts, communications between these resources are secured while public access is disabled by default, thereby preventing data exfiltration to unauthorized destinations.
Second, they centralize the management of network access controls across numerous PaaS resources at scale by providing a single pane of glass. This approach promotes consistency in settings and reduces administrative overhead, thereby minimizing the potential for configuration errors. Additionally, they provide comprehensive control over both inbound and outbound access to authorized resources across all the associated PaaS resources.

How do network security perimeters protect Azure Storage accounts?

Network security perimeters support granular resource access using profiles. All inbound and outbound rules are defined on a profile, and the profile can be applied to one or more resources within the perimeter. Network security perimeters provide two primary operating modes: "Transition" mode (formerly referred to as "Learning" mode) and "Enforced" mode. "Transition" mode acts as an initial phase when onboarding a PaaS resource into any network security perimeter. When combined with logging, this mode enables you to analyze current access patterns without disrupting existing connectivity. In "Enforced" mode, all defined perimeter rules replace all resource-specific rules except private endpoints. After analyzing logs in "Transition" mode, you can tweak your perimeter rules as necessary and then switch to "Enforced" mode.

Benefits of network security perimeters

Secure resource-to-resource communication: Resources in the same perimeter communicate securely, keeping data internal and blocking unauthorized transfers. For example, an application's storage account and its associated database, when part of the same perimeter, can communicate securely. However, all communications to the storage account from a database outside of the perimeter will be blocked.
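To build intuition for how perimeter membership and inbound rules interact, here is a deliberately simplified model with invented resource names. The real Azure evaluation also involves profiles, operating modes, and private endpoints, so treat this only as a mental sketch:

```python
# Simplified model of perimeter access evaluation (illustrative only).
perimeter = {"storage-account-1", "key-vault-1", "log-analytics-1"}
inbound_allow = {"app-backend-1"}  # explicit inbound rule on the governing profile

def access_allowed(source: str, target: str) -> bool:
    """Is network access from source to target permitted under this toy model?"""
    if target not in perimeter:
        return True  # target not governed by this perimeter
    # Inside the perimeter: peers communicate freely; everyone else
    # needs an explicit inbound rule, public access being off by default.
    return source in perimeter or source in inbound_allow

print(access_allowed("key-vault-1", "storage-account-1"))    # True: same perimeter
print(access_allowed("app-backend-1", "storage-account-1"))  # True: explicit inbound rule
print(access_allowed("external-db", "storage-account-1"))    # False: blocked by default
```

The key property the sketch captures is that access is denied by default and granted only by perimeter membership or an explicit rule.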
Centralized network isolation: Administrators can manage firewall and resource access policies centrally across all their PaaS resources in a single pane of glass, streamlining operations and minimizing errors.

Prevent data exfiltration: Centralized access control and logging of inbound and outbound network access attempts across all resources within a perimeter enables comprehensive visibility for compliance and auditing purposes and helps address data exfiltration.

Seamless integration with existing Azure features: Network security perimeter works in conjunction with private endpoints by allowing private endpoint traffic to storage accounts within a perimeter.

There is no additional cost to using network security perimeter.

Real-world customer scenarios

Let us explore how network security perimeters strengthen the security and management of Azure Storage accounts through common applications.

Create a Secure Boundary for Storage Accounts

A leading financial organization sought to enhance the protection of sensitive client data stored in Azure Storage accounts. The company used Azure Monitor with a Log Analytics workspace to collect and centralize logs from all storage accounts, enabling constant monitoring and alerts for suspicious activity. This supported compliance and rapid incident response. They also used Azure Key Vault to access customer-managed encryption keys. They configured network access controls on each communication path from these resources to the storage account: they disabled public network access and employed a combination of virtual network (VNet) rules, firewall rules, private endpoints, and service endpoints. However, this created significant overhead that had to be continuously managed whenever additional resources required access to the storage account. To address this, the company implemented network security perimeters and blocked public and untrusted access to their storage account by default.
By placing the specific Azure Key Vault and Log Analytics workspace within the same network security perimeter as the storage account, the organization achieved a secure boundary around their data in an efficient manner. Additionally, to let an authorized application access this data, they defined an inbound access rule in the profile governing their storage account, thereby restricting the application's access to only the required PaaS resources.

Prevent Data Exfiltration from Storage Accounts

One of the most dangerous data exfiltration attacks occurs when an attacker obtains the credentials of a user account with access to an Azure Storage account, perhaps through phishing or credential stuffing. In a traditional setup, this attacker could potentially connect from anywhere on the internet and initiate large-scale data exfiltration to external servers, putting sensitive business or customer information at risk. With a network security perimeter in place, however, only resources within the perimeter or authorized external resources can access the storage account, drastically limiting the attacker's options. Even with valid credentials, network security perimeter rules block the attacker's attempts to connect from an unapproved network or from unapproved machines within a compromised network. Furthermore, the perimeter enforces strict outbound traffic controls: storage accounts inside the perimeter cannot send data to any external endpoint unless a specific outbound rule permits it. Restricting inbound access and tightly controlling outbound data flows enhances the security of sensitive data in Azure Storage accounts. The presence of robust network access control on top of storage account credentials creates multiple hurdles for attackers to overcome and significantly reduces the risk of both unauthorized access and data exfiltration.

Unified Access Management across the entire Storage estate

A large retailer found it difficult to manage multiple Azure Storage accounts.
Typically, updating firewall rules or access permissions involved making repeated changes for each account or using complex scripts to automate the process. This approach not only increased the workload but also raised the risk of inconsistent settings or misconfigurations, which could potentially expose data. With network security perimeters, the retailer grouped storage accounts under a perimeter, sometimes placing subsets of accounts under different perimeters. For accounts requiring special permissions within a single perimeter, the organization created separate profiles to customize inbound and outbound rules specific to them. Administrators could now define and update access policies at the profile level, with rules immediately enforced across every storage account and other resources associated with the profile. The updates applied consistently to all resources, both for blocking public internet access and for allowing specific internal subscriptions, thus reducing gaps and simplifying operations. The network security perimeter also provided a centralized log of all network access attempts on storage accounts, eliminating the need for security teams to pull logs separately from each account. It showed which calls accessed accounts, when, and from where, starting immediately after enabling logs in "Transition" mode and continuing into "Enforced" mode. This streamlined approach enhanced the organization's compliance reporting, accelerated incident response, and improved understanding of information flow across the cloud storage environment.

Getting started

Explore this Quickstart guide to implement a network security perimeter and configure the right profiles for your storage accounts. For guidance on usage and limitations related to storage accounts, refer to the documentation. There is no additional cost for using network security perimeter.
As you begin, consider which storage accounts to group under a perimeter, and how to segment profiles for special access needs within the perimeter.

Protect Azure Data Lake Storage with Vaulted Backups
We are thrilled to announce a limited public preview of vaulted backups for Azure Data Lake Storage. This is available now for test workloads, and we'd like to get your feedback. Vaults are secure, encrypted copies of your data, enabling restoration to an alternate location in cases of accidental or malicious deletion. Vaulted backups are fully isolated from the source data, ensuring continuity for your business operations even in scenarios where the source data is compromised. This fully managed solution leverages the Azure Backup service to manage backups with automated retention and scheduling. By creating a backup policy, you can define a backup schedule and retention period. Based on this policy, the Azure Backup service generates recovery points and manages the lifecycle of backups seamlessly.

Ways vaulted backups protect your data:

- Isolation from production data: Vaulted backups are stored in a separate, Microsoft-managed tenant, preventing attackers from accessing both primary and backup data.
- Strict access controls: Backup management requires distinct permissions, ensuring segregation of duties and reducing insider threats.
- Advanced security features: With features like soft delete, immutability, and encryption, vaulted backups safeguard data against unauthorized modifications and premature deletions. Even if attackers compromise the primary storage account, backups remain secure within the vault, preserving data integrity and ensuring compliance.
- Alternate location recovery: Vaulted backups provide a reliable recovery solution by enabling restoration to an alternate storage account, ensuring business continuity even when the original account is inaccessible. Additionally, this capability allows organizations to create separate data copies for purposes such as testing, development, or analytics, without disrupting production environments.
- Granular recovery: With vaulted backups, you can restore the entire storage account or specific containers based on your needs. You can also use prefix matching to recover select blobs.

With the growing frequency and sophistication of cyberattacks, protecting your data against loss or corruption is more critical than ever. Consider the following example use cases where vaulted backups can save the day.

Enhanced Protection Against Ransomware Attacks

Ransomware attacks can encrypt critical data, complicating recovery unless a ransom is paid. Vaulted backups offer an independent and secure recovery solution, allowing you to restore data without succumbing to attackers' demands.

Accidental or Malicious Storage Account Deletion

Human errors, insider threats, or compromised credentials can result in the deletion of entire storage accounts. Vaulted backups provide a crucial layer of protection by storing backups in Microsoft-managed storage, independent of your primary storage account. This ensures that an additional copy of your data remains intact, even if the original storage account is accidentally or maliciously deleted.

Compliance Regulations

Certain industries mandate offsite backups and long-term data retention to meet regulatory standards. Vaulted backups enable organizations to comply by offering offsite backup storage within the same Azure region as the primary storage account. With vaulted backups, data can be retained for up to 10 years.

Getting started

To enroll in the preview, fill out this form. For more details, refer to this article. Vaulted backups can be configured for block blobs within HNS-enabled, standard general-purpose v2 ADLS storage accounts in the regions specified here. Support for additional regions will be added incrementally. Currently, this preview is recommended exclusively for testing purposes. The Azure Backup protected instance fee and the vault backup storage fees are not currently charged.
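As a rough illustration of the prefix-matching idea behind granular recovery - plain Python with invented blob names, not the Azure Backup restore API:

```python
def select_blobs_for_restore(blob_names, prefixes):
    """Pick blobs whose names start with any of the given prefixes (illustrative)."""
    return [name for name in blob_names
            if any(name.startswith(p) for p in prefixes)]

# Hypothetical backup inventory for a storage account.
backup_inventory = [
    "raw/2024/01/events.json",
    "raw/2024/02/events.json",
    "curated/reports/q1.parquet",
]
restored = select_blobs_for_restore(backup_inventory, ["raw/2024/"])
print(restored)  # -> ['raw/2024/01/events.json', 'raw/2024/02/events.json']
```

Scoping a restore by prefix like this is what lets you recover just one folder hierarchy without restoring the whole account.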
Now is a great time to give vaulted backups a try!

Contact us
If you have questions or feedback, please reach out to us at AskAzureBackupTeam@microsoft.com.

Microsoft Purview Protection Policies for Azure Data Lake & Blob Storage Available in All Regions
Organizations today face a critical challenge: ensuring consistent and automated data governance across rapidly expanding data estates. Driven by the growth of AI and the increasing reliance on vast data volumes for model training, Chief Data Officers (CDOs) and Chief Information Security Officers (CISOs) must prevent unintentional exposure of sensitive data (PII, credit card information) while adhering to data and legal regulations.

Many organizations rely on Azure Blob Storage and ADLS for storing vast amounts of data, offering scalable, secure, and highly available cloud storage solutions. While solutions like RBAC (role-based access control), ABAC (attribute-based access control), and ACLs (access control lists) offer secure ways to manage data access, they operate on metadata such as file paths, tags, or container names. These mechanisms are effective for implementing restrictive data governance by controlling who can access specific files or containers. However, there are scenarios where implementing automatic access controls based on the sensitivity of the content itself is necessary. For example, identifying and protecting sensitive information like credit card numbers within a blob requires more granular control. Ensuring that sensitive content is restricted to specific roles and applications across the organization is crucial, especially as enterprises focus on building new applications and infusing AI into current solutions.

This is where integrated solutions like Microsoft Information Protection (MIP) come into play. MIP protection policies enable organizations to scan and label data based on the content stored in the blob, allowing access controls to be applied directly to data asset content across storage accounts.
By eliminating the need for in-house scanning and labeling, MIP streamlines compliance and helps apply consistent data governance through a centralized solution.

The Solution: Microsoft Purview Information Protection (MIP) Protection Policies for Governance & Compliance
Microsoft Purview Information Protection (MIP) provides an efficient and centralized approach to data protection by automatically restricting access to storage data assets based on sensitivity labels discovered through automated scanning, leveraging protection policies (learn more). This feature builds upon Microsoft Purview's existing capability (learn more) to scan and label sensitive data assets, ensuring robust data protection. This not only enhances data governance but also ensures that data is managed in a way that protects sensitive information, reducing the risk of unauthorized access and maintaining the security and trust of customers.

Enhancing Data Governance with MIP Protection Policies
Contoso, a multinational corporation, handles large volumes of data stored in Azure Storage (Blob/ADLS). Different users, such as financial auditors, legal advisors, compliance officers, and data analysts, need access to different blobs in the storage account. These blobs are updated daily with new content, and there can be sensitive data across these blobs.

Given the diverse nature of the stored data, Contoso needed an access control method that could restrict access based on data asset sensitivity. For instance, data analysts access the blob named "logs" where log files are uploaded. If these files contain PII or financial data that should only be accessed by financial officers, the access permissions need to be dynamically updated based on the changing sensitivity of the stored data. MIP protection policies address this challenge efficiently by automatically limiting access to data based on sensitivity labels found through automated scanning.
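To make the scan-then-label flow concrete, here is a deliberately simplified sketch. The regex patterns and label names are illustrative stand-ins for Purview's built-in sensitive info types and sensitivity labels, not the real classifiers:

```python
import re

# Hypothetical, simplified stand-ins for Purview's built-in sensitive info types.
PATTERNS = {
    "Credit Card Number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "Email Address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    """Assign a sensitivity label based on which info types are detected."""
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
    if "Credit Card Number" in hits:
        return "Highly Confidential"
    if hits:
        return "Confidential"
    return "General"
```

In the Contoso scenario, a freshly uploaded log file containing a card number would be relabeled "Highly Confidential" on the next scan, and the protection policy would then restrict who can read it.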
Key Benefits:
- Auto-labelling: Automatically apply sensitivity labels to Azure Storage based on detection of sensitive information types.
- Automated protection: Automatically restrict access to data with specific sensitivity labels, ensuring consistent data protection. Storage data owners can selectively enable specific storage accounts for policy enforcement, providing flexibility and control. For example, a protection policy restricted access to data labeled as "Highly Confidential" to only specific groups or users, while blobs labeled "logs" were accessible only to data analysts. With MIP, labels are updated as content changes, and the protection policy can deny access if any "Highly Confidential" data is identified.
- Enterprise-level control: Information Protection policies are applied to blobs and resource sets, ensuring that only authorized Microsoft Entra ID users or M365 user groups can access sensitive data. Unauthorized users are prevented from reading the blob or resource set.
- Centralized policy management: Create, manage, and enforce protection policies across Azure Storage from a single, unified interface in Microsoft Purview. Enterprise admins have granular control over which storage accounts enforce protection coverage based on the account's sensitivity label.

By using Microsoft Purview Information Protection (MIP) protection policies, Contoso was able to achieve secure, consistent data governance and centralized policy management, effectively addressing its data security challenges.

Prerequisites
Microsoft 365 E5 licenses and setup of the pay-as-you-go billing model. To understand pay-as-you-go billing by assets protected, see the pay-as-you-go billing model. For information about the specific licenses required, see this information on sensitivity labels. Microsoft 365 E5 trial licenses can be obtained for your tenant by navigating here from your environment.
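The label-based access restriction described above boils down to a simple decision rule. The mapping and helper below are a hypothetical toy (real policies reference Microsoft Entra ID users or M365 groups, and enforcement happens in the storage service, not in client code):

```python
# Hypothetical mapping from sensitivity label to the groups allowed to read it.
ALLOWED_GROUPS = {
    "Highly Confidential": {"financial-officers"},
    "Confidential": {"financial-officers", "data-analysts"},
}

def can_read(blob_label, user_groups):
    """Toy access decision: unlabeled data is unrestricted; labeled data
    requires membership in at least one group allowed for that label."""
    allowed = ALLOWED_GROUPS.get(blob_label)
    if allowed is None:
        return True
    return bool(allowed & user_groups)
```

Under this rule, a data analyst keeps access to "Confidential" blobs but loses access the moment a scan relabels a blob "Highly Confidential".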
Getting Started
The public preview of protection policies supports the following Azure Storage services:
- Azure Blob Storage
- Azure Data Lake Storage

To enable protection policies for your Azure Storage accounts:
1. Navigate to the Microsoft Purview portal > Information Protection card > Policies.
2. Configure or use an existing sensitivity label in Microsoft Purview Information Protection that's scoped to "Files & other data assets".
3. Create an auto-labelling policy to apply a specific sensitivity label to scoped assets in Azure Storage based on the Microsoft out-of-the-box sensitive info types detected.
4. Run scans on assets for auto-labelling to apply.
5. Create a protection policy and associate it with your desired sensitivity labels.
6. Apply the policy to your Azure Blob Storage or ADLS Gen2 accounts.

Limitations
During the public preview, please note the following limitations:
- Currently a maximum of 10 storage accounts are supported in one protection policy, and they must be selected under Edit for them to be enabled.
- Changing pattern rules will re-apply labels on all storage accounts.
- There might be delays in label synchronization, which could prevent MIP policies from functioning effectively.
- If a storage account has customer-managed keys (CMK) enabled, the MIP policy will not work for that account.

Next Steps
With the public preview, MIP protection policies are now available in all regions, and any storage account registered in the Microsoft Purview Data Map can create and apply protection policies to implement consistent data governance strategies across data in Azure Storage. We encourage you to try out this feature and provide feedback. Your input is crucial in shaping this feature as we work towards general availability.

Control geo failover for ADLS and SFTP with unplanned failover
We are excited to announce the general availability of customer-managed unplanned failover for Azure Data Lake Storage and for storage accounts with SSH File Transfer Protocol (SFTP) enabled.

What is Unplanned Failover?
With customer-managed unplanned failover, you are in control of initiating your failover. Unplanned failover allows you to switch your storage endpoints from the primary region to the secondary region. During an unplanned failover, write requests are redirected to the secondary region, which then becomes the new primary region.

Because an unplanned failover is designed for scenarios where the primary region is experiencing an availability issue, it happens without the primary region fully completing replication to the secondary region. As a result, there is a possibility of data loss, depending on the amount of data that has yet to be replicated from the primary region to the secondary region. Each storage account has a 'last sync time' property, which indicates the last time a full synchronization between the primary and the secondary region was completed. Any data written between the last sync time and the current time may only be partially replicated to the secondary region, which is why unplanned failover may incur data loss.

Unplanned failover is intended to be utilized during a true disaster where the primary region is unavailable. Once completed, the data in the original primary region is erased, the account is changed to locally redundant storage (LRS), and your applications can resume writing data to the storage account. If the previous primary region becomes available again, you can convert your account back to geo-redundant storage (GRS). Migrating your account from LRS to GRS initiates a full data replication from the new primary region to the secondary, which incurs geo-bandwidth costs.
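The role of the 'last sync time' property can be made concrete with a small helper. This is a hypothetical illustration, not part of the Azure SDK; in practice you would read the account's last sync time from its geo-replication statistics before deciding to fail over:

```python
from datetime import datetime, timedelta

def potential_data_loss_window(last_sync_time, failover_time):
    """Writes made after `last_sync_time` may be only partially replicated,
    so they are at risk if an unplanned failover happens at `failover_time`."""
    if failover_time < last_sync_time:
        raise ValueError("failover_time must not precede last_sync_time")
    return failover_time - last_sync_time
```

For example, if the last full sync completed 45 minutes before the failover, up to 45 minutes of writes could be lost.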
If your scenario involves failing over while the primary region is still available, consider planned failover. Planned failover can be utilized in scenarios including planned disaster recovery testing or recovering from non-storage-related outages. Unlike unplanned failover, the storage service endpoints must be available in both the primary and secondary regions before a planned failover can be initiated. This is because planned failover is a three-step process: (1) make the current primary read-only, (2) sync all the data to the secondary (ensuring no data loss), and (3) swap the primary and secondary regions so that writes go to the new primary. In contrast with unplanned failover, planned failover maintains the geo-redundancy of the account, so planned failback does not require a full data copy.

To learn more about planned failover and how it works, view Public Preview: Customer Managed Planned Failover for Azure Storage | Microsoft Community Hub. To learn more about each failover option and the primary use case for each, view Azure storage disaster recovery planning and failover - Azure Storage | Microsoft Learn.

How to get started?
Getting started is simple. To learn more about the step-by-step process to initiate an unplanned failover, review the documentation: Initiate a storage account failover - Azure Storage | Microsoft Learn.

Feedback
If you have questions or feedback, reach out at storagefailover@service.microsoft.com.

Dremio Cloud on Microsoft Azure enables customers to drive value from their data more easily
Dremio Cloud on Microsoft Azure enables customers to drive value from their data more easily. It helps overcome the challenges of organically grown data lake and database landscapes. Particularly in hybrid environments, it allows organizations to shield the business from change while also tightening security and easing application integration.