TLS 1.0 and 1.1 support will be removed for new & existing Azure storage accounts starting Feb 2026
To meet evolving technology and regulatory needs and align with security best practices, we are removing support for Transport Layer Security (TLS) 1.0 and 1.1 for both existing and new storage accounts in all clouds. TLS 1.2 will be the minimum supported TLS version for Azure Storage starting February 2026.

Azure Storage currently supports TLS 1.0 and 1.1 (for backward compatibility) as well as TLS 1.2 on public HTTPS endpoints. TLS 1.2 is more secure and faster than the older versions, which do not support modern cryptographic algorithms and cipher suites. Many Azure Storage customers are already using TLS 1.2, and we are sharing this guidance to expedite the transition for customers currently on TLS 1.0 and 1.1.

Customers must secure their infrastructure by using TLS 1.2+ with Azure Storage by January 31, 2026. The older TLS versions (1.0 and 1.1) are being deprecated and removed to meet evolving standards (FedRAMP, NIST) and to provide improved security for our customers. This change will impact both existing and new storage accounts using TLS 1.0 and 1.1. To avoid disruptions to your applications connecting to Azure Storage, you must migrate to TLS 1.2 and remove dependencies on TLS versions 1.0 and 1.1 by January 31, 2026. Learn more about how to migrate to TLS 1.2.

As a best practice, we also recommend using Azure Policy to enforce a minimum TLS version. Learn more here about how to enforce a minimum TLS version for all incoming requests. If you already use Azure Policy to enforce a TLS version, the minimum supported version after this change rolls out will be TLS 1.2.

Help and Support

If you have questions, get answers from community experts in Microsoft Q&A. If you have a support plan and you need technical help, create a support request:

- For Issue type, select Technical.
- For Subscription, select your subscription.
- For Service, select My services.
- For Service type, select Blob Storage.
- For Resource, select the Azure resource you are creating a support request for.
- For Summary, type a description of your issue.
- For Problem type, select Connectivity.
- For Problem subtype, select Issues using TLS.

How Microsoft Azure and Qumulo Deliver a Truly Cloud-Native File System for the Enterprise
Disclaimer: The following is a post authored by our partner Qumulo. Qumulo has been a valued partner in the Azure Storage ecosystem for many years and we are happy to share details on their unique approach to solving challenges of scalable filesystems!

Whether you’re training massive AI models, running HPC simulations in life sciences, or managing unstructured media archives at scale, performance is everything. Qumulo and Microsoft Azure deliver the cloud-native file system built to handle the most data-intensive workloads, with the speed, scalability, and simplicity today's innovators demand. But supporting modern workloads at scale is only part of the equation. Qumulo and Microsoft have resolved one of the most entrenched and difficult challenges in modernizing the enterprise data estate: empowering file data with high performance across a global workforce without impacting the economics of unstructured data storage.

According to Gartner, global end-user spending on public cloud services is set to surpass $1 trillion by 2027. That staggering figure reflects more than just a shift in IT budgets—it signals a high-stakes race for relevance. CIOs, CTOs, and other tech-savvy execs are under relentless pressure to deliver the capabilities that keep businesses profitable and competitive. Whether they’re ready or not, the mandate is clear: modernize fast enough to keep up with disruptors, many of whom are using AI and lean teams to move at lightning speed. To put it simply, grow margins without getting outpaced by a two-person startup using AI in a garage. That’s the challenge leaders face every day. Established enterprises must contend with the duality of maintaining successful existing operations and the potential disruption to those operations by a more agile business model that offers insight into the next wave of customer desires and needs.
Nevertheless, established enterprises have a winning move: unleash the latent productivity increases and decision-making power hidden within years, if not decades, worth of data. Thoughtful CIOs, CTOs, and CXOs have elected to move slowly in these areas due to the tyranny of quarterly results and the risk of short-term costs reflecting poorly on the present at the expense of the future. In this sense, adopting innovative technologies forced organizations to choose between self-disruption with long-term benefits or non-disruptive technologies with long-term disruption risk. When it comes to network-attached storage, CXOs were forced to accept non-disruptive technologies because the risk was too high.

This trade-off is no longer required. Microsoft and Qumulo have addressed this challenge in the realm of unstructured file data technologies by delivering a cloud-native architecture that combines proven Azure primitives with Qumulo’s suite of file storage solutions. Now, those patient CXOs, waiting to adopt hardened technologies, can shift their file data paradigm into Azure while improving business value, data portability, and reducing the financial burden on their business units.

Today, organizations that range from 50,000+ employees with global offices to organizations with a few dozen employees with unstructured data-centric operations have discovered the incredible performance increases, data availability, accessibility, and economic savings realized when file data moves into Azure using one of two Qumulo solutions:

- Option 1 — Azure Native Qumulo (ANQ) is a fully managed file service that delivers truly elastic capacity, throughput, and IOPS, along with all the enterprise features of your on-premises NAS and a TCO to match.
- Option 2 — Cloud Native Qumulo (CNQ) on Microsoft Azure is a self-hosted file data service that offers the performance and scale your most demanding workloads require, at a comparable total cost of ownership to on-premises storage.
Both CNQ on Microsoft Azure and ANQ offer the flexibility and capacity of object storage while remaining fully compatible with file-based workflows. As data platforms purpose-built for the cloud, CNQ and ANQ provide three key characteristics:

- Elasticity — Performance and capacity can scale independently, both up and down, dynamically.
- Boundless Scale — Virtually no limitations on file system size or file count, with full multi-protocol support.
- Utility-Based Pricing — Like Microsoft Azure, Qumulo operates on a pay-as-you-go model, charging only for resources used without requiring pre-provisioned capacity or performance.

The collaboration between Qumulo’s cloud-native file solutions and the Microsoft Azure ecosystem enables seamless migration of a wide range of workflows, from large-scale archives to high-performance computing (HPC) applications, from on-premises environments to the cloud. For example, a healthcare organization running a fully cloud-hosted Picture Archiving and Communication System (PACS) alongside a Vendor Neutral Archive (VNA) can leverage Cloud Native Qumulo (CNQ) to manage medical imaging data in Azure. CNQ offers a HIPAA-compliant, highly durable, and cost-efficient platform for storing both active and infrequently accessed diagnostic images, enabling secure access while optimizing storage costs.

With Azure’s robust cloud infrastructure, organizations can design a cloud file solution that scales to meet virtually any size or performance requirement, while unlocking new possibilities in cloud-based AI and HPC workloads. Further, using the Qumulo Cloud Data Fabric, the enterprise is able to connect geographically separated data sources within one unified, strictly consistent (POSIX-compliant), secure, and high-performance file system.
As organizational needs evolve — whether new workloads are added or existing workloads expand — Cloud Native Qumulo or Azure Native Qumulo can easily scale to meet performance demands while maintaining the predictable economics that meet existing or shrinking budgets.

About Azure Native Qumulo and Cloud Native Qumulo on Azure

Azure Native Qumulo (ANQ) and Cloud Native Qumulo (CNQ) enable organizations to leverage a fully customizable, multi-protocol solution that dynamically scales to meet workload performance requirements. Engineered specifically for the cloud, ANQ is designed for simplicity of operation and automatic scalability as a fully managed service. CNQ offers the same technology, directly leveraging cloud-native resources like Azure Virtual Machines (VMs), Azure Networking, and Azure Blob Storage to provide a scalable platform that adapts to the evolving needs of today’s workloads – but it deploys entirely in the enterprise tenant, allows direct control over the underlying infrastructure, and requires a somewhat higher level of internal expertise to operate.

Azure Native Qumulo and Cloud Native Qumulo on Azure also deliver a fully dynamic file storage platform that is natively integrated with the Microsoft Azure backend. Here’s what sets ANQ and CNQ apart:

- Elastic Scalability — Each ANQ and CNQ instance on Azure Blob Storage can automatically scale to exabyte-level storage within a single namespace by simply adding data. On Microsoft Azure, performance adjustments are straightforward: just add or remove compute instances to instantly boost throughput or IOPS, all without disruption and within minutes. Plus, you pay only for the capacity and compute resources you use.
- Deployed in Minutes — ANQ deploys from the Azure Portal, CLI, or PowerShell, just like a native service. CNQ runs in your own Azure virtual network and can be deployed via Terraform.
You can select the compute type that best matches your workload’s performance requirements and build a complete file data platform on Azure in under six minutes for a three-node cluster.

- Automatic TCO Management — Cost management can be facilitated through services like Komprise Intelligent Tiering for Azure and Azure Blob Storage access tiers, which optimize storage costs and manage the data lifecycle. By analyzing data access patterns, these systems move files or objects to appropriate tiers, reducing costs for infrequently accessed data. Additionally, all data written to CNQ is compressed to ensure maximum cost efficiency.

ANQ automatically adapts to your workload requirements, and CNQ’s fully customizable architecture can be configured to meet the specific throughput and IOPS requirements of virtually any file or object-based workload. You can purchase either ANQ or CNQ through a pay-as-you-go model, eliminating the need to pre-provision cloud file services. Simply pay for what you use. ANQ and CNQ deliver comparable performance and services to on-premises file storage at a similar TCO.

Qumulo’s cloud-native architecture redefines cloud storage by decoupling capacity from performance, allowing both to be adjusted independently and on demand. This provides the flexibility to modify components such as compute instance type, compute instance count, and cache disk capacity — enabling rapid, non-disruptive performance adjustments. This architecture, which includes the innovative Predictive Cache, delivers exceptional elasticity and virtually unlimited capacity. It ensures that businesses can efficiently manage and scale their data storage as their needs evolve, without compromising performance or reliability. ANQ and CNQ retain all the core Qumulo functionalities — including real-time analytics, robust data protection, security, and global collaboration.
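Tier transitions like the ones described under Automatic TCO Management are commonly expressed as Azure Blob Storage lifecycle management rules on the storage account. As an illustrative sketch (the rule name, prefix, and day thresholds below are hypothetical, not values recommended by Qumulo or Komprise), a policy that cools and then archives infrequently modified block blobs might look like:

```json
{
  "rules": [
    {
      "name": "tier-cold-data",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "archive-data/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 180 }
          }
        }
      }
    }
  ]
}
```

In practice the prefix filter would target the dataset being tiered; tiering driven by Komprise is configured within Komprise itself rather than through such a policy.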
Example architecture

In the example architecture, we see a solution that uses Komprise to migrate file data from third-party NAS systems to ANQ. Komprise provides platform-agnostic file migration services at massive scale in heterogeneous NAS environments. This solution facilitates the seamless migration of file data between mixed storage platforms, providing high-performance data movement, ensuring data integrity, and empowering you to successfully complete data migration projects from your legacy NAS to an ANQ instance.

Figure: Azure Native Qumulo’s exabyte-scale file data platform and Komprise

Beyond inherent scalability and dynamic elasticity, ANQ and CNQ support enterprise-class data management features such as snapshots, replication, and quotas. ANQ and CNQ also offer multi-protocol support — NFS, SMB, FTP, and FTP-S — for file sharing and storage access. Additionally, Azure supports a wide range of protocols for various services: for authentication and authorization, it commonly uses OAuth 2.0, OpenID Connect, and SAML; for IoT, MQTT, AMQP, and HTTPS are supported for device communication. By enabling shared access to the same data via all protocols, ANQ and CNQ support collaborative and mixed-use workloads, eliminating the need to import file data into object storage. Qumulo consistently delivers low time-to-first-byte latencies of 1–2 ms, offering a combined file and object platform for even the most performance-intensive AI and HPC workloads.

ANQ and CNQ can run in all Azure regions (although ANQ operates best in regions with three availability zones), allowing your on-premises data centers to take advantage of Azure’s scalability, reliability, and durability. ANQ and CNQ can also be dynamically reconfigured without taking services offline, so you can adjust performance — temporarily or permanently — as workloads change.
An ANQ or CNQ instance deployed initially as a disaster recovery or archive target can be converted into a high-performance data platform in seconds, without redeploying the service or migrating hosted data. If you already use Qumulo storage on-premises or in other cloud platforms, Qumulo’s Cloud Data Fabric enables seamless data movement between on-premises, edge, and Azure-based deployments. Connect portals between locations to build a Global Namespace and instantly extend your on-premises data to Azure’s portfolio of cloud-native applications, such as Microsoft Copilot, AI Studio, Microsoft Fabric, and high-performance compute and GPU services for burst rendering or various HPC engines. Cloud Data Fabric moves files through a large-scale data pipeline instantly and seamlessly.

Use Qumulo’s continuous replication engine to enable disaster recovery scenarios, or combine replication with Qumulo’s cryptographically locked snapshot feature to protect older versions of critical data from loss or ransomware. ANQ and CNQ leverage Azure Blob Storage's 11-nines durability to achieve a highly available file system and utilize multiple availability zones for even greater availability — without the added costs typically associated with replication in other file systems.

Conclusion

The future of enterprise storage isn’t just in the cloud — it’s in smart, cloud-native infrastructure that scales with your business, not against it. Azure Native Qumulo (ANQ) and Cloud Native Qumulo (CNQ) on Microsoft Azure aren’t just upgrades to legacy storage — they’re a reimagining of what file systems can do in a cloud-first world. Whether you're running AI workloads, scaling HPC environments, or simply looking to escape the limitations of aging on-prem NAS, ANQ and CNQ give you the power to do it without compromise. With elastic performance, utility-based pricing, and native integration with Azure services, Qumulo doesn’t just support modernization — it accelerates it.
To help you unlock these benefits, the Qumulo team is offering a free architectural assessment tailored to your environment and workloads. If you’re ready to lead, not lag, and want to explore how ANQ and CNQ can transform your enterprise storage, reach out today by emailing Azure@qumulo.com. Let’s build the future of your data infrastructure together.

📢 [Public Preview] Accelerating BlobNFS throughput & scale with FUSE for superior performance
Azure Blob Storage can be mounted and accessed like a local file system using BlobFuse, a FUSE-based driver for the Blob REST API. Customers choose BlobFuse for AI/ML, HPC, analytics, and backup workloads. It provides exceptionally high throughput along with benefits like local caching and security integration via Microsoft Entra ID.

For customers requiring NFS 3.0 protocol support or POSIX compliance, Azure Blob Storage also natively supports NFSv3 (aka BlobNFS). It enables Azure Blob Storage access for customers’ legacy applications without requiring changes. BlobNFS is accessed via the Linux NFS client combined with our AZNFS mount helper package, which streamlines mounting and reliably connecting to Blob Storage’s NFS endpoints. Please note that BlobNFS only supports access over a virtual network, since Microsoft Entra ID based auth isn’t yet available on NFS 3.0.

Today, we’re excited to announce an update to AZNFS (3.0) for BlobNFS, which now uses the same libfuse3 library that powers BlobFuse, bringing significant improvements in performance and scale. The updated AZNFS for BlobNFS delivers significantly higher throughput, larger file support, better metadata performance, and removes user group limits, enhancing performance for demanding workloads.

- Maximize virtual machine throughput: AZNFS now supports up to 256 TCP connections (up from 16 in the native NFS client), allowing throughput to reach VM NIC bandwidth (the maximum data transfer rate of the virtual machine’s network interface card) or storage account limits. This benefits HPC workloads by ensuring high throughput for large dataset operations. Additionally, a small number (4 or fewer) of parallel file reads/writes can now fully saturate the VM NIC bandwidth even for larger VM sizes.
- Enhanced read/write speed: The updated AZNFS client outperforms the native NFS client for read and write scenarios.
For example, single-file read/write performance is improved by a factor of 5x and 3x respectively, which can be useful for large database backup tasks requiring high single-file throughput for writing and reading backup files. Refer to the link for a detailed performance comparison.

- Removal of the user's group limit: Linux NFS clients with a local identity server can pass access permissions for up to 16 groups of a user, restricting resource access for users belonging to more than 16 groups. This update allows FUSE to handle permission checks, removing the 16-group limitation.
- Improved metadata query performance: READDIR can query more directory entries in one call. The Linux client has a limit of 1 MB, whereas the updated AZNFS can now reach up to 3 MB. Customers with numerous files will experience quicker listing and metadata operations with reduced latency. This will be beneficial for workloads like EDA (Electronic Design Automation) and HPC (High Performance Computing), which often involve reading metadata for a considerable number of files before selecting a subset for processing.
- Support for large file sizes (up to 5 TB): The new release supports larger file sizes for sequential write patterns. Due to the larger block sizes possible with AZNFS, users can create files up to the 5 TB limit. With Linux clients, under best conditions, the maximum file sizes were limited to ~3 TB. CAD tools producing simulation and checkpoint data files over 3 TB will benefit from this improvement.

The following charts compare performance between the updated AZNFS and the native Linux client. Please refer to the detailed benchmarks for more details. [Test parameters - VM: Standard D96ds v5, file size: 100 GB, Linux NFS is with nconnect=16, Linux kernel 5.x.x; test used: dd]

Note: The VM supports higher read throughput than write throughput. For the updated AZNFS, throughput starting from 4 parallel file read/write operations is constrained by VM NIC bandwidth, or it can scale higher.
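For orientation, mounting a BlobNFS container with the AZNFS helper and running a dd-style sequential test follows roughly the pattern below. This is a sketch, not the authoritative syntax: the account, container, and mount-point names are placeholders, and the installation instructions linked below should be followed first.

```
# Mount a BlobNFS (NFSv3) container through the AZNFS mount helper
# (assumes the aznfs package is already installed and the storage
# account is reachable over the virtual network).
sudo mkdir -p /mnt/blobnfs
sudo mount -t aznfs <account>.blob.core.windows.net:/<account>/<container> /mnt/blobnfs

# Sequential single-file write and read-back, dd-style (100 GB in 1 MiB blocks),
# similar in spirit to the test parameters quoted above.
dd if=/dev/zero of=/mnt/blobnfs/testfile bs=1M count=102400
dd if=/mnt/blobnfs/testfile of=/dev/null bs=1M
```

Throughput reported by dd on such a run is what the charts above compare between the updated AZNFS and the native Linux NFS client.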
Getting Started

Please register for the preview using this form. Please refer to the link for instructions on how to install and use the latest version of AZNFS. For any queries or feedback, please contact us at aznfs@microsoft.com.

References:

- What is BlobFuse? - BlobFuse2 - Azure Storage | Microsoft Learn
- Network File System (NFS) 3.0 protocol support for Azure Blob Storage
- Mount Blob Storage by using the Network File System (NFS) 3.0 protocol on Linux
- Instructions to install and use latest version of AZNFS · Azure/AZNFS-mount Wiki

Building a Scalable Web Crawling and Indexing Pipeline with Azure Storage and AI Search
In the ever-evolving world of data management, keeping search indexes up to date with dynamic data can be challenging. Traditional approaches, such as manual or scheduled indexing, are resource-intensive, delay-prone, and difficult to scale. Azure Blob Trigger combined with an AI Search indexer offers a cutting-edge solution to overcome these challenges, enabling real-time, scalable, and enriched data indexing.

This blog explores how Blob Trigger, integrated with Azure Cognitive Search, transforms the indexing process by automating workflows and enriching data with AI capabilities. It highlights the step-by-step process of configuring Blob Storage, creating Azure Functions for triggers, and seamlessly connecting with an AI-powered search index. The approach leverages Azure's event-driven architecture, ensuring efficient and cost-effective data management.

Automating Data Management: Azure Storage Actions Overview
Azure Storage data management solutions/services:

1. Movement: An end-to-end experience to discover, plan, and move data into Azure in a performant, cost-effective, secure, and reliable way
2. Insights: Store and manage metrics and metadata, enabling deep insights into the data estate
3. Actions: A flexible, scalable, serverless platform to effortlessly process data for data management, protection, security, and governance

Big data challenges: Organizations must manage ever-increasing data volumes across:

- Data management
- Data movement
- Tagging & classification
- Security and access control
- Data protection
- Orchestration

Customer challenges:

- Customers have increasingly large volumes of data in hundreds of storage accounts with billions of objects
- It is challenging to process millions of objects for bulk operations
- Lifecycle management, data protection, object tagging, and security operations require increasing complexity
- Out-of-box policies in storage can be constricted, and extensibility is limited

Introducing Azure Storage Actions

A fully managed platform that helps you automate data management tasks. Process billions of blobs in your storage account effortlessly. Supports Azure Blob Storage and Azure Data Lake Storage.

How Storage Actions works
- Event-Condition-Action framework
- Schedule-based and on-demand execution
- Conditional processing of blobs based on blob properties
- Use native blob operations as actions on the blob

Serverless:

- Fully managed infrastructure
- Deploy in minutes – eliminates the need for any complex software or infrastructure
- Auto-scales with your storage

No-code composition & simplified management:

- Use clicks to compose tasks
- Easily apply tasks to multiple storage accounts
- Monitor task execution across your storage estate with aggregate metrics and drilldowns

Storage Actions overview:

- Data protection – blob immutability, legal holds, and blob expiry
- Cost optimization – tiering or deleting blobs
- Managing blob tags
- Undelete blobs
- Copy blobs, folder operations

How to start with Storage Actions:

Log in to the portal and search for Azure Storage Actions. Create a task → define conditions:

[[and(endsWith(Name, 'pdf'), equals(BlobType, 'BlockBlob'))]]

This query is a logical expression used to filter and retrieve specific items from a dataset. Here's a breakdown of its components:

- endsWith(Name, 'pdf'): Checks whether the Name attribute of an item ends with the string 'pdf'. Essentially, it filters items whose names end with .pdf, indicating that they are PDF files.
- equals(BlobType, 'BlockBlob'): Checks whether the BlobType attribute of an item equals 'BlockBlob'. This filters items of the type BlockBlob, which is a type of storage blob in cloud storage systems.
- and(...): The and operator combines the two conditions above, ensuring that only items meeting both criteria are retrieved. In other words, the query will return items that are PDF files (Name ends with .pdf) and are of the type BlockBlob.

In summary, this query is used to find items that are PDF files stored as BlockBlob in a dataset.
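The same condition can be expressed in ordinary code. Here is a small Python sketch of the filter logic (purely illustrative, not the Storage Actions engine; the dict keys simply mirror the Name and BlobType properties used in the query above):

```python
def matches(blob: dict) -> bool:
    """Equivalent of and(endsWith(Name, 'pdf'), equals(BlobType, 'BlockBlob'))."""
    return blob["Name"].endswith("pdf") and blob["BlobType"] == "BlockBlob"

# Sample blob listing: only the BlockBlob whose name ends in .pdf matches
# both conditions, so it is the only item selected for the action.
blobs = [
    {"Name": "case-101/scan.pdf", "BlobType": "BlockBlob"},
    {"Name": "case-101/scan.pdf", "BlobType": "AppendBlob"},
    {"Name": "case-101/notes.txt", "BlobType": "BlockBlob"},
]
selected = [b for b in blobs if matches(b)]
```

Every blob failing either condition is skipped, which is exactly how the task narrows billions of objects down to the ones its actions should touch.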
If the query matches, the task sets the tag process = true and sets the blob immutability policy to locked.

Create an assignment: once the task runs, it creates a report that is added to a container in the storage account.

Example use cases:

1. Retention management: Automatically manage the retention and expiry durations of audio files using a combination of index tags and creation times
2. Version history management: Manage the retention and lifecycle of datasets using metadata and tags for optimal protection and cost
3. One-off processing: Define tasks to rehydrate large datasets from the archive tier, reset tags on part of a dataset, or clean up redundant and outdated datasets

Holding forensic evidence: The role of hybrid cloud in successful preservation and compliance
Disclaimer: The following is a post authored by our partner Tiger Technology. Tiger Technology has been a valued partner in the Azure Storage ecosystem for many years and we are happy to have them share details on their innovative solution!

Police departments worldwide are grappling with a digital explosion. From body camera footage to social media captures, the volume and variety of evidence have surged, creating a storage and management challenge like never before. Imagine a single police department needing to store 2–5 petabytes of data—and keep some of it for 100 years. How can they preserve the integrity of this data, make it cost-effective, and ensure compliance with legal requirements?

The answer lies in hybrid cloud solutions, specifically Microsoft Azure Blob Storage paired with Tiger Bridge. These solutions are empowering law enforcement to manage and store evidence at scale, without disrupting workflows. But what exactly is hybrid cloud, and why is it a game-changer for digital evidence management?

What is a hybrid cloud?

A hybrid cloud combines public or private cloud services with on-premises infrastructure. It gives organizations the flexibility to mix and match environments, allowing them to choose the best fit for specific applications and data. This flexibility is especially valuable in highly regulated industries like law enforcement, where strict data privacy and compliance rules govern how evidence is stored, processed, and accessed. Hybrid cloud also facilitates a smoother transition to public cloud solutions. For instance, when a data center reaches capacity, hybrid setups allow agencies to scale dynamically while maintaining control over their most sensitive data. It’s not just about storage—it's about creating a robust, compliant infrastructure for managing enormous volumes of evidence.

What makes digital evidence so complex?

Digital evidence encompasses any information stored or transmitted in binary form that can be used in court.
It includes computer hard drives, phone records, social media posts, surveillance footage, etc. The challenge isn’t just collecting this data—it’s preserving its integrity. Forensic investigators must adhere to strict chain-of-custody protocols to prove in court that the evidence:

- Is authentic and unaltered,
- Has been securely stored with limited access,
- Is readily available when needed.

With the surge in data volumes and complexity, traditional storage systems often fall short. That’s where hybrid cloud solutions shine, offering scalable, secure, and cost-effective options that keep digital evidence admissible while meeting compliance standards.

The challenges police departments face

Digital evidence is invaluable, but storing and managing it is a challenging task that requires dealing with several aspects:

Short-term storage problems

The sheer scale of data can overwhelm local systems. Evidence must first be duplicated using forensic imaging to protect the original file. But housing these duplicates, especially with limited budgets, strains existing resources.

Long-term retention demands

In some jurisdictions, evidence must be retained for decades—sometimes up to a century. Physical storage media, like hard drives or SSDs, degrade over time and are expensive to maintain. Transitioning this data to cloud cold storage offers a more durable and cost-effective solution.

Data integrity and legal admissibility

Even the slightest suspicion of tampering can render evidence inadmissible. Courts require robust proof of authenticity and integrity, including cryptographic hashes and digital timestamps. Failing to maintain a clear chain of custody could jeopardize critical cases.

Solving the storage puzzle with hybrid cloud

For law enforcement agencies, managing sensitive evidence isn't just about storage—it's about creating a system that safeguards data integrity, ensures compliance, and keeps costs under control.
Traditional methods fall short in meeting these demands as the volume of digital evidence continues to grow. This is where hybrid cloud technology stands out, offering a powerful combination of on-premises infrastructure and cloud capabilities. Microsoft Azure, a leader in cloud solutions, brings critical features to the table, ensuring evidence remains secure, accessible, and compliant with strict legal standards. But storage alone isn't enough. Efficient file management is equally crucial for managing vast datasets while maintaining workflow efficiency. Tools like Tiger Bridge complement Microsoft Azure by bridging the gap between local and cloud storage, adding intelligence and flexibility to how evidence is preserved and accessed.

Microsoft Azure Blob Storage

Azure Blob Storage is massively scalable and secure object storage. For the purposes of law enforcement, among other features, it offers:

- Automatic tiering: Automatically moves data between hot and cold tiers, optimizing costs,
- Durability: Up to sixteen 9s (99.99999999999999%) of durability ensures data integrity for decades,
- Metadata management: Add custom tags or blob indexes, such as police case classifications, to automate retention reviews.

Microsoft Azure ensures evidence is secure, accessible, and compliant with legal standards.

Tiger Bridge: Smart File Management

Tiger Bridge enhances Microsoft Azure’s capabilities by seamlessly integrating local and cloud storage with powerful features tailored for forensic evidence management. Tiger Bridge is a software-only solution that integrates seamlessly with Windows servers. It handles file replication, space reclaiming, and archiving—all while preserving existing workflows and ensuring data integrity and disaster recovery. With Tiger Bridge, police departments can transition to hybrid cloud storage without adding hardware or altering processes.
Data replication

Tiger Bridge replicates files from on-premises storage to cloud storage, ensuring a secure backup. Replication policies run transparently in the background, allowing investigators to work uninterrupted. Files are duplicated based on user-defined criteria, such as priority cases or evidence retention timelines.

Space reclamation

Once files are replicated to the cloud, Tiger Bridge replaces local copies with “nearline” stubs. These stubs look like the original files but take up virtually no space. When a file is needed, it’s automatically retrieved from the cloud, reducing storage strain on local servers.

Data archiving

For long-term storage, Tiger Bridge moves files from hot cloud tiers to cold and/or archive storage. Files in the archive tier are replaced with "offline" stubs. These files are not immediately accessible but can be manually retrieved and rehydrated when necessary. This capability allows law enforcement agencies to save on costs while still preserving access to critical evidence.

Checksum for data integrity

On top of the strong data integrity and data protection features already built into the Azure Blob Storage service, Tiger Bridge goes a step further in ensuring data integrity by generating checksums for newly replicated files. These cryptographic signatures allow agencies to verify that files in the cloud are identical to the originals stored on premises. This feature is essential for forensic applications, where the authenticity of evidence must withstand courtroom scrutiny. Data integrity verification is done during uploads and retrievals, ensuring that files remain unaltered while stored in the cloud. For law enforcement, checksum validation provides peace of mind, ensuring that evidence remains admissible in court and meets strict regulatory requirements.

Disaster recovery

In the event of a local system failure, Tiger Bridge allows for immediate recovery.
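The checksum verification described above can be sketched in a few lines of Python (illustrative only; Tiger Bridge's actual checksum algorithm and storage format are not specified here). The file is hashed in fixed-size chunks so that multi-gigabyte evidence files never need to fit in memory:

```python
import hashlib
import os
import tempfile

def file_checksum(path: str, algo: str = "sha256", chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks and return the hex digest."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: the checksum computed before upload must match the one recomputed
# on the retrieved copy for the evidence to be considered unaltered.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"body-cam footage, case 2024-117")
    path = f.name
before_upload = file_checksum(path)
after_retrieval = file_checksum(path)  # in practice: computed on the cloud copy
os.remove(path)
```

Any single-bit change to the file changes the digest, which is what lets an agency demonstrate in court that the cloud copy is byte-identical to the original.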
All data remains accessible in the cloud, and reinstalling Tiger Bridge on a new server re-establishes access without needing to re-download files.

A real-life scenario

Imagine a police department dealing with petabytes of video evidence from body cameras, surveillance footage, and digital device extractions. A simple yet effective real-life scenario follows a typical pattern:

- Investigators collect and image evidence files.
- Tiger Bridge replicates this data to Azure Blob Storage, following predefined rules.
- Active cases remain in Azure’s hot tier, while archival data moves to cost-effective cold storage.
- Metadata tags in Azure help automate case retention reviews, flagging files eligible for deletion.

This approach ensures evidence is accessible when needed, secure from tampering, and affordable to store long-term. The results speak for themselves. Adopting a hybrid cloud strategy delivers tangible benefits:

- Operational efficiency: evidence is readily accessible without extensive hardware investments and maintenance.
- Cost savings: automating data tiering reduces storage costs while maintaining accessibility.
- Workflow continuity: investigators can maintain existing processes with minimal disruption.
- Enhanced compliance: robust security measures and chain-of-custody tracking ensure legal standards are met.

A future-proof solution for digital forensics

As digital evidence grows in both volume and importance, police organizations must evolve their storage strategies. Hybrid cloud solutions like Azure Blob Storage and Tiger Bridge offer a path forward: scalable, secure, and cost-effective evidence management designed for the demands of modern law enforcement. The choice is clear: preserve the integrity of justice by adopting tools built for the future.

About Tiger Technology

Tiger Technology helps organizations with mission-critical deployments optimize their on-premises storage and enhance their workflows through cloud services.
The company is a validated ISV partner for Microsoft in three out of five Azure Storage categories: Primary and Secondary Storage; Archive, Backup and BCDR; and Data Governance, Management, and Migration. The Tiger Bridge SaaS offering on Azure Marketplace is Azure benefit-eligible data management software enabling a seamless hybrid cloud infrastructure. Installed in the customer’s on-premises or cloud environment, Tiger Bridge intelligently connects file data across file and object storage anywhere, supporting data lifecycle management, global file access, disaster recovery, data migration, and access to insights. Tiger Bridge supports all Azure Blob Storage tiers, including the cold and archive tiers for long-term data archival.

Read more by Tiger Technology on the Tech Community Blog:

- Modernization through Tiger Bridge Hybrid Cloud Data Services
- On-premises-first hybrid workflows in healthcare. Why start with digital pathology?

Announcing General Availability of Next generation Azure Data Box Devices
Today, we’re excited to announce the General Availability of Azure Data Box 120 and Azure Data Box 525, our next-generation compact, NVMe-based Data Box devices. These devices are currently available for customers to order in the US, US Gov, Canada, EU and UK Azure regions, with broader availability coming soon.

Since the preview announcement at Ignite '24, we have successfully ingested petabytes of data, encompassing multiple orders serving customers across various industry verticals. Customers have expressed delight over the reliability and efficiency of the new devices, with up to 10x improvement in data transfer rates, highlighting them as a valuable and essential asset for large-scale data migration projects. These new device offerings reflect insights gained from working with our customers over the years and understanding their evolving data transfer needs. They incorporate several improvements to accelerate offline data transfers to Azure, including:

- Fast copy: built with NVMe drives for high-speed transfers, improved reliability, and support for faster network connections.
- Ease of use: a larger capacity offering (525 TB) in a compact form factor for easy handling.
- Resilient: ruggedized devices built to withstand rough conditions during transport.
- Secure: enhanced physical, hardware, and software security features.
- Broader availability: presence planned in more Azure regions, meeting local compliance standards and regulations.

What’s new?

Improved Speed & Efficiency

- NVMe-based devices offer faster data transfer rates, providing a 10x improvement in data transfer speeds to the device compared to previous-generation devices. With a dataset comprised mostly of large (TB-sized) files, on average half a petabyte can be copied to the device in under two days.
- High-speed transfers to Azure, with data upload up to 5x faster for medium to large files, reduce the lead time for your data to become accessible in the Azure cloud.
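As a quick sanity check of the figure above, copying half a petabyte in under two days implies a sustained rate of roughly 2.9 GB/s, well within a 100 GbE link. A back-of-the-envelope calculation (decimal units; real-world rates vary with file mix):

```python
# Required sustained throughput for ~0.5 PB copied in two days.
bytes_to_copy = 0.5e15          # 0.5 PB, decimal units
seconds = 2 * 24 * 3600         # two days

required_bytes_per_sec = bytes_to_copy / seconds
required_gbps = required_bytes_per_sec * 8 / 1e9  # convert to Gbit/s

print(f"{required_bytes_per_sec / 1e9:.2f} GB/s ≈ {required_gbps:.1f} Gbit/s")
```

At roughly 23 Gbit/s sustained, the quoted two-day copy leaves substantial headroom on the 100 GbE interfaces these devices support.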
- Improved networking with support for up to 100 GbE connections, compared to 10 GbE on the older generation of devices.
- Two options, with usable capacities of 120 TB and 525 TB, in a compact form factor meeting OSHA requirements.
- Devices ship next-day air in most regions.

Learn more about the performance improvements on Data Box 120 and Data Box 525.

Enhanced Security

The new devices come with several new physical, hardware, and software security enhancements. This is in addition to the built-in Azure security baseline for Data Box and the Data Box service security measures currently supported by the service.

- Secure boot functionality with hardware root of trust and Trusted Platform Module (TPM) 2.0.
- Custom tamper-proof screws and a built-in intrusion detection system to detect unauthorized device access.
- AES 256-bit BitLocker software encryption for data at rest is currently available. Hardware encryption via the RAID controller, which will be enabled by default on these devices, is coming soon. Once available, customers can enable double encryption through both software and hardware encryption to meet their sensitive data transfer requirements.

These ISTA 6A-compliant devices are built to withstand rough conditions during shipment while keeping both the device and your data safe and intact. Learn more about the enhanced security features on Data Box 120 and Data Box 525.

Broader Azure region coverage

A recurring request from our customers has been wider regional availability of higher-capacity devices to accelerate large migrations. We’re happy to share that Azure Data Box 525 will be available across the US, US Gov, EU, UK, and Canada, with broader presence in EMEA and APAC regions coming soon. This marks a significant improvement in the availability of a large-capacity device compared to the current Data Box Heavy, which is available only in the US and Europe.
What our customers have to say

For the last several months, we’ve been working directly with our customers of all industries and sizes to leverage the next-generation devices for their data migration needs. Customers love the larger capacity with form-factor familiarity, seamless setup, and faster copy.

“We utilized Azure Data Box for a bulk migration of Unix archive data. The data, originating from IBM Spectrum Protect, underwent pre-processing before being transferred to Azure blobs via the NFS v4 protocol. This offline migration solution enabled us to efficiently manage our large-scale data transfer needs, ensuring a seamless transition to the Azure cloud. Azure Data Box proved to be an indispensable tool in handling our specialized migration scenario, offering a reliable and efficient method for data transfer.” – ST Microelectronics Backup & Storage team

“This new offering brings significant advantages, particularly by simplifying our internal processes. With deployments ranging from hundreds of terabytes to even petabytes, we previously relied on multiple regular Data Box devices—or occasionally Data Box Heavy devices—which required extensive operational effort. The new solution offers sizes better aligned with our needs, allowing us to achieve optimal results with fewer logistical steps. Additionally, the latest generation is faster and provides more connectivity options at data centre premises, enhancing both efficiency and flexibility for large-scale data transfers.” – Lukasz Konarzewski, Senior Data Architect, Commvault

“We have had a positive experience overall with the new Data Box devices to move our data to Azure Blob storage. The devices offer easy plug-and-play installation, detailed documentation especially for the security features, and good data copy performance.
We would definitely consider using it again for future large data migration projects.” – Bas Boeijink, Cloud Engineer, Eurofiber Cloud Infra

Upcoming changes to older SKU availability

Note that in regions where the next-gen devices are available, new orders for Data Box 80 TB and Data Box Heavy devices cannot be placed after May 31, 2025. We will, however, continue to process and support all existing orders.

Order your device today!

The devices are currently available for customers to order in the US, Canada, EU, UK, and US Gov Azure regions. We will continue to expand to more regions in the upcoming months. Azure Data Box provides customers with one of the most cost-effective solutions for data migration, offering competitive pricing with the lowest cost per TB among offline data transfer solutions. You can learn more about pricing across various regions by visiting our pricing page. You can use the Azure portal to select the SKU suitable for your migration needs and place the order. Learn more about the all-new Data Box devices here.

We are committed to continuing to deliver innovative solutions to lower the barrier for bringing data to Azure. Your feedback is important to us. Tell us what you think about the new Azure Data Box devices by writing to us at DataBoxPM@microsoft.com – we can’t wait to hear from you.

Azure Blob Storage SFTP: General Availability of ACLs (Access Control Lists) of local users
We are excited to announce the general availability of ACLs (Access Control Lists) for Azure Blob Storage SFTP local users. ACLs make it simple and intuitive for administrators to manage fine-grained access to blobs and directories for Azure Blob Storage SFTP local users.

Azure Blob Storage SFTP

Azure Blob Storage supports the SSH File Transfer Protocol (SFTP) natively. SFTP on Azure Blob Storage lets you securely connect to and interact with the contents of your storage account by using an SFTP client, allowing you to use SFTP for file access, file transfer, and file management. Learn more here. Azure Blob Storage SFTP is used by a significant number of our customers, who have shared overwhelmingly positive feedback. It eliminates the need for third-party or custom SFTP solutions involving cumbersome maintenance steps such as VM orchestration.

Local users

Azure Blob Storage SFTP utilizes a form of identity management called local users. Local users must use either a password or a Secure Shell (SSH) private key credential for authentication. You can have a maximum of 25,000 local users per storage account. Learn more about local users here.

Access Control for local users

There are two ways to achieve access control for local users.

1. Container permissions

With container permissions, you choose which containers you want to grant access to and what level of access to provide (Read, Write, List, Delete, Create, Modify Ownership, and Modify Permissions). Those permissions apply to all directories and subdirectories in the container. Learn more here.

2. ACLs for local users

What are ACLs? ACLs (Access Control Lists) let you grant "fine-grained" access, such as write access to a specific directory or file, which isn’t possible with container permissions. More fine-grained access control has been a popular ask among our customers, and we are very excited to make this possible now with ACLs.
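The owning-user/owning-group/other model these ACLs follow can be illustrated with a small permission check. This is a simplified sketch of the general POSIX-style evaluation order, not Azure's implementation; the user names, group names, and permission strings are made up for the example.

```python
# Simplified POSIX-style ACL evaluation: pick the most specific tier
# (owner, then group, then other) and check the requested permission bit.
def allowed(acl, user, groups, action):
    letter = {"read": "r", "write": "w", "execute": "x"}[action]
    if user == acl["owner"]:
        tier = "owner"
    elif acl["group"] in groups:
        tier = "group"
    else:
        tier = "other"
    return letter in acl["perms"][tier]

acl = {"owner": "alice", "group": "forensics",
       "perms": {"owner": "rwx", "group": "r-x", "other": "---"}}

print(allowed(acl, "alice", [], "write"))           # owner may write
print(allowed(acl, "bob", ["forensics"], "write"))  # group members may not
print(allowed(acl, "carol", [], "read"))            # everyone else is denied
```

Note the evaluation short-circuits at the first matching tier: a group member gets the group bits even if the "other" bits happen to be more permissive.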
A common ACL use case is to restrict a user's access to a specific directory without letting that user access other directories within the same container. This can be repeated for multiple users so that each has granular access to their own directory. Without ACLs, this would require a container per local user. ACLs also make it easier for administrators to manage access for multiple local users with the help of groups. Learn more about ACLs for local users here.

How to set and modify the ACL of a file or a directory

You can set and modify the permission level of the owning user, owning group, and all other users of an ACL by using an SFTP client. You can also change the owning user or owning group of a blob or directory. These operations require the 'Modify Permissions' and 'Modify Ownership' container permissions, respectively.

Note: Owning users can now also modify the owning group and permissions of a blob or directory without container permissions. This is a new feature enhancement added during the General Availability phase of ACLs for local users. For any user who is not the owning user, container permissions are still required. Learn more here.

These enhancements significantly improve the management and usability of Azure Blob Storage SFTP by providing more granular access control over the container model and extending customer options. Please reach out to blobsftp@microsoft.com with feedback about SFTP for Azure Blob Storage. We look forward to your continued support as we strive to deliver the best possible solutions for your needs.

Building an AI-Powered ESG Consultant Using Azure AI Services: A Case Study
In today's corporate landscape, Environmental, Social, and Governance (ESG) compliance has become increasingly important to stakeholders. To address the challenges of analyzing vast amounts of ESG data efficiently, a comprehensive AI-powered solution called ESGai has been developed. This blog explores how Azure AI services were leveraged to create a sophisticated ESG consultant for publicly listed companies.

https://youtu.be/5-oBdge6Q78?si=Vb9aHx79xk3VGYAh

The Challenge: Making Sense of Complex ESG Data

Organizations face significant challenges when analyzing ESG compliance data. Manual analysis is time-consuming, prone to errors, and difficult to scale. ESGai was designed to address these pain points by creating an AI-powered virtual consultant that provides detailed insights based on publicly available ESG data.

Solution Architecture: The Three-Agent System

ESGai implements a sophisticated three-agent architecture, all powered by Azure's AI capabilities:

1. Manager Agent: breaks down complex user queries into manageable sub-questions containing specific keywords that facilitate vector search retrieval. The system prompt includes generalized document headers from the vector database for context.
2. Worker Agent: processes the sub-questions generated by the Manager, connects to the vector database to retrieve relevant text chunks, and answers the sub-questions. Results are stored in Cosmos DB for later use.
3. Director Agent: consolidates the answers from the Worker agent into a comprehensive final response tailored to the user's original query.

It's important to note that while conceptually there are three agents, the Worker is actually a single agent that gets called multiple times, once for each sub-question generated by the Manager.
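The Manager → Worker (called once per sub-question) → Director flow described above can be sketched as plain functions. The LLM calls and vector search are stubbed out here; the function names, prompts, and sub-question format are hypothetical, not ESGai's actual code.

```python
# Minimal orchestration sketch of the three-agent pattern.
def manager(query):
    # Stub for: a GPT-4o call that decomposes the query into
    # keyword-rich sub-questions suited to vector retrieval.
    return [f"{query} [emissions keywords]", f"{query} [governance keywords]"]

def worker(sub_question):
    # Stub for: vector-search the ESG index, then answer from the
    # retrieved chunks (results would also be persisted to Cosmos DB).
    return f"answer({sub_question})"

def director(query, answers):
    # Stub for: consolidate the partial answers into one response.
    return f"Summary for '{query}': " + "; ".join(answers)

query = "How does Contoso score on ESG?"
subs = manager(query)
answers = [worker(s) for s in subs]   # the single Worker agent, called per sub-question
final = director(query, answers)
print(final)
```

The key design point the sketch preserves is that only the Director sees the original query together with every Worker answer, so the final response stays anchored to what the user actually asked.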
Current Implementation State

The current MVP implementation has several limitations that are planned for expansion:

- Limited company coverage: the vector database currently stores data for only 2 companies, with 3 documents per company (Sustainability Report, XBRL, and BRSR).
- Single model deployment: only one GPT-4o model is currently deployed to handle all agent functions.
- Basic storage structure: the Blob container has a simple structure with a single directory. While Azure Blob Storage doesn't natively support hierarchical folders, the team plans to implement virtual folders in the future.
- Free tier limitations: due to funding constraints, the AI Search service is using the free tier, which limits vector data storage to 50 MB.
- Simplified vector database: the current index stores all 6 files (3 documents × 2 companies) in a single vector database without filtering capabilities or a schema definition.

Azure Services Powering ESGai

The implementation of ESGai leverages multiple Azure services for a robust and scalable architecture:

- Azure AI Services: provides pre-built APIs, SDKs, and services that incorporate AI capabilities without requiring extensive machine learning expertise, including access to 62 pre-trained models for chat completions through the AI Foundry portal.
- Azure OpenAI: hosts the GPT-4o model for generating responses and the Ada embedding model for vectorization. The service combines OpenAI's advanced language models with Azure's security and enterprise features.
- Azure AI Foundry: serves as an integrated platform for developing, deploying, and governing generative AI applications. It offers a centralized management centre that consolidates subscription information, connected resources, access privileges, and usage quotas.
- Azure AI Search (formerly Cognitive Search): provides both full-text and vector search capabilities using the OpenAI ada-002 embedding model for vectorization.
It's configured with hybrid search algorithms (BM25 RRF) for optimal chunk ranking.

- Azure Storage Services: utilizes Blob Storage for storing PDFs, Business Responsibility Sustainability Reports (BRSRs), and other essential documents. It integrates seamlessly with AI Search using indexers to track database changes.
- Cosmos DB: employs MongoDB APIs within Cosmos DB as a NoSQL database for storing chat history between agents and users.
- Azure App Services: hosts the web application using a B3-tier plan optimized for cost efficiency, with GitHub Actions integrated for continuous deployment.

Project Evolution: From Concept to Deployment

The development of ESGai followed a structured approach through several phases:

Phase 1: Data Cleaning
- Extracted specific KPIs from XML/XBRL datasets and BRSR reports containing ESG data for 1,000 listed companies.
- Cleaned and standardized data to ensure consistency and accuracy.

Phase 2: RAG Framework Development
- Implemented Retrieval-Augmented Generation (RAG) to enhance responses by dynamically fetching relevant information.
- Created a workflow that includes query processing, data retrieval, and response generation.

Phase 3: Initial Deployment
- Deployed models locally using Docker and n8n automation tools for testing.
- Identified the need for more scalable web services.

Phase 4: Transition to Azure Services
- Migrated automation workflows from n8n to Azure AI Foundry services.
- Leveraged Azure's comprehensive suite of AI services, storage solutions, and app hosting capabilities.

Technical Implementation Details

Model configurations. The GPT model is configured with:

- Model version: 2024-11-20
- Temperature: 0.7
- Max response tokens: 800
- Past messages: 10
- Top-p: 0.95
- Frequency/presence penalties: 0

The embedding model uses OpenAI text-embedding-ada-002 with 1536 dimensions and hybrid semantic search (BM25 RRF) algorithms.
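For reference, the deployment settings listed above can be collected into the keyword-argument shape an OpenAI-style chat-completions call typically accepts. This is a plain dict shown for auditability; no service call is made, and the exact parameter names in a given SDK version may differ.

```python
# The article's reported GPT-4o deployment settings, as call parameters.
chat_settings = {
    "model": "gpt-4o",        # model version 2024-11-20
    "temperature": 0.7,
    "max_tokens": 800,        # "Max Response Token"
    "top_p": 0.95,
    "frequency_penalty": 0,
    "presence_penalty": 0,
}
PAST_MESSAGES = 10            # conversation-history window sent per request
EMBEDDING_DIMENSIONS = 1536   # text-embedding-ada-002 output size

print(chat_settings["temperature"], PAST_MESSAGES, EMBEDDING_DIMENSIONS)
```

Keeping the settings in one audited structure like this makes it easy to reuse them across the Manager, Worker, and Director prompts against a single deployment.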
Cost Analysis and Efficiency

A detailed cost breakdown reveals:

- App Server: $390-400
- AI Search: $5 per query
- RAG query processing: $4.76 per query

Agent-specific costs per query:

- Manager: $0.05 (30 input tokens, 210 output tokens)
- Worker: $3.71 (1500 input tokens, 1500 output tokens)
- Director: $1.00 (600 input tokens, 600 output tokens)

Challenges and Solutions

The team faced several challenges during implementation:

- Quota limitations: initial deployments encountered token quota restrictions, which were resolved through Azure support requests (typically granted within 24 hours).
- Cost optimization: high costs associated with vectorization required careful monitoring. The team addressed this by shutting down unused services and deploying on services with free tiers.
- Integration issues: GitHub Actions raised errors during deployment, which were resolved using GitHub's App Service Build Service.
- Azure UI complexity: the team noted that Azure AI service naming conventions were sometimes confusing, as the same name is used for both parent and child resources.
- Free tier constraints: the AI Search free tier's 50 MB limit for vector data storage restricts the amount of company information that can be included in the current implementation.
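As a quick consistency check on the figures above, the per-agent costs sum exactly to the quoted $4.76 RAG query-processing cost:

```python
# Per-query agent costs as reported in the breakdown above.
agent_costs = {"manager": 0.05, "worker": 3.71, "director": 1.00}

# Rounding guards against binary floating-point representation error.
rag_cost_per_query = round(sum(agent_costs.values()), 2)
print(rag_cost_per_query)  # 4.76
```

The Worker dominates the per-query cost, which is why the roadmap items on optimizing token usage and cheaper embedding models target it first.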
Future Roadmap

The current implementation is an MVP with several areas for expansion:

- Expand the database to include more publicly available sustainability reports beyond the current two companies.
- Optimize token usage by refining query handling processes.
- Research alternative embedding models to reduce costs while maintaining accuracy.
- Implement a more structured storage system with virtual folders in Blob Storage.
- Upgrade from the free tier of AI Search to support larger data volumes.
- Develop a proper schema for the vector database to enable filtering and more targeted searches.
- Scale to multiple GPT model deployments for improved performance and redundancy.

Conclusion

ESGai demonstrates how advanced AI techniques like Retrieval-Augmented Generation can transform data-intensive domains such as ESG consulting. By leveraging Azure's comprehensive suite of AI services alongside a robust agent-based architecture, this solution provides users with actionable insights while maintaining scalability and cost efficiency.