Take Data Management to the next level with Silk Software-Defined Azure Storage
Note: This article is co-authored by our partner Silk.

In today’s data-driven world, every enterprise is under pressure to make smarter decisions faster. Whether you're running production databases, training machine learning models, performing advanced analytics, or building customer-facing applications, your data needs to be more agile, secure, and readily accessible than ever before. That’s where Silk’s software-defined cloud storage platform on Microsoft Azure comes into play, bringing performance, resiliency, and intelligence to data management across the cloud. With the recent addition of Silk Echo, you can now supercharge your Copy Data Management (CDM) strategy to ensure your data isn’t just protected, it’s available instantly for any purpose: a true strategic asset.

Transforming Azure IaaS with Silk's Platform

Microsoft Azure offers a rich ecosystem of services to support every stage of your cloud journey, and when paired with Silk, customers gain a game-changing storage and data-services layer purpose-built for performance-intensive workloads. Silk’s software-defined storage platform runs on Azure infrastructure as a high-performance data layer between Azure compute and native storage. It works by orchestrating redundant sets of resources as close to the database compute as possible, aggregating and accelerating the native capabilities of Azure so that databases can reach the maximum physical limits of the underlying hardware. Silk makes use of Azure's storage-optimized L-series VMs and NVMe media, ensuring the data is always quickly available and able to withstand multiple failures using erasure coding.

Silk is designed to address common challenges customers face when migrating relational databases, such as SQL Server, Oracle, and DB2, to the cloud:

- Performance Bottlenecks: On-prem workloads often rely on ultra-low latency, high-throughput storage systems with features that are difficult to replicate in the cloud.
- Data Copy Sprawl: Multiple non-production environments (dev, test, QA, analytics) mean many redundant data copies, leading to storage inefficiencies.
- Operational Overhead: Managing snapshots, backups, and refreshes across environments consumes time and resources.

Silk changes the game with:

- Extreme performance in the Azure cloud: up to 34 GB/s of throughput from a single VM. The combination of Silk and Azure provides a unique cost/performance balance through a very low latency software-defined cloud storage platform and the sharing of Azure resources.
- Inline data deduplication and compression for optimized resource utilization.
- Autonomous, fully integrated, non-disruptive, zero-cost snapshots and clones for effortless environment refreshes.
- Multi-zone resilience and no single point of failure.

This makes it easier than ever to lift and shift critical applications to Azure, with the confidence of consistent performance, uptime, and flexibility.

Elevating CDM with Silk Echo for AI

While Silk’s core platform solves performance and efficiency challenges, Silk Echo for AI introduces an intelligent, AI-powered layer that revolutionizes how organizations manage and leverage their data across the enterprise. At its core, Silk Echo for AI offers next-generation Copy Data Management capabilities that empower IT teams to accelerate digital initiatives, reduce costs, and maximize the value of every data copy.

Key Benefits of Silk Echo for AI

- Smarter Data Copying and Cloning: Silk Echo leverages AI to understand data access patterns and recommends the optimal strategy for creating and managing data copies. Instead of manually managing snapshots, you can automate the entire workflow, ensuring the right data is in the right place at the right time.
- Instant, Space-Efficient Clones: Using Silk’s advanced snapshot and cloning engine, Echo creates fully functional clones in seconds, consuming minimal additional storage resources.
Teams can spin up dev/test environments instantly, accelerating release cycles and experimentation.
- Cross-Environment Data Consistency: Echo ensures consistency across copies, whether you're cloning for testing, backup, or analytics. With AI-driven monitoring, it can detect drift between environments and recommend synchronizations.
- Policy-Based Lifecycle Management: Define policies for how long data copies should live, when to refresh them, and who has access. Echo automates the enforcement of these policies, reducing human error and ensuring compliance.
- Optimized Resource Consumption: Silk Echo minimizes redundant data storage through smart deduplication, compression, and AI-driven provisioning, resulting in cost savings of 50% or more across large-scale environments.
- Enablement for AI/ML Workflows: For data science teams, Silk Echo provides curated, up-to-date data clones without impacting production environments, essential for model training, experimentation, and validation.

Real-World Use Case: Streamlining Dev/Test and AI Pipelines

Consider Sentara Health, a healthcare provider migrating their EHR and SQL Server workloads to Azure. Before Silk, environment refreshes were time-consuming, often taking days or even weeks. With Silk Echo for AI, the same tasks will be completed in minutes. Development teams will have self-service access to fresh clones of production data, enabling faster iteration and better testing outcomes. Meanwhile, their data science team can leverage Echo’s snapshot automation to feed AI models with real-time, production-grade data clones without risking downtime or data corruption. All of this runs seamlessly on Azure, with Silk ensuring high performance and resilience at every step.

Joint Value of Silk and Microsoft

Together, Silk and Microsoft are unlocking a new level of agility and intelligence for enterprise data management:

- Data-as-a-Service: Give every team, whether DevOps, DataOps, or AI/ML, access to the data they need, when they need it.
Free snapshots democratize data, so up-to-date copies can be made quickly for any team member who can benefit from them.
- AI-Ready Database Infrastructure: Your infrastructure evolves from a reactive model that addresses problems as they arise (i.e., triggering responses on alerts) to a predictive model that uses AI/ML to learn patterns of behavior, forecast issues, and mitigate them before they occur. Silk enables real-time AI inferencing for business-critical agents that require access to up-to-date operational data.
- Reduced Costs, Improved ROI: Storage optimization, reduced manual overhead, and faster time to value, backed by Azure’s scalability and Silk’s performance.
- Accelerated Cloud Migrations: Achieve the enhanced scalability and flexibility of a cloud migration for your Tier 1 databases without refactoring.

Get Started

Ready to take your data management strategy to the next level? Explore how Silk’s software-defined storage and Silk Echo for AI can accelerate your transformation on Microsoft Azure. Whether you're modernizing legacy systems, building AI-driven applications, or simply trying to get more value from your cloud investments, Silk and Microsoft are here to help. By embracing the power of Silk’s software-defined storage and Silk Echo, organizations can finally make their data in the cloud work smarter, not harder. Contact Alliances@silk.us for a deeper dive on Silk!

Azure Native Pure Storage Cloud brings the best of Pure and Azure to our customers
Pure Storage Cloud is the result of a tightly coupled integration effort between the Pure and Azure teams that brings Pure’s industry-leading advanced data services to our customers. Built on rock-solid Azure infrastructure, Pure makes Azure even better!

Announcing General Availability of Next-generation Azure Data Box Devices
Today, we’re excited to announce the General Availability of Azure Data Box 120 and Azure Data Box 525, our next-generation compact, NVMe-based Data Box devices. These devices are currently available for customers to order in the US, US Gov, Canada, EU, and UK Azure regions, with broader availability coming soon. Since the preview announcement at Ignite '24, we have successfully ingested petabytes of data across multiple orders, serving customers in various industry verticals. Customers have expressed delight over the reliability and efficiency of the new devices, with up to 10x improvement in data transfer rates, highlighting them as a valuable and essential asset for large-scale data migration projects.

These new device offerings reflect insights gained from working with our customers over the years and understanding their evolving data transfer needs. They incorporate several improvements to accelerate offline data transfers to Azure, including:

- Fast copy: Built with NVMe drives for high-speed transfers, improved reliability, and support for faster network connections
- Ease of use: Larger capacity offering (525 TB) in a compact form factor for easy handling
- Resilient: Ruggedized devices built to withstand rough conditions during transport
- Secure: Enhanced physical, hardware, and software security features
- Broader availability: Presence planned in more Azure regions, meeting local compliance standards and regulations

What’s new?

Improved Speed & Efficiency

- NVMe-based devices offer faster data transfer rates, providing a 10x improvement in data transfer speeds to the device compared to previous-generation devices. With a dataset comprised mostly of large (TB-sized) files, on average half a petabyte can be copied to the device in under two days.
- High-speed transfers to Azure, with data upload up to 5x faster for medium to large files, reducing the lead time for your data to become accessible in the Azure cloud.
- Improved networking with support for up to 100 GbE connections, compared to 10 GbE on the older generation of devices.
- Two options with usable capacity of 120 TB and 525 TB in a compact form factor meeting OSHA requirements.
- Devices ship next-day air in most regions.

Learn more about the performance improvements on Data Box 120 and Data Box 525.

Enhanced Security

The new devices come with several new physical, hardware, and software security enhancements. This is in addition to the built-in Azure security baseline for Data Box and the Data Box service security measures currently supported by the service.

- Secure boot functionality with hardware root of trust and Trusted Platform Module (TPM) 2.0.
- Custom tamper-proof screws and a built-in intrusion detection system to detect unauthorized device access.
- AES 256-bit BitLocker software encryption for data at rest is currently available. Hardware encryption via the RAID controller, which will be enabled by default on these devices, is coming soon. Once available, customers can enable double encryption through both software and hardware encryption to meet their sensitive data transfer requirements.
- These ISTA 6A-compliant devices are built to withstand rough conditions during shipment while keeping both the device and your data safe and intact.

Learn more about the enhanced security features on Data Box 120 and Data Box 525.

Broader Azure region coverage

A recurring request from our customers has been wider regional availability of higher-capacity devices to accelerate large migrations. We’re happy to share that Azure Data Box 525 will be available across the US, US Gov, EU, UK, and Canada, with broader presence in EMEA and APAC regions coming soon. This marks a significant improvement in the availability of a large-capacity device compared to the current Data Box Heavy, which is available only in the US and Europe.
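To put the throughput figures above in rough perspective, the short sketch below estimates offline copy time from dataset size and a sustained line rate. The ~25 Gb/s sustained rate is an assumption chosen for the example (a fraction of the 100 GbE link), not a published device specification.

```python
def copy_days(data_tb: float, sustained_gbps: float) -> float:
    """Days needed to copy data_tb terabytes at a sustained rate of
    sustained_gbps gigabits per second (decimal units throughout)."""
    total_bytes = data_tb * 10**12
    bytes_per_second = sustained_gbps * 10**9 / 8
    return total_bytes / bytes_per_second / 86400

# Assuming ~25 Gb/s sustained on a 100 GbE link (hypothetical utilization),
# half a petabyte fits in under two days, in line with the figure quoted above:
print(round(copy_days(500, 25), 2))  # ≈ 1.85
```

At the older generation's 10 GbE line rate, even perfect utilization would need more than four and a half days for the same half petabyte, which is what makes the networking upgrade material for large migrations.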
What our customers have to say

For the last several months, we’ve been working directly with customers of all industries and sizes leveraging the next-generation devices for their data migration needs. Customers love the larger capacity with form-factor familiarity, seamless setup, and faster copy.

“We utilized Azure Data Box for a bulk migration of Unix archive data. The data, originating from IBM Spectrum Protect, underwent pre-processing before being transferred to Azure blobs via the NFS v4 protocol. This offline migration solution enabled us to efficiently manage our large-scale data transfer needs, ensuring a seamless transition to the Azure cloud. Azure Data Box proved to be an indispensable tool in handling our specialized migration scenario, offering a reliable and efficient method for data transfer.” – ST Microelectronics Backup & Storage team

“This new offering brings significant advantages, particularly by simplifying our internal processes. With deployments ranging from hundreds of terabytes to even petabytes, we previously relied on multiple regular Data Box devices—or occasionally Data Box Heavy devices—which required extensive operational effort. The new solution offers sizes better aligned with our needs, allowing us to achieve optimal results with fewer logistical steps. Additionally, the latest generation is faster and provides more connectivity options at data centre premises, enhancing both efficiency and flexibility for large-scale data transfers.” – Lukasz Konarzewski, Senior Data Architect, Commvault

“We have had a positive experience overall with the new Data Box devices to move our data to Azure Blob storage. The devices offer easy plug and play installation, detailed documentation especially for the security features and good data copy performance.
We would definitely consider using it again for future large data migration projects.” – Bas Boeijink, Cloud Engineer, Eurofiber Cloud Infra

Upcoming changes to older SKU availability

Note that in regions where the next-gen devices are available, new orders for Data Box 80 TB and Data Box Heavy devices cannot be placed after May 31, 2025. We will, however, continue to process and support all existing orders.

Order your device today!

The devices are currently available for customers to order in the US, Canada, EU, UK, and US Gov Azure regions. We will continue to expand to more regions in the upcoming months. Azure Data Box provides customers with one of the most cost-effective solutions for data migration, offering competitive pricing with the lowest cost per TB among offline data transfer solutions. You can learn more about the pricing across various regions by visiting our pricing page. You can use the Azure portal to select the SKU suitable for your migration needs and place the order. Learn more about the all-new Data Box devices here.

We are committed to continuing to deliver innovative solutions that lower the barrier for bringing data to Azure. Your feedback is important to us. Tell us what you think about the new Azure Data Box devices by writing to us at DataBoxPM@microsoft.com – we can’t wait to hear from you.

Introducing new Storage Capabilities to Copilot in Azure (Preview)
We are excited to announce that customers can now take advantage of new Copilot in Azure (Public Preview) capabilities for Storage services. Copilot in Azure is an intelligent assistant designed to help you design, operate, optimize, and troubleshoot your Azure environment. With the new Storage capabilities, Copilot in Azure can analyze your storage services' metadata and logs to streamline and enhance tasks such as building cloud solutions and managing, operating, supporting, and troubleshooting cloud applications in Azure Storage.

Troubleshooting Disk Performance with Copilot in Azure

Available now in Public Preview

Azure offers a rich variety of Disk metrics, providing insights into the performance of your Virtual Machine (VM) and Disks. These metrics help you diagnose performance issues when your application requires higher performance than what you have configured for the VM and Disks. Whether you are looking to set up and validate a new environment in Azure or are facing issues with your existing setup, Copilot further enhances your experience by analyzing these metrics to troubleshoot performance issues on your behalf, along with providing guided recommendations for optimizing VM and disk performance to improve the experience for your application.

To troubleshoot performance issues with Copilot in Azure, navigate to Copilot in the Azure Portal and enter a prompt related to VM-Disk performance, such as “Why is my Disk slow?”. Copilot will then ask you to specify the VM and Disk(s) experiencing performance issues, along with the relevant time period. Using this information, Copilot analyzes your current VM-Disk configuration and performance metrics to identify whether your application is experiencing slowness due to reaching the configured performance limits of the VM or Disk.
It will then provide a summary of the analysis and a set of recommended actions to resolve your performance issue, which you can apply directly in the Portal through Copilot’s guided recommendations. By leveraging the power of Copilot, you can efficiently diagnose and address performance issues within your Azure Disks environment. For more information on the Disk Performance Copilot capability, refer to the Public Documentation.

Managing Storage Lifecycle Management with Copilot in Azure

Available now in Public Preview

With Copilot in Azure, we're providing a more efficient way to manage and optimize your storage costs. Copilot in Azure allows you to save on costs by tiering blobs that haven't been accessed or modified for a while. In some cases, you might even decide to delete those blobs. With Copilot in Azure, you can automate lifecycle management (LCM) rule authoring, enabling you to perform bulk actions on Storage accounts through a natural language interface. This means no more manual rule creation or risk of misconfiguration!

To use this capability, simply enter a prompt related to cost management, such as “Help me reduce my storage account costs” or “I want to lower my storage costs.” Copilot will then guide you through authoring an LCM rule to help you achieve your goals. For more information on the Storage Lifecycle Management Copilot capability, refer to the Public Documentation.

Troubleshoot File Sync errors with Copilot in Azure

Available now in Public Preview

With Copilot in Azure, you can now quickly troubleshoot and resolve common Azure File Sync issues, such as network misconfiguration, incorrect RBAC permissions, or accidental file share deletions. Copilot in Azure detects errors and misconfigurations, provides exact steps to fix them, and can act on your behalf to resolve common errors.
To use this capability, simply enter a prompt related to File Sync, such as “Why are my files not syncing?” or “Help me troubleshoot error 0x80C83096.” With the File Sync error troubleshooting skill, Copilot in Azure acts as your intelligent assistant to keep your File Sync environment running smoothly. This capability not only saves you time by cutting down on troubleshooting effort but also empowers you to resolve issues confidently and independently. For more information on the File Sync Copilot capability, refer to the Public Documentation.

Protect Azure workloads with VM level consistency using Agentless Crash-Consistent Restore Points!
Today we are happy to announce public preview support for multi-disk crash consistency mode in Virtual Machine (VM) restore points. A crash-consistent VM restore point is an agentless solution that stores the VM configuration and point-in-time, write-order-consistent snapshots of all managed disks attached to a VM. This is the same as the state of data in the VM after a power outage or a crash.

VM restore points, announced in July ’22, enable reliable restoration of disks and VMs after data loss, corruption, disaster recovery, and infrastructure maintenance incidents. Using VM restore points, Azure Backup and ISV partners such as Commvault and Veritas offer BCDR solutions for customers. VM restore points are incremental: the first VM restore point stores a full copy of your data, and each successive restore point stores an incremental copy, i.e., only the changes to your disks. The incremental design lets you benefit from the data protection of frequent backups while minimizing storage costs. You can also use the built-in copy functionality to copy your VM restore points to any region of your choice for protection from regional failures.

Key Benefits of Crash Consistent Restore Points

- Agentless solution: Using agents for VM restore points is considered a security, compliance, and management overhead by some partners and customers. Crash-consistent restore points take the multi-disk consistent snapshots directly from the host machine, removing the overhead of an agent inside the VM.
- OS-agnostic support: As an agentless solution, there is no dependency on the guest operating system (OS). All Windows and Linux OS types are supported by crash-consistent restore points. Configurations previously unsupported in application consistency mode, such as older Linux OS versions, 32-bit OS systems, and Windows VMs with ARM64, are now supported with crash consistency mode.
- High-frequency support: Crash-consistent restore points support a 1-hour frequency, enabling lower RPO for applications running on Azure VMs.
- VM-level consistency: Prior to VM restore points, customers and partners had to use managed disk snapshots, which are taken per disk. Because of this, consistency at the VM level could not be guaranteed, and the snapshots were cumbersome to manage.

Resiliency solutions with crash-consistent VM restore points

Azure Backup: Providing first-class backup support using VM restore points

Azure Virtual Machine Backup enables you to create an enhanced policy to take multiple snapshots a day, allowing you to protect your virtual machines with a Recovery Point Objective (RPO) as low as 4 hours. Azure Backup now supports crash-consistent restore points (in private preview). Please enroll here to use the capability.

“Azure Backup will enable customers to protect a wider set of Virtual Machines (VM) running Linux distributions that are not on the current support matrix as well as VMs that do not use Azure extensions using crash consistent restore points.” – Aravindan Gopalakrishnan, Principal PDM Manager, Azure Backup, Microsoft

Zerto – An HPE Company: Delivering consistency efficiently with crash-consistent VM restore points

Zerto, an enterprise-class business continuity and disaster recovery company, is one of the first ISV partners to integrate the new crash-consistent snapshot capability into their product. This integration will enable whole-VM protection with crash-consistent snapshots across multiple volumes.

“Multi-volume consistency protection is one of the most sought after features by Azure Customers.” – Shannon Snowden, Senior Product Manager, Zerto, an HPE Company

“Multi-volume virtual machine (VM) level crash consistency is critical in disaster recovery protection.
With the new Azure crash consistent snapshot capability, it enables Zerto to create VM level crash-consistent restore points using underlying snapshots.” – Sandra Biton, Engineering Group Manager, Zerto, an HPE Company

Zerto 10 introduces multi-disk consistency for Azure VMs, which protects VMs to, from, and within Azure with complete disk consistency. Moving away from snapshot-based replication, multi-disk consistency for Azure VMs now leverages a new restore point API, offering an easier and more efficient way to manage replication and recovery operations.

Get Started

Click here to enroll in the public preview of crash-consistent VM restore points, and learn more about VM restore points. Please share your feedback or questions in the comments section below.
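As a back-of-the-envelope illustration of the incremental design described above (a full copy for the first restore point, then only changed data for each successive point), the sketch below compares the storage consumed by an incremental chain against keeping full copies. The disk size, daily churn rate, and point count are hypothetical values chosen for the example, not figures from this article.

```python
def chain_storage_tb(disk_tb: float, churn_fraction: float, points: int) -> float:
    """Approximate storage for an incremental restore point chain:
    the first point stores a full copy of the disk; each later point
    stores only the data changed since the previous point."""
    return disk_tb + disk_tb * churn_fraction * (points - 1)

def full_copy_storage_tb(disk_tb: float, points: int) -> float:
    """Storage if every restore point kept a full copy instead."""
    return disk_tb * points

# Hypothetical example: a 10 TB disk, 2% daily churn, 30 daily restore points.
incremental = chain_storage_tb(10, 0.02, 30)   # 10 + 10*0.02*29 = 15.8 TB
full = full_copy_storage_tb(10, 30)            # 300 TB
```

Under these assumed numbers the incremental chain needs roughly 5% of the storage that thirty full copies would, which is the cost advantage the incremental design is meant to deliver; real consumption depends on actual churn and retention settings.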