SCVMM R2 RC Features

Since releasing VMM R2 Beta, we've been working on an update to R2, and we're readying an RC version that will ship in early June (VMM TAP customers will get an early preview later this month). I wanted to give a heads-up on some new features we've added since Beta that will show up in the RC release. Most of these features are the result of feedback we got from customers and partners. We do listen to your feedback, so please keep it coming. Without further ado, here are the features:
Quick Storage Migration

We've heard from customers as well as the field about the need to migrate the storage of a running VM. This is especially relevant as customers move away from their existing one-VM-per-LUN deployments and consolidate their VMs onto a single CSV (Cluster Shared Volume) LUN when they upgrade to Windows Server 2008 R2.
With VMM R2, we've added a capability we call "Quick Storage Migration". This feature enables migration of a VM's storage, both within the same host and across hosts, while the VM is running, with minimal downtime. The downtime depends on the amount of activity going on in the VM at the time of migration; our tests have shown typical downtimes of less than two minutes.
We've also added support for VMware Storage VMotion, which allows the storage of a VM to be transferred with no downtime while the VM remains on the same host.
Queuing of Live migrations
While live migration is the much-awaited new feature in Windows Server 2008 R2, it does come with a limitation: a host can participate in only one live migration at any given time, whether as source or destination. This means the user has to wait for a live migration to complete before attempting another one.
In VMM R2, we've added the capability to detect the condition where a live migration fails because another live migration is in progress, queue the request in the background, and retry it after a period of time. The retry intervals are exponentially backed off to avoid overloading the system, and the retries are capped at a maximum time period (15 minutes). This lets users kick off multiple live migrations without needing to keep track of other live migrations happening within the cluster; VMM R2 automatically does the queuing and retries in the background.
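To illustrate the scheduling behavior described above, here's a rough PowerShell sketch of a retry loop with exponential back-off capped at 15 minutes. This is illustrative only; VMM R2 performs this queuing internally, and `Invoke-LiveMigrationAttempt` and the interval values are hypothetical placeholders, not VMM APIs.

```powershell
# Sketch of exponential back-off with a capped total wait.
# Invoke-LiveMigrationAttempt is a hypothetical placeholder.
$delay   = 30          # initial retry interval in seconds (hypothetical value)
$elapsed = 0
$maxWait = 15 * 60     # retries are capped at a 15-minute window

while ($elapsed -lt $maxWait) {
    $succeeded = Invoke-LiveMigrationAttempt
    if ($succeeded) { break }

    Start-Sleep -Seconds $delay   # wait before retrying
    $elapsed += $delay
    $delay   *= 2                 # exponential back-off between attempts
}
```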
Deploying a new VM without the network copy

This feature is again in response to customer and field requests. In VMM 2008, the only way to deploy a new VM is to copy the VHD from the library to the host over the network using BITS. Depending on the size of the VHD and the available bandwidth, this can take several minutes or even hours. We heard from many customers that they have sophisticated SAN technologies that enable them to clone a LUN containing the VHD and present it to the host, but they still want to use VMM's templates so that the OS customization and IC installation can be done. In other words, they wanted New-VM without the network copy, which is exactly what we did in R2. You can now create a template that includes the OS answer file and references a dummy VHD that is not used. Then, using PowerShell, you can run New-VM and specify the path to the VHD using a new switch, -UseLocalVirtualHardDisk. (We didn't have enough time to add UI support, so this feature is command-line only; the power users interested in this feature would most likely use scripting to mass-deploy VMs anyway.) Here's a sample script:
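The following is a minimal sketch of such a script. The server, host, template, VM names, and paths are placeholders, and the exact parameter set of New-VM may differ in your build; only the -UseLocalVirtualHardDisk switch is the new piece described above.

```powershell
# Sketch only: names and paths are placeholders; parameter details may vary.
Get-VMMServer -ComputerName "vmmserver.contoso.com"

# Template that carries the OS answer file and references the dummy VHD
$template = Get-Template | where { $_.Name -eq "W2K8R2Template" }
$vmhost   = Get-VMHost -ComputerName "host01.contoso.com"

# The real VHD was already placed on the host by a SAN-side LUN clone;
# -UseLocalVirtualHardDisk tells VMM to use it and skip the BITS network copy.
New-VM -Template $template -Name "VM01" -VMHost $vmhost `
       -Path "C:\ClusterStorage\Volume1\VM01" -UseLocalVirtualHardDisk
```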
Host compatibility checks

VM migration requires the host hardware to be compatible. This includes things like CPU features, enlightenment parity, etc. In VMM R2, we've added deep compatibility checks using the Hyper-V and VMware compatibility-check APIs. This lets users verify that a VM is compatible with the destination host, instead of performing the migration and only then finding out that the VM cannot start on the host.
A related feature makes a VM more compatible: a per-VM setting that turns off certain CPU features in the VM so that it becomes compatible with more hosts. This is a trade-off between using the advanced CPU features of the host and making the VM more portable for migration. The setting requires that the VM be restarted to take effect.
Support for 3rd party CFS
There are companies that build clustered file systems that are functionally similar to CSV, in that they enable multiple hosts to have shared access to a disk resource. In VMM R2, we support such file systems by detecting that a disk is a CFS disk and allowing multiple VMs to be deployed per LUN. This enables customers who have deployed such file systems (Melio from Sanbolic is one we've tested with) to take advantage of this new capability.
Support for Veritas Volume Manager
We've also added support for Veritas Volume Manager, which enables VMM R2 to recognize a Veritas Volume Manager disk as a cluster disk resource.
As you can see, there's a ton of new stuff coming in the RC, and this list doesn't even include the features we already shipped in Beta.