What’s New in 2012 R2: IaaS Innovations
Published Sep 07 2018 10:09 PM
First published on CloudBlogs on Jul 31, 2013

Part 5 of a 9-part series. Today’s post has two sections; the second half is here.

One of the industry metrics that I follow closely every quarter is the sale of x86 servers around the world. I look at the trends around the purchase of server hardware (what is the growth rate, where is it being purchased, etc.) by country, by segment, and across at least ten other benchmarks that apply to the Windows Server and System Center business. Sorting through this data is key, so believe me when I say that I am an Excel expert – and I love the new self-service BI that has been built by the Excel and SQL teams!

Looking at where the servers have been purchased, where the highest levels of growth are occurring, and now with organizations looking at how they move to a Service Provider model – it is obvious that we are seeing the rise of the Service Provider and public cloud.

The move to a service provider model is one of the most significant shifts we are seeing in data centers around the world. This is occurring as two key developments are afoot: first, many organizations are moving to use Service Provider and public cloud capacity; and second, there is an internal shift within organizations toward a model wherein they provide Infrastructure-as-a-Service (IaaS) to their internal business units. This is all headed toward a model where enterprises have detailed reporting on the usage of that infrastructure, if not bill-back for the actual usage.

Back during the planning phase of 2012 R2, we carefully considered where to focus our investments for this release wave, and we chose to concentrate our efforts on enabling Service Providers to build out a highly available, highly scalable IaaS infrastructure on cost-effective hardware. With the innovations we have driven in storage, networking, and compute, we believe Service Providers can now build out an IaaS platform that enables them to deliver VMs at 50% of the cost of competitors. I repeat: 50%. The bulk of the savings comes from our storage innovations and the low costs of our licenses.

You might be asking yourself, “Why is Microsoft focused on Service Providers?  Isn’t that what all of us really are?  Isn’t the job that most of us have in building out infrastructure (whether that be in an internal private cloud, a service provider cloud, or a public cloud) all about delivering the infrastructure our ‘customers’ need to host the applications and services that run the organization?  And shouldn’t we be doing it in a way that offers the required SLA while relentlessly driving down the associated costs?” Even if you haven’t recently asked yourself a question this long, when you read today’s post (and tomorrow’s) think of yourself and your organization, and consider whether or not you think we are all Service Providers.

At the core of our investments in 2012 R2 is the belief that customers are going to be using multiple clouds, and they want those clouds to be consistent.

Consistency across clouds is key to enabling the flexibility and frictionless movement of applications across these clouds, and, if this consistency exists, applications can be developed once and then hosted in any cloud. This means consistency for the developer. If clouds are consistent, with the same management and operations tools easily used to operate these applications, that means consistency for the IT Pro.

It really all comes down to the friction-free movement of applications and VMs across clouds. Microsoft is unique in this regard; we are the only cloud vendor investing and innovating in public, private, and hosted clouds – with a promise of consistency (and no lock-in!) across all of them.

We are taking what we learn from our innovations in Windows Azure and delivering it through Windows Server, System Center, and the Windows Azure Pack for you to use in your data center. This enables us to innovate rapidly in the public cloud, battle-harden those innovations, and then deliver them to you to deploy. This is one of the ways in which we have been able to quicken our cadence and deliver the kind of value you see in these R2 releases. You’ll be able to see a number of areas where we are driving consistency across clouds in today’s post.

And speaking of today’s post – this IaaS topic will be published in two parts, with the second half appearing tomorrow morning.

In this first half of our two-part overview of the 2012 R2’s IaaS capabilities, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, examines the amazing infrastructure innovations delivered by Windows Server 2012 R2, System Center 2012 R2, and the new features in the Windows Azure Pack.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post. Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!

* * *

It’s hard to believe that only one year ago we were celebrating the release of Windows Server 2012, and System Center 2012 SP1 was in beta.

Both of these releases delivered the most innovation to date in a single server operating system release, and the question on everyone’s mind was, “What’s next?” There was another question, too: “What could we deliver in a year?”

The good news is that, as engineers, we love a challenge, and taking software projects like Windows Server and System Center and delivering a compelling set of end-to-end scenarios in just a year was a challenge we welcomed!

One of the things we noticed when we examined Windows Server 2012 and System Center 2012 SP1 was that, despite the great innovation, customers still had to stitch multiple components together in order to build an Infrastructure-as-a-Service (IaaS) offering.

IaaS was a critical area where we could make a difference for our customers, and it was an area of increasing importance to our enterprise customers, who need to internally operate in more of a Service Provider capacity (this includes consolidating data center resources and offering virtual machine (VM) rental to departments, in order to see not only an infrastructure spend benefit but also a way to cut operating costs). Additionally, the number of Service Providers moving from more traditional dedicated hosting models into the IaaS market is growing. This focus on delivering IaaS solutions for Service Providers (and enterprises that want to act as Service Providers for their users) became a rallying point across the Windows Server, System Center, and Windows Azure Pack (WAP) teams. As David Cross explained in his intro to this R2 series, we shifted our focus from features and components to delivering smooth end-to-end scenarios.

To deliver this value in the Windows Server 2012 R2 release, we focused on two important aspects of delivering IaaS:

  1. Continuing to drive innovation into the infrastructure itself to ensure network, compute, and storage are not only low-cost, but also easy to operate through rich integration with System Center.
  2. Delivering a delightful experience for the IaaS administrators and the tenant administrators using IaaS.

To be clear, by “tenant” we are referring to the customer of the Service Provider who is acquiring and using resources offered by the Service Provider.

The remainder of this blog post will analyze the specific scenarios we outlined in our planning process, and we’ll also detail how we worked through the integration and technical challenges to deliver these scenarios for the Windows Server 2012 R2 preview software that is currently available. As mentioned above, these investments are targeted specifically at Service Providers and enterprises that want to act as Service Providers for their users. For simplicity, we will refer to this collection of customers simply as Service Providers.

Low-Cost, Easy-to-Operate Infrastructure

In the cloud-first world, infrastructure plays an increasingly important role in the modern data center. Innovation at the infrastructure layer enables Service Providers to deliver higher levels of performance and availability as well as richer services – while remaining cost effective. Additionally, the focus on separating the infrastructure from the workloads and applications makes it easier to adopt new innovation and stay up to date. With a little upfront planning and design, the infrastructure can move forward at a faster pace than the workloads that run on it.

When we think about the infrastructure, we focus on three components:

  • Network
  • Compute
  • Storage


With Windows Server 2012 and System Center Virtual Machine Manager (SCVMM), we enabled new foundational scenarios from Continuous Availability (with in-box NIC Teaming) to building blocks for Software Defined Networking (with the extensible Hyper-V Virtual Switch and Hyper-V Network Virtualization [HNV]) to IP Address Management (IPAM). We designed these scenarios end-to-end, providing the infrastructure needed to enable them in Windows Server and the management and automation of these scenarios through SCVMM.

If you’d like to see this for yourself, I recommend taking a couple of minutes to download the System Center 2012 R2 and Windows Server 2012 R2 previews.

As we looked to build on this foundation, we spent time with customers (from enterprises, to Service Providers, to Microsoft’s own data center groups such as Azure and Bing) hoping to understand how they operate and what they needed to run their networks. As a result of this research, three main pieces of feedback emerged:

  1. Reduce networking Capital Expenditures (CapEx).
    Achieving reduced CapEx is not just about low cost networking equipment. It is equally important to enable customers to maximize utilization of their existing resources (in other words, make what they have work even better) and reduce the need for specialized hardware over time.
  2. Choice and flexibility matter.
    Customers want choice and flexibility in both the networking vendors they use and the cloud in which their workloads run. Service Providers and enterprises running their own infrastructure want the ability to plug and play between vendors without having to change tools, processes, or knowledge. Enterprises using either the cloud or hosted environments want the ability to deploy any workloads across any cloud.
  3. Network automation is the key.
    The most frequent piece of feedback we received was that network automation is key to making the network more flexible and easier to operate. For example, Azure regularly applies thousands of changes across its network – more than 50,000 network changes every day! Without automation, this would be impossible. Automation is required across the Service Provider’s network, the tenant’s network, and network infrastructure services (load balancing, firewall, etc.).

These learnings helped crystallize our core customer vision for networking in the R2 release:

Enable existing networks to become a pooled, automated resource with flexibility to move workloads across any cloud and to enable such agility alongside high performance and easy diagnosability.

This vision defined the scenario areas we targeted: Cloud Scale Performance and Diagnosability, Comprehensive Software-Defined-Networking (SDN) Solution, and Network Infrastructure Enhancements for the cloud. As shown in Figure 1 (see below), there are many features that further enhance our end-to-end capabilities in these areas.

Figure 1: Networking enhancements in the R2 wave

Scenario 1: Cloud Scale Performance and Diagnosability

Driving up performance enables customers to do more with their existing investments, instead of having to rely on new or specialized hardware to meet their scale needs. This in turn drives down capital expenses. With the Windows Server 2012 R2 release, Virtual RSS (vRSS) enables traffic-intensive workloads to scale up and optimally utilize high-speed NICs (10G), even from a VM. In addition, in order to improve diagnostics and thereby drive down operational expenses, we worked closely with Azure to design the ability to remotely collect packet captures and ETW traces, both as a live feed (with Microsoft Message Analyzer) and as an .etl file. This enables diagnostics without a need to log on to the console of the target computer (host or VM). The resulting model helps get at the needed data faster to isolate root-cause issues, and it enables more flexibility for remotely administered computers.
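As a sketch of that remote-capture workflow, the NetEventPacketCapture cmdlets that ship with Windows Server 2012 R2 can collect a packet capture on a remote host without logging on to its console. The computer name and file path below are hypothetical:

```powershell
# Open a remote CIM session to the target host (or VM) under investigation.
$cim = New-CimSession -ComputerName "HyperVHost01"

# Create a capture session on the target that writes an .etl file.
New-NetEventSession -Name "Diag01" -CaptureMode SaveToFile `
    -LocalFilePath "C:\Traces\Diag01.etl" -CimSession $cim

# Attach the packet-capture provider, then start and later stop the trace.
Add-NetEventPacketCaptureProvider -SessionName "Diag01" -CimSession $cim
Start-NetEventSession -Name "Diag01" -CimSession $cim
# ... reproduce the issue ...
Stop-NetEventSession -Name "Diag01" -CimSession $cim
```

The resulting .etl file can then be opened in Microsoft Message Analyzer, or the session can be switched to a live feed instead of a file.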

Scenario 2: Comprehensive SDN Solution

Today’s Virtual LAN (VLAN) networks tend to be inflexible, and they require high touch whenever network changes are required. In multi-tenant service provider environments, this setup reduces agility across a number of scenarios, such as onboarding new tenants, live migrating VMs, and applying new policies. With the 2012 R2 release, we advanced both of the key building blocks – the Hyper-V Virtual Switch and Hyper-V Network Virtualization. We have also filled a key gap by providing in-box Windows Server gateways to enable customers to flexibly span their workloads across multiple clouds. We also enabled basic management of physical switches (with standards-based schemas) using Windows PowerShell and SCVMM, thereby making it possible to automate certain diagnostics across the hosts and into the physical network. Finally, deployment and management is entirely automated via SCVMM, with tenant self-service via Windows Azure Pack.

Enhanced Network Virtualization

In Windows Server 2012, we shipped the first release of our network virtualization solution. It was highly scalable, standards-based and enabled scenarios such as:

  • Multiple virtual networks on the same physical network.
  • Live migration of VMs across layer 2 networks.
  • Bring your own network topology (including your own IP addresses).

The components that comprise our network virtualization solution include four important elements:

  1. Windows Azure Pack
    This provides a tenant-facing portal for creating virtual networks.
  2. System Center Virtual Machine Manager (SCVMM)
    This provides centralized management of the virtual networks.
  3. Hyper-V Network Virtualization
    This supports the data plane needed to virtualize network traffic.
  4. Third party gateways (including F5, Huawei, and Iron Networks)
    These provide the connection between the virtual and physical networks.

Though we have great partners (including F5, Huawei, and Iron Networks) shipping HNV gateways, we heard feedback from customers that we needed to ship an in-box gateway that reduced the amount of computing resources needed to perform these gateway capabilities, met the high availability needs for virtual networks, and was fully manageable by SCVMM. Addressing this customer feedback was one of our major focuses for R2.

In Windows Server, we added a multi-tenant in-box gateway that performs Site-to-Site VPN, Network Address Translation (NAT), and forwarding functions. The multi-tenant portion of the gateway enables Service Providers to reduce the amount of compute resources dedicated to providing the HNV gateway capabilities needed for network virtualization. To do this, we enabled multiple VM networks to use the same gateway VM.

Though this might sound like a pretty straightforward feature, it required some big changes across the networking stack, the virtual switch, and network virtualization. First, to support a multi-tenant gateway, we added the ability to support multiple routing domains within the same VM. This required changes in the Windows networking stack to compartmentalize the different virtual networks, providing tenant traffic isolation per compartment and allowing overlapping IP addresses.

To ensure availability for virtual networks, the key was to make the gateway highly available (because it provides the bridge out of the virtual network). We leveraged the existing host and guest clustering capabilities in Windows, which allowed us to build a highly available in-box gateway. In addition to supporting clustering in the gateway, we needed to expand the types of packets supported in the VM network, because this is required for the clustering heartbeat functionality. In Windows Server 2012 we only supported IP traffic, and we needed to expand this to include ARP and Duplicate Address Detection (DAD) packets. Making these changes also had the fortunate side effect of enabling new scenarios in the network virtualization platform, e.g. bring-your-own-DHCP, and host and guest clustering for highly available VMs within a VM network.
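As a rough sketch of how the multi-tenant gateway is driven from PowerShell, the 2012 R2 RemoteAccess module exposes per-tenant routing domains (compartments). The tenant names, destination address, and subnet values below are purely illustrative, and exact parameter combinations depend on the gateway's configuration:

```powershell
# On the gateway VM: enable an isolated routing domain (network
# compartment) per tenant; each gets its own routes and overlapping
# IP address space is allowed across compartments.
Enable-RemoteAccessRoutingDomain -Name "Contoso"  -Type All
Enable-RemoteAccessRoutingDomain -Name "Fabrikam" -Type All

# Add a site-to-site VPN interface scoped to one tenant's routing domain,
# bridging that tenant's virtual network to its on-premises site.
Add-VpnS2SInterface -Name "ContosoS2S" -RoutingDomain "Contoso" `
    -Protocol IKEv2 -Destination 131.107.0.10 `
    -IPv4Subnet @("10.1.0.0/16:100")
```

In practice, SCVMM drives this configuration for you through its gateway service template, so hand-running these cmdlets is mainly useful for learning and troubleshooting.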

Just adding an in-box gateway in Windows Server wasn’t enough. It was important for this to really light up when managed by SCVMM, which brings the automation and management to virtual networks and the gateways. Building on the existing capabilities of being able to create, modify, and remove VM networks, SCVMM makes it seamless to deploy the in-box gateway by shipping a service template. SCVMM also automatically configures the gateway and ensures that even as VMs move around in the data center, the communication in the VM network remains unbroken. This is the type of integration between compute and networking and Windows and SCVMM that provides the flexibility and automation needed by our customers.

Integrated physical and virtual switch management

Another common IT need which we now support is integrated physical and virtual switch management. We heard from customers that the management experience between virtual and physical is disjointed and causes operational issues, such as when VLAN configuration gets out of sync between the physical switch and the virtual switch. To solve this issue, and others like it, we delivered two things in R2.

First, we introduced new standards-based switch management. This allows admins to manage their switches (physical and virtual) through PowerShell using an industry-standard management schema. The standards-based schema allows admins to set and configure ports on the switch, set and configure VLANs, and much more. In addition, with Windows Server 2012 R2, we are extending the Windows Logo program to include network switches that implement this industry standard. Though using Windows PowerShell to manage switches and having a logo certification provide strong customer value on their own, we again want to light up scenarios across Windows and SCVMM to provide the greatest value. Second, we have added the ability in SCVMM to manage network switches. To go back to the example above where VLANs get out of sync between physical and virtual switches, we use the power of this standardized switch management interface plus SCVMM to monitor the VLAN configuration across both the virtual and physical switches, notify the admin if something in the VLAN configuration is out of sync, and allow the administrator to automatically fix the misconfiguration.
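For illustration, the standards-based switch management surfaces in Windows Server 2012 R2 as network-switch cmdlets driven over a CIM session. The switch address and VLAN values here are hypothetical, and exact cmdlet parameters vary by switch vendor and schema version:

```powershell
# Open a CIM session to a logo-certified physical top-of-rack switch
# that implements the industry-standard management schema.
$switch = New-CimSession -ComputerName "tor-switch-01"

# Inspect Ethernet ports and VLANs through the standard schema.
Get-NetworkSwitchEthernetPort -CimSession $switch
Get-NetworkSwitchVlan -CimSession $switch

# Create a VLAN so the physical switch matches the virtual switch
# configuration for a tenant network.
New-NetworkSwitchVlan -CimSession $switch -VlanId 120 -Name "TenantBlue"
```

Because both the physical and virtual sides are scriptable through the same interface, SCVMM can compare the two and surface (or fix) VLAN mismatches automatically.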

Scenario 3: Network Infrastructure Enhancements

The networks of virtualized data centers and cloud environments must be automated for agility, dynamically scalable, and able to enforce control over their administration. With the R2 release, IP Address Management (IPAM) implements several major enhancements, including streamlined and unified IP address space management of physical and virtual networks, as well as tighter integration with SCVMM. IPAM in Windows Server 2012 R2 offers granular and customizable role-based access control and delegated administration across multiple data centers. IPAM also provides a single console for monitoring and managing addressing and naming services across data centers – especially supporting administration of advanced capabilities around continuous availability of IP addressing (with DHCP failover), DHCP policies, filters, and so on. With the need for integration with other systems and automation, IPAM offers exhaustive Windows PowerShell support and is highly scalable with SQL Server support as its backend. There have also been improvements to the addressing and naming services themselves, such as DNS supporting per-zone query metrics for managed services, and DHCP policies adding support for FQDN-based policy to streamline DNS registration.
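As one small example of that PowerShell support, the IPAM cmdlets can be scripted for automated provisioning. The server name and address range below are hypothetical:

```powershell
# Run against the IPAM server from a management host.
Invoke-Command -ComputerName "IPAM01" -ScriptBlock {
    # List the managed IPv4 address ranges.
    Get-IpamRange -AddressFamily IPv4

    # Ask IPAM for five free addresses in a specific range, e.g. to
    # hand to an automated VM provisioning workflow.
    Get-IpamRange -StartIPAddress 10.10.0.1 -EndIPAddress 10.10.0.254 |
        Find-IpamFreeAddress -NumAddress 5
}
```

The same cmdlets can be combined with the role-based access control noted above, so a delegated admin can script only the address spaces within their scope.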

For more details about the specific networking feature improvements in R2, see the Transforming Your Data Center – Networking blog post.

In summary, the 2012 R2 wave builds on the networking foundation introduced in Windows Server 2012 and System Center 2012 SP1 with improvements in performance and diagnosability, as well as by providing rich SDN scenarios.


Windows Server 2012 was a significant release for Microsoft in terms of delivering a best-in-class cloud computing platform . We’ve continued to invest in developing the industry’s best compute virtualization platform for cloud computing with Windows Server and System Center 2012 R2 – and this includes investments across private, public and hosted cloud environments.

After the release of Windows Server 2012, we received feedback from customers applauding the strides made in enabling higher levels of virtual machine mobility and new data center and deployment architectures, and in how Hyper-V Replica could be used to bridge a private cloud and a hosted cloud environment.

As we talked with customers about how they were using Windows Server 2012 and the opportunities they saw ahead of them, it became very clear that the R2 wave of products needed to:

Remove the barriers between private, public and hosted cloud environments and deliver a single compute platform that would work for all environments.

This vision defined the scenario areas we targeted: Increasing Uptime and Performance in Hosted Cloud Environments, Improving Operations in Private Cloud Environments, and Delivering Next Generation Experiences. As shown in Figure 2 (see below), there are many features that further enhance our end-to-end capabilities in these scenario areas.

Figure 2: Compute enhancements in the R2 wave

Scenario 1: Increasing Uptime and Performance in Hosted Cloud Environments

For hosted cloud environments, we have spent a lot of time working on increasing virtual machine uptime and providing advanced storage capabilities for virtual machines. With Windows Server 2012 R2, it is now possible to configure quality-of-service controls on virtual machine storage while the virtual machine is running. This means that virtual machines with high storage throughput requirements (for example, a heavy database workload) will not consume excessive amounts of storage throughput and slow down other virtual machines in the environment. We also added the ability to create clustered virtual machines using highly available virtual hard disks that are stored on a scale-out file server. This allows a Service Provider to offer enterprise-grade, clustered, virtualized workloads, without the need to invest in separate storage hardware, and without exposing their storage infrastructure to the end user. Finally, it is now possible for Service Providers to both expand and reduce the size of virtual hard disks on running virtual machines.
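A minimal sketch of these storage controls using the Hyper-V cmdlets; the VM name, path, and limit values are hypothetical:

```powershell
# Cap a noisy VM's virtual disk at 500 IOPS while the VM keeps running,
# so it cannot starve its neighbors of storage throughput.
Get-VMHardDiskDrive -VMName "SQL-Tenant-07" |
    Set-VMHardDiskDrive -MaximumIOPS 500

# Expand a VHDX attached to a running VM (online resize requires the
# disk to be attached to a SCSI controller).
Resize-VHD -Path "C:\VMs\SQL-Tenant-07\data.vhdx" -SizeBytes 500GB
```

Both operations take effect immediately, with no downtime for the tenant's workload.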

All these investments mean that Service Providers can deliver new capabilities and higher levels of service to their customers without the need to invest in new hardware .

Scenario 2: Improving Operations in Private Cloud Environments

The amazing level of virtual machine mobility delivered in Windows Server 2012 was one of the most popular additions in that release. Customers have particularly enjoyed being able to deploy updates to clustered environments with zero downtime and minimal administrative oversight, thanks to the Cluster-Aware Updating functionality provided in that release.

We continued to make significant investments in live migration in Windows Server 2012 R2. For example, live migration with compression provides 2 to 3 times faster live migration with your existing infrastructure, while live migration over SMB Direct provides even faster live migration (with lower CPU utilization) for RDMA-enabled network infrastructures. The end result of this is that maintenance operations and patch deployment in your private cloud environment can be performed in significantly less time. This means that an update that may have taken half a day to deploy to your private cloud environment can now be deployed over a lunch break.

To see this feature in action, watch Jeff Woolsey’s demo here.
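Choosing between these live migration transports is a one-line host setting in the Hyper-V cmdlets, sketched below:

```powershell
# Pick how live migrations move VM memory off this host:
#   TCPIP       - plain TCP stream (the Windows Server 2012 behavior)
#   Compression - compress memory pages before sending
#   SMB         - use SMB3, lighting up SMB Direct (RDMA) and Multichannel
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Confirm the setting.
(Get-VMHost).VirtualMachineMigrationPerformanceOption
```

Compression is a good default on ordinary Ethernet, while the SMB option pays off on RDMA-enabled networks where SMB Direct can offload the transfer.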

We have also added the ability to make exported copies of running virtual machines for easy troubleshooting and diagnosis of issues inside a virtualized environment.

Scenario 3: Delivering Next Generation Experiences

We are also continuing to push the envelope on what is possible with virtual machines by introducing second generation virtual machines – a new type of virtual machine that is UEFI-based and dramatically reduces the use of emulated “legacy” devices.
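Creating a generation 2 virtual machine is a switch on New-VM; the names, paths, and sizes below are illustrative:

```powershell
# Create a UEFI-based generation 2 VM; it boots from a SCSI-attached
# VHDX and a synthetic network adapter rather than emulated IDE/BIOS
# devices.
New-VM -Name "Gen2-Web01" -Generation 2 -MemoryStartupBytes 2GB `
    -NewVHDPath "C:\VMs\Gen2-Web01\os.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "TenantSwitch"

# Verify the generation of the new VM.
Get-VM -Name "Gen2-Web01" | Select-Object Name, Generation
```

Note that the generation is fixed at creation time; an existing generation 1 VM cannot be converted in place.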

For administrators who interact directly with virtual machines on a daily basis, we have worked closely with the Remote Desktop team to deliver an entirely new level of integration for virtual machines that provides full Remote Desktop capabilities (such as enhanced graphics, copy and paste, and sound) when interacting with a virtual machine – without requiring network connectivity.

In summary, the R2 wave substantively expands the level of reliability and high performance you can expect from Windows Server, and increases the confidence you can have building and operating your cloud infrastructure when using Windows Server and System Center.


A large number of enhancements around storage came with the 2012 wave of products, including Cluster Shared Volumes (CSV), the new virtual hard disk VHDX format, the Resilient File System (ReFS), Storage Management Initiative Specification (SMI-S) support, iSCSI Software Target, Server Message Block (SMB) 3.0, Hyper-V Virtual Fibre Channel, and Chkdsk improvements, to name a few. Most notable in the 2012 release was a set of strategic shifts introduced to reduce storage costs. By using software-defined storage with Storage Spaces as a low-cost storage alternative, and file-based storage for application workloads such as Hyper-V, customers could reduce their storage costs significantly.

In planning for R2, one of the clear messages we heard from customers who were responsible for building private cloud and IaaS solutions is that storage represents one of the largest areas of spend. This presents a serious and ongoing challenge for tight enterprise IT budgets, and, for the 2012 R2 release, we attacked this problem head-on. We approached storage with a goal to reduce both capital expenditure costs and operational expenditure costs through software-defined storage.

This learning helped crystallize our core customer vision for storage in the R2 release:

Reduce $/GB and $/IOPS/GB for private cloud and IaaS storage while delivering high performance and continuous availability.

This vision defined the scenario areas we targeted:

  • High Performance Storage Fabric for Compute
  • Scalable Resilient Storage with Cost-Effective Hardware
  • End-to-end Storage Management

As shown in Figure 3 (see below), there are many features that further enhance our end-to-end capabilities in these scenario areas.

Figure 3: Windows Server 2012 R2 storage enhancements

Scenario 1: High Performance Storage Fabric for Compute

Building on the strategic shifts introduced in Windows Server 2012, our vision for IaaS cloud storage focuses on disaggregated compute and storage. In this model, scale-out of the Hyper-V compute hosts is achieved with a low-cost storage fabric using SMB3 file-based storage, where virtual machines access VHD/VHDX files over low-cost, Ethernet-based networks, as illustrated in Figure 4:

Figure 4: Disaggregated compute and storage

This model enables the Hyper-V compute layer to scale out without incurring the costs of expensive Fibre Channel host bus adapters (HBAs) in each server. SMB3 serves as the high-performance protocol, enabling VHDs to be accessed from a Scale-out File Server. IaaS deployments can achieve high-performance scale-out on low-cost Ethernet or InfiniBand fabrics, as well as in a converged networking model. A few of the enhancements in Windows Server 2012 R2 are:

  • Optimized SMB Direct
    SMB Direct is a feature of SMB that can take advantage of RDMA-enabled network cards to achieve blazing fast performance by offloading to the NIC. SMB Direct was first introduced in Windows Server 2012, and it is further optimized in Windows Server 2012 R2 around smaller I/Os. Performance has been increased by up to 50 percent for 8k I/Os.
  • Optimized Rebalancing of Scale-out File Server
    When a VM is started and the VHDX file is opened in Windows Server 2012 R2, the associated SMB session is transitioned to the optimal node. This is done seamlessly and automatically to ensure that SMB connections deliver the best performance possible.
  • Live Migration over SMB
    Live migration enables you to move a VM run state from one host to another with no perceived downtime to clients, where the run state is copied over the network between the hosts. Now in Windows Server 2012 R2, you can leverage the power of SMB3 when performing a live migration. With an RDMA-enabled NIC, SMB Direct features will enable a live migration to be performed faster, and, more importantly, to have less CPU impact. You can also leverage SMB multi-channel to simultaneously stream the live migration across multiple network cards. With better performance and lower impact to the system, live migration with SMB is a win-win partnership.
  • SMB Bandwidth Management
    SMB is becoming a common infrastructure for a number of components used across a wide range of scenarios. Affected scenarios range from copying VHDX files to the Hyper-V hosts, to I/O redirection with Cluster Shared Volumes, to live migrations, to the file-based storage that serves up the VHDXs. However, not all SMB traffic is equal, and different usages have different levels of importance. For example, provisioning a new VHDX from a VM library to a Hyper-V host is not as important as providing the VMs that are already running on that host access to their VHDXs. SMB bandwidth management allows you to have different categories of SMB traffic and to define bandwidth limits per category.

The combination of these features delivers a high performance and scalable file-based storage infrastructure at low cost.
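The SMB bandwidth categories described above can be sketched with the SmbShare cmdlets; the limit values are illustrative, and the SMB Bandwidth Limit feature must be installed first:

```powershell
# One-time feature install on the host:
# Add-WindowsFeature FS-SMBBW

# Keep live migration traffic from starving the storage traffic of VMs
# that are already running.
Set-SmbBandwidthLimit -Category LiveMigration  -BytesPerSecond 750MB
Set-SmbBandwidthLimit -Category VirtualMachine -BytesPerSecond 2GB

# Review the limits in place.
Get-SmbBandwidthLimit
```

Traffic that falls outside the named categories is governed by the Default category, which can be limited the same way.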

Scenario 2: Scalable Resilient Storage with Cost-Effective Hardware

In Windows Server 2012 R2, we continue the journey with Spaces, delivering high-performance, resilient storage on inexpensive hardware through the power of software. Windows Server 2012 R2 delivers many of the high-end features you expect from expensive storage arrays, such as:

  • Storage Spaces Tiering
    This provides support for tiering of storage, where data access is heat-mapped so that the most commonly accessed data lives on high-performance SSDs and cold data lives on cheaper spinning HDD media. This delivers the best of both worlds with SSD performance and low-cost HDD capacity, and data is automatically moved between the tiers. Files can also be explicitly pinned to a particular tier. For example, a golden VHD image can be pinned in its entirety to the fast SSD tier. You can see this functionality in Jeff’s demo (as also noted above) here.
  • Storage Spaces Write-back Cache
    This is another new Spaces feature, where writes to the storage are satisfied from the fast SSD tier and then later written back to the HDD layer. This boosts application performance by enabling applications to quickly get their data committed to disk and move forward, while Spaces later determines the best place for that data.
  • Enhanced Data Deduplication
    Deduplication was introduced in Windows Server 2012 but has been taken to the next level in Windows Server 2012 R2. Deduplication can now optimize a running virtual machine, extending it to work in more scenarios, and it can also be used on Cluster Shared Volumes. Several changes speed up deduplication optimization, and reads and writes of optimized files are faster. In short, deduplication optimizes storage and enables you to do more with less storage.

The combination of these features will deliver the opportunity for significantly reduced CapEx costs for a scalable, resilient, cloud storage platform.
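A hedged sketch of the features above using the Storage cmdlets: building a tiered, write-back-cached Space, pinning a golden image to the SSD tier, and enabling VM-aware deduplication. The pool name, sizes, and paths are hypothetical:

```powershell
# Define an SSD tier and an HDD tier over an existing storage pool.
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool01" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool01" `
    -FriendlyName "HDDTier" -MediaType HDD

# Create a mirrored tiered space with a 5 GB write-back cache.
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredSpace" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 2TB `
    -ResiliencySettingName Mirror -WriteCacheSize 5GB

# Pin a golden VHDX entirely to the SSD tier, then move the data now
# rather than waiting for the scheduled tier optimization.
Set-FileStorageTier -FilePath "T:\Library\golden.vhdx" -DesiredStorageTier $ssd
Optimize-Volume -DriveLetter T -TierOptimize

# Enable deduplication tuned for running VMs (new in 2012 R2).
Enable-DedupVolume -Volume "T:" -UsageType HyperV
```

Outside of a pinned file, the heat-mapping runs on a schedule, so hot data migrates to the SSD tier automatically over time.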

Scenario 3: End-to-end Storage Management

Another focus area was to reduce the operational costs associated with deploying and managing a Windows Server 2012 R2 cloud storage infrastructure. SCVMM now provides end-to-end management of the entire IaaS storage infrastructure. Some of the most notable enhancements are:

  • Storage Spaces Management
    SCVMM now fully manages Storage Spaces, provisioning and managing both Spaces and Pools.
  • Storage Management API (SM-API)
    The foundation for a unified management experience is our Storage Management API (SM-API): a single management API that can manage SANs, Storage Spaces, and everything in between.
  • Broad Storage Management
    In our efforts to reduce storage costs, we recognize that the cheapest storage may be the storage you’ve already purchased. That is why SCVMM manages all types of storage (SANs, Spaces, or file-based storage) to reduce operational costs, no matter your storage preference.
  • Scale-out File Server Provisioning
    SCVMM can now completely deploy and configure a set of clustered scale-out file servers. It does bare-metal provisioning of the OS, configures the file server roles, clusters them, and makes them a scale-out file server. You can create and configure permissions on shares, and then configure the Hyper-V hosts and their VMs on those shares and disks. This is complete end-to-end management in a single management console .

This simplified and consolidated end-to-end management experience is part of our commitment to delivering a complete experience that combines the power of Windows Server 2012 R2 with the manageability of System Center 2012 R2 to reduce operational expenses (OpEx).

When you combine all the pieces of software-defined storage as discussed above, Windows Server 2012 R2 will further reduce $/GB and $/IOPS/GB for private cloud and IaaS storage while delivering traditional high-end value such as continuous availability.

* * *

This in-depth look at the IaaS capabilities in the 2012 R2 wave of products is pretty jaw-dropping, and to get even deeper into these amazing innovations, I recommend taking a look at the content featured in the “Next Steps” section below.

The work being done with R2’s infrastructure innovations sets a new benchmark for the scalable, flexible, powerful cloud computing era.

We’ll get into this even more in the second half of our IaaS overview tomorrow.

- Brad

Next Steps

To learn more about the topics covered in this post, check out the following articles covering Networking and Storage.  Also, don’t forget to start your evaluation of the 2012 R2 previews today!


