virtualization
Ask The Perf Guy: What's The Story With Hyperthreading and Virtualization?
There's been a fair amount of confusion amongst customers and partners lately about the right way to think about hyperthreading when virtualizing Exchange. Hopefully I can clear up that confusion very quickly.

We've had relatively strong guidance in recent versions of Exchange that hyperthreading should be disabled. This guidance is specific to physical server deployments, not virtualized deployments. The reasoning for strongly recommending that hyperthreading be disabled on physical deployments can really be summarized in two points:

- The increase in logical processor count at the OS level due to enabling hyperthreading results in increased memory consumption (due to various algorithms that allocate memory heaps based on core count), and in some cases also results in increased CPU consumption or other scalability issues due to high thread counts and lock contention.
- The increased CPU throughput associated with hyperthreading is non-deterministic and difficult to measure, leading to capacity planning challenges.

The first point is really the largest concern, and in a virtual deployment, it is a non-issue with regard to configuration of hyperthreading. The guest VMs do not see the logical processors presented to the host, so they see no difference in processor count when hyperthreading is turned on or off. Where this concern can become an issue for guest VMs is in the number of virtual CPUs presented to the VM. Don't allocate more virtual CPUs to your Exchange server VMs than are necessary based on sizing calculations. If you allocate extra virtual CPUs, you can run into the same class of issues associated with hyperthreading on physical deployments.

In summary: If you have a physical deployment, turn off hyperthreading. If you have a virtual deployment, you can enable hyperthreading (best to follow the recommendation of your hypervisor vendor), and:

- Don't allocate extra virtual CPUs to Exchange server guest VMs.
- Don't use the extra logical CPUs exposed to the host for sizing/capacity calculations (see the hyperthreading guidance at https://aka.ms/e2013sizing for further details on this).

Jeff Mealiffe
Principal PM Manager
Office 365 Customer Experience
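As a small, hedged addendum to the sizing point above: when counting cores for capacity planning on a hyperthreaded host, use physical cores, not the logical processors the operating system reports. A quick sketch of that check with WMI (nothing Exchange-specific is assumed here):

```powershell
# Compare physical cores to logical processors on a host.
# Sizing/capacity math should use the physical core count only.
$cpus = Get-WmiObject -Class Win32_Processor
$physicalCores     = ($cpus | Measure-Object -Property NumberOfCores -Sum).Sum
$logicalProcessors = ($cpus | Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum

if ($logicalProcessors -gt $physicalCores) {
    Write-Output "Hyperthreading appears to be enabled: $physicalCores physical cores, $logicalProcessors logical processors."
    Write-Output "Use $physicalCores cores for Exchange sizing calculations."
} else {
    Write-Output "Hyperthreading appears to be disabled or unavailable: $physicalCores cores."
}
```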
Storage Validation in A Virtual World

Deploying Exchange can be a challenge - particularly when you are all ready to validate your servers and storage with Jetstress (something we suggest you always do prior to going into production) and you discover that we don't support running Jetstress in a virtual machine on that fancy new virtual platform you just deployed. Ouch. Now what?

First, some background. You might be wondering why we don't support running Jetstress in a virtual machine. The reason is actually quite straightforward. Over the years, as we have worked with customers and partners who were either deploying new hardware for Exchange or validating Exchange storage solutions in the Exchange Solution Reviewed Program (ESRP), we saw a number of examples of Jetstress test results where the reported IO latency numbers were wildly inaccurate. Given the lack of trust in the reported performance metrics, we had to ensure that Jetstress was not run in this configuration. This resulted in the guidance that customers deploying on virtual infrastructure should validate storage performance by running Jetstress in the root rather than in a guest virtual machine. While this was a feasible workaround with Hyper-V, it's not a realistic solution for other hypervisors.

Just as the Exchange product has matured, the hypervisor products that some of our customers use to manage their Exchange infrastructure have matured as well, and we decided that the time had come to do some new testing and see if those strange performance results of the past would come back to haunt us again. After weeks of automated testing with multiple hypervisors and well over 100 individual Jetstress tests completed in various configurations, we've reached a conclusion.

Effective immediately, we support running the Microsoft Exchange Server Jetstress 2010 tool in virtual guest instances which are deployed on one of the following hypervisors:

- Microsoft Windows Server 2008 R2 (or newer)
- Microsoft Hyper-V Server 2008 R2 (or newer)
- VMware ESX 4.1 (or newer)

Additionally, we are removing the restriction in the ESRP v3.0 program on using virtual machines, so from this point on our storage partners will be able to submit ESRP solutions for Exchange Server 2010 where the validation testing was performed on a virtual machine.

As a reminder, the best place to learn about supportability for Exchange Server 2010 virtualization is on TechNet in the Hardware Virtualization section of the System Requirements topic. Additionally, we have published a Best Practices for Virtualizing Exchange Server 2010 with Windows Server 2008 R2 Hyper-V whitepaper that contains many helpful deployment recommendations. The best resource for understanding how to properly use Jetstress for storage and solution validation is the Jetstress Field Guide, which has been recently updated to include this change to our support for guest virtual machines.

I hope this is good news for some of you and that this will result in simpler, easier, and more thorough pre-production validation of your Exchange deployments.

Jeff Mealiffe
Senior Program Manager
Exchange Customer Experience
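One practical question before planning a Jetstress run is simply whether the box you're on is a physical server or a guest. A rough, hedged way to check from PowerShell (manufacturer and model strings vary by hypervisor, so treat the matching below as illustrative only):

```powershell
# Report the hardware manufacturer/model so you know whether this
# server is a virtual guest before planning Jetstress validation.
$cs = Get-WmiObject -Class Win32_ComputerSystem
Write-Output ("Manufacturer: {0}" -f $cs.Manufacturer)
Write-Output ("Model:        {0}" -f $cs.Model)

# Common (but not exhaustive) virtual machine signatures.
if ($cs.Model -match 'Virtual Machine|VMware|Virtual Platform') {
    Write-Output "This looks like a virtual guest."
} else {
    Write-Output "This looks like a physical server (or an unrecognized hypervisor)."
}
```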
Ask the Perf Guy: How big is too BIG?

We've seen an increasing amount of interest lately in deployment of Exchange 2013 on "large" servers. By large, I mean servers that contain significantly more CPU or memory resources than what the product was designed to utilize. I thought it might be time for a reminder of our scalability recommendations and some of the details behind those recommendations. Note that this guidance is specific to Exchange 2013 - there are many architectural differences in prior releases of the product that will impact scalability guidance.

In a nutshell, we recommend not exceeding the following sizing characteristics for Exchange 2013 servers, whether single-role or multi-role (and you are running multi-role, right?).

- Recommended maximum processor core count: 24
- Recommended maximum memory: 96 GB

Note: Version 7.5 and later of the Exchange Server 2013 Role Requirements Calculator aligns with this guidance and will flag server configurations that exceed these guidelines.

As we have mentioned in various places like TechNet and our Preferred Architecture, commodity-class 2U servers with 2 processor sockets are our recommended server type for deployment of Exchange 2013. The reason for this is quite simple: we utilize massive quantities of these servers for deployment in Exchange Online, and as a result this is the platform that we architect for and have the best visibility into when evaluating performance and scalability.

You might now be asking the fairly obvious follow up question: what happens if I ignore this recommendation and scale up? It's hard, if not impossible, to provide a great answer to this question, because there are so many things that could go wrong. We have certainly seen a number of issues raised through support related to scale-up deployments of Exchange in recent months. An example of this class of issue appears in the "Oversizing" section of Marc Nivens' recent blog article on troubleshooting high CPU issues in Exchange 2013. Many of the issues we see are in some way related to concurrency and reduced throughput due to excessive contention amongst threads. This essentially means that the server is trying to do so much work (believing that it has the capability to do so given the massive amount of hardware available to it) that it is running into architectural bottlenecks and actually spending a great deal of time dealing with locks and thread scheduling instead of handling transactions associated with Exchange workloads. Because we architect and tune the product for mid-range server hardware as described above, no tuning has been done to get the most out of this larger hardware and avoid this class of issues.

We have also seen some cases in which the patterns of requests being serviced by Exchange, the number of CPU cores, and the amount of physical memory deployed on the server resulted in far more time being spent in the .NET Garbage Collection process than we would expect, given our production observations and tuning of memory allocation patterns within Exchange code. In some of these cases, Microsoft support engineers may determine that the best short-term workaround is to switch one or more Exchange services from the Workstation Garbage Collection mode to Server Garbage Collection mode. This allows the .NET Garbage Collector to manage memory more efficiently, but with some significant tradeoffs, like a dramatic increase in physical memory consumption.
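If Microsoft Support ever asks whether a particular service has been switched to Server GC, that setting lives in the service's .exe.config file under the runtime element. A minimal, read-only sketch follows; the path is a placeholder, and the point is to verify you are still on defaults, not to change anything without Support's guidance:

```powershell
# Read-only check of the .NET GC mode configured for a service.
# The path below is a placeholder - substitute the .exe.config of the
# service in question. Do not edit these settings on your own.
$configPath = 'C:\Path\To\SomeService.exe.config'

[xml]$config = Get-Content -Path $configPath
$gcServer = $config.configuration.runtime.gcServer

if ($null -ne $gcServer -and $gcServer.enabled -eq 'true') {
    Write-Output "Server GC is explicitly enabled in $configPath."
} else {
    Write-Output "No <gcServer> override found in this config file - the default GC mode is in effect."
}
```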
In general, each individual service that makes up the Exchange server product has been tuned as carefully as possible to be a good consumer of memory resources, and wherever possible, we utilize the Workstation Garbage Collector to avoid a dramatic and typically unnecessary increase in memory consumption. While it's possible that adjusting a service to use Server GC rather than Workstation GC might temporarily mitigate an issue, it's not a long-term fix that the product group recommends. When it comes to .NET Garbage Collector settings, our advice is to ensure that you are running with default settings, and the only time these settings should be adjusted is with the advice and consent of Microsoft Support. As we make changes to Exchange through our normal servicing rhythm, we may change these defaults to ensure that Exchange continues to perform as efficiently as possible, and as a result, manual overrides could result in a less optimal configuration.

As server and processor technology changes, you can expect that we will make adjustments to our production deployments in Exchange Online to ensure that we are getting the highest performance possible at the lowest cost for the users of our service. As a result, we anticipate updating our scalability guidance based on our experience running Exchange on these updated hardware configurations. We don't expect these updates to be very frequent, but change to hardware configurations is absolutely a given when running a rapidly growing service.

It's a fact that many of you have various constraints on the hardware that you can deploy in your datacenters, and often those constraints are driven by a desire to reduce server count, increase server density, etc. Within those constraints, it can be very challenging to design an Exchange implementation that follows our scalability guidance and the Preferred Architecture. Keep in mind that in this case, virtualization may be a feasible option rather than a risky attempt to circumvent scalability guidance and operate extremely large Exchange servers. Virtualization of Exchange is a well understood, fairly common solution to this problem, and while it does add complexity (and therefore some additional cost and risk) to your deployment, it can also allow you to take advantage of large hardware while ensuring that Exchange gets the resources it needs to operate as effectively as possible. If you do decide to virtualize Exchange, remember to follow our sizing guidance within the Exchange virtual machines. Scale out rather than scale up (the virtual core count and memory size should not exceed the guidelines mentioned above) and try to align as closely as possible to the Preferred Architecture.

When evaluating these scalability limits, it's really most important to remember that Exchange high availability comes from staying as close to the product group's guidance and Preferred Architecture as possible. We want you to have the very best possible experience with Exchange, and we know that the best way to achieve that is to deploy like we do.

Jeff Mealiffe
Principal PM Manager
Office 365 Customer Experience
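A hedged illustration of the numbers above: whether on a physical server or inside an Exchange guest VM, a quick check of core count and RAM against the recommended 24-core / 96 GB ceiling might look like this (the thresholds are simply the values from this article):

```powershell
# Compare this server's resources to the Exchange 2013 scalability
# recommendations discussed above (24 cores, 96 GB RAM).
$maxCores = 24
$maxMemGB = 96

$cores = (Get-WmiObject -Class Win32_Processor |
          Measure-Object -Property NumberOfCores -Sum).Sum
$memGB = [math]::Round((Get-WmiObject -Class Win32_ComputerSystem).TotalPhysicalMemory / 1GB, 1)

Write-Output "Physical cores: $cores (recommended maximum: $maxCores)"
Write-Output "Memory:         $memGB GB (recommended maximum: $maxMemGB GB)"

if ($cores -gt $maxCores -or $memGB -gt $maxMemGB) {
    Write-Output "This configuration exceeds the recommended maximums - consider scaling out or virtualizing."
}
```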
Demystifying Exchange 2010 SP1 Virtualization

It's been a few months since we announced some major changes to our virtualization support statements for Exchange 2010 (see Announcing Enhanced Hardware Virtualization Support for Exchange 2010). Over that time, I've received quite a few excellent questions about particular deployment scenarios and how the changes to our support statements might affect those deployments. Given the volume of questions, it seemed like an excellent time to post some additional information and clarification.

First of all, a bit of background. When we made the changes to our support statements, the primary thing we wanted to ensure was that our customers wouldn't get into a state where Exchange service availability might be reduced as a result of using a virtualized deployment. To put it another way, we wanted to make sure that the high level of availability that can be achieved with a physical deployment of the Exchange 2010 product would not in any way be reduced by deploying on a virtualization platform. Of course, we also wanted to ensure that the product remained functional and that we verified that the additional functionality provided by the virtualization stack would not provide an opportunity for loss of any Exchange data during normal operation.

Given these points, here's a quick overview of what we changed and what it really means. With Exchange 2010 SP1 (or later) deployed:

- All Exchange 2010 server roles, including Unified Messaging, are supported in a virtual machine.
- Unified Messaging virtual machines have the following special requirements:
  - Four virtual processors are required for the virtual machine. Memory should be sized using standard best practices guidance.
  - Four physical processor cores must be available for use at all times by each Unified Messaging role virtual machine. This means that no processor oversubscription can be in use, since oversubscription affects the ability of the Unified Messaging role virtual machine to utilize physical processor resources.
- Exchange server virtual machines (including Exchange Mailbox virtual machines that are part of a DAG) may be combined with host-based failover clustering and migration technology, as long as the virtual machines are configured such that they will not save and restore state on disk when moved or taken offline. All failover activity must result in a cold boot when the virtual machine is activated on the target node. All planned migration must either result in shutdown and cold boot, or an online migration that makes use of a technology like Hyper-V Live Migration. Hypervisor migration of virtual machines is supported by the hypervisor vendor; therefore, you must ensure that your hypervisor vendor has tested and supports migration of Exchange virtual machines. Microsoft supports Hyper-V Live Migration of these virtual machines.

Let's go over some definitions to make sure we are all thinking about the terms in those support statements in the same way.

Cold boot: This refers to the action of bringing up a system from a power-off state into a clean start of the operating system. No operating system state has been persisted in this case.

Saved state: When a virtual machine is powered off, hypervisors typically have the ability to save the state of the virtual machine at that point in time so that when the machine is powered back on it will return to that state rather than going through a "cold boot" startup. "Saved state" would be the result of a "Save" operation in Hyper-V.
Planned migration: When a system administrator initiates the move of a virtual machine from one hypervisor host to another, we call this a planned migration. This could be a single migration, or a system admin could configure some automation that is responsible for moving the virtual machine on a timed basis or as a result of some other event that occurs in the system other than hardware or software failure. The key point here is that the Exchange virtual machine is operating normally and needs to be relocated for some reason - this can be done via a technology like Live Migration or vMotion. If the Exchange virtual machine or the hypervisor host where the VM is located experiences some sort of failure condition, then the result of that would not be "planned".

Virtualizing Unified Messaging Servers

One of the changes made was the addition of support for the Unified Messaging role on Hyper-V and other supported hypervisors. As I mentioned at the beginning of this article, we did want to ensure that any changes we made to our support statement resulted in the product remaining fully functional and providing the best possible service to our users. As such, we require Exchange Server 2010 SP1 to be deployed for UM support. The reason for this is quite straightforward. The UM role is dependent on a media component provided by the Microsoft Lync team. Our partners in Lync did some work prior to the release of Exchange 2010 SP1 to enable high quality real-time audio processing in a virtual deployment, and in the SP1 release of Exchange 2010 we integrated those changes into the UM role. Once that was accomplished, we did some additional testing to ensure that user experience would be as optimal as possible and modified our support statement. As you'll notice, we do have specific requirements around CPU configuration for virtual machines (and hypervisor host machines) where UM is being run. This is additional insurance against poor user experience (which would show up as poor voice quality).

Host-based Failover Clustering & Migration

Much of the confusion around the changed support statement stems from the details on combining host-based failover clustering and migration technology with Exchange 2010 DAGs. The guidance here is really quite simple.

First, let's talk about whether we support third-party migration technology (like VMware's vMotion). Microsoft can't make "support" statements for the integration of 3rd-party hypervisor products using these technologies with Exchange 2010, as these technologies are not part of the Server Virtualization Validation Program (SVVP), which covers the other aspects of our support for 3rd-party hypervisors. We make a generic statement here about support, but in addition you need to ensure that your hypervisor vendor supports the combination of their migration/clustering technology with Exchange 2010. To put it as simply as possible: if your hypervisor vendor supports their migration technology with Exchange 2010, then we support Exchange 2010 with their migration technology.

Second, let's talk about how we define host-based failover clustering. This refers to any sort of technology that provides the automatic ability to react to host-level failures and start affected VMs on alternate servers. Use of this technology is absolutely supported within the provided support statement, given that in a failure scenario the VM will be coming up from a cold boot on the alternate host.
We want to ensure that the VM will never come up from saved state that is persisted on disk, as it will be "stale" relative to the rest of the DAG members.

Third, when it comes to migration technology in the support statement, we are talking about any sort of technology that allows a planned move of a VM from one host machine to another. Additionally, this could be an automated move that occurs as part of resource load balancing (but is not related to a failure in the system). Migrations are absolutely supported as long as the VMs never come up from saved state that is persisted on disk. This means that technology that moves a VM by transporting the state and VM memory over the network with no perceived downtime is supported for use with Exchange 2010. Note that a 3rd-party hypervisor vendor must provide support for the migration technology, while Microsoft will provide support for Exchange when used in this configuration. In the case of Microsoft Hyper-V, this would mean that Live Migration is supported, but Quick Migration is not.

With Hyper-V, it's important to be aware that the default behavior when selecting the "Move" operation on a VM is actually to perform a Quick Migration. To stay in a supported state with Exchange 2010 SP1 DAG members, it's critical that you adjust this behavior as shown in the VM settings below (the settings displayed here represent how you should deploy with Hyper-V):

Figure 1: The correct Hyper-V virtual machine behavior for Database Availability Group members

Let's review. In Hyper-V, Live Migration is supported for DAG members, but Quick Migration is not. Visually, this means that this is supported:

Figure 2: Live Migration of Database Availability Group member in Hyper-V is supported

And this is not supported:

Figure 3: Quick Migration of Database Availability Group members is not supported

Hopefully this helps to clarify our support statement and guidance for the SP1 changes. We look forward to any feedback you might have!

Jeff Mealiffe
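On newer Hyper-V hosts that have the Hyper-V PowerShell module (Windows Server 2012 and later; on 2008 R2 you would use Hyper-V Manager as shown in the figures above), a hedged sketch of checking a DAG member VM for the two settings this article cares about - no saved state when stopped, and no more vCPUs than sizing calls for - might look like this. The VM name is a placeholder:

```powershell
# Review the stop action and vCPU count of an Exchange DAG member VM.
# Requires the Hyper-V PowerShell module on the host.
$vmName = 'EXCH-MBX-01'   # placeholder - substitute your VM name

Get-VM -Name $vmName |
    Select-Object Name, ProcessorCount, AutomaticStopAction

# A DAG member should never resume from saved state, so make sure the
# stop action shuts the guest down (or turns it off) instead of saving.
Set-VM -Name $vmName -AutomaticStopAction ShutDown
```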
Announcing Enhanced Hardware Virtualization Support for Exchange 2010

The Microsoft Exchange team is enhancing its support position by including additional supported scenarios for Exchange Server 2010 running under hardware virtualization software. As of today, the following support scenarios are being updated for Exchange 2010 SP1 and later:

- The Unified Messaging server role is supported in a virtualized environment.
- Combining Exchange 2010 high availability solutions (database availability groups, or DAGs) with hypervisor-based clustering, high availability, or migration solutions that will move or automatically fail over mailbox servers that are members of a DAG between clustered root servers is now supported.

Due to improvements we made in Exchange Server 2010 SP1, along with more comprehensive testing of Exchange 2010 in a virtualized environment, we are happy to provide this additional deployment flexibility to our customers. The updated support guidance applies to any hardware virtualization vendor participating in the Windows Server Virtualization Validation Program (SVVP).

In addition, we are also releasing the Best Practices for Virtualizing Exchange Server 2010 with Windows Server 2008 R2 Hyper-V whitepaper. This whitepaper is designed to provide technical guidance on Exchange server roles, capacity planning, sizing and performance, as well as high availability best practices.

Many customers, such as Convergent Computing, Swisscom, LUKoil Baltija R, Forest Preserve District of DuPage County, and LAWIN, are already saving money on server consolidation and licensing costs by running Exchange Server 2010 in a virtualized environment.

Complete system requirements for Exchange Server 2010 running under hardware virtualization software can be found in Exchange 2010 System Requirements. Also, the support policy for Microsoft software running in non-Microsoft hardware virtualization software can be found here.

Kevin Allison
General Manager
Exchange Customer Experience
Upcoming Webcast: Best Practices for Virtualizing Microsoft Exchange

We've got a great webcast coming up next week to discuss recommendations for virtualizing Exchange server and the benefits of choosing Hyper-V + System Center as your virtualization solution.

TechNet Webcast: Microsoft Virtualization Best Practices for Exchange Server (Level 300)
Wednesday, Nov. 4 at 10am Pacific time

Virtualizing business critical applications will deliver significant customer benefits including cost savings, enhanced business continuity, and an agile and efficient management solution. This session will focus on virtualizing Exchange using Microsoft solutions, and guidance for virtualizing Exchange for various production scenarios. We will go into technical details with best practices.
Microsoft Virtualization: Best Choice for Exchange Server

Virtualization continues to be a hot topic for many of you, with questions raised around whether Exchange should be virtualized. We started talking about this back in January (http://msexchangeteam.com/archive/2009/01/22/450463.aspx) and continue to recommend Microsoft virtualization technologies for these deployment scenarios. For example, the Exchange team recommends Microsoft virtualization (Hyper-V + System Center) for customers who want to virtualize their underutilized CAS, Hub and Mailbox roles. Additionally, the Edge Transport role along with other security gateways on the edge server can be considered for virtualization to maximize hardware utilization.

Studies have shown that in mixed physical/virtual Exchange environments, virtualization can deliver significant benefits including reduced server hardware costs, power and space savings, improved server utilization and rapid server provisioning. Additionally, by choosing Microsoft virtualization (Hyper-V + System Center) customers benefit from a lower cost solution (both up front and ongoing) that is already part of Windows Server, and an integrated end-to-end management solution for both physical and virtual environments. Whether you install on physical hardware or virtual machines, Exchange Server and Windows Server + System Center provide the best solution for you.

For more information on virtualizing Exchange and other Microsoft server applications, please visit http://www.microsoft.com/virtualization/solutions/business-critical-applications. Also check out the latest Microsoft virtualization blog here.

- The Exchange Team
White Paper: Comparing the Power Utilization of Native and Virtual Exchange Environments

Is reducing or controlling the high cost of the power to run and cool computer hardware a top priority for your organization? Are you considering server virtualization solutions to reduce your server footprint and the associated power and cooling costs?

Because the virtualization of Microsoft Exchange servers rarely results in a reduction of physical processors, there is some question as to whether there is significant hardware, power, cooling, or space savings from virtualizing correctly-sized Exchange Server 2007 server roles. The answer to this question can be found in a new white paper we just released about a study that was done internally, entitled "Comparing the Power Utilization of Native and Virtual Exchange Environments."

This study compared the power utilization of native and virtual Exchange 2007 environments in a scenario in which the number of physical servers was reduced from eight to two, but the total number of logical processors (32) and the amount of memory remained the same. There was no processor core consolidation, and storage power utilization was not included. In this scenario, there was a 50 percent reduction in server power utilization and a projected savings of 8,582 kWh/year.

For more details about the study and its conclusions, check out the white paper, "Comparing the Power Utilization of Native and Virtual Exchange Environments."
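As a rough, hedged illustration of what a number like 8,582 kWh/year implies (the per-server wattage below is an assumption for illustration, not a figure from the study): a 50 percent reduction that saves 8,582 kWh/year corresponds to an original draw of roughly 17,000 kWh/year, which is about what eight servers averaging around 245 W each would consume. The conversion itself is simple:

```powershell
# Convert an average electrical draw (watts) into kWh per year, and
# back-check the study's figures. The per-server wattage is assumed.
$hoursPerYear = 24 * 365            # 8,760 hours

function Get-KwhPerYear([double]$watts) {
    return $watts * $hoursPerYear / 1000
}

$serverCount    = 8
$wattsPerServer = 245               # assumption for illustration only

$beforeKwh = Get-KwhPerYear ($serverCount * $wattsPerServer)   # ~17,170 kWh/year
$savedKwh  = $beforeKwh * 0.5                                  # 50% reduction, ~8,585 kWh/year

Write-Output ("Estimated draw before consolidation: {0:N0} kWh/year" -f $beforeKwh)
Write-Output ("Estimated savings at 50%:            {0:N0} kWh/year" -f $savedKwh)
```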
Should You Virtualize Your Exchange 2007 SP1 Environment?

Introduction

With the release of Microsoft Windows Server 2008 with Hyper-V and Microsoft Hyper-V Server 2008, a virtualized Exchange 2007 SP1 server is no longer restricted to the realm of the lab; it can be deployed in a production environment and receive full support from Microsoft. This past August, we published our support policies and recommendations for virtualizing Exchange, but many people have asked us to go beyond that guidance and weigh in on the more philosophical question: is virtualization a good idea when it comes to Exchange?

Due to the performance and business requirements of Exchange, most deployments would benefit from deployment on physical servers. However, there are some scenarios in which a virtualized Exchange 2007 infrastructure may allow you to realize real benefits in terms of space, power, and deployment flexibility. Presented here are sample scenarios in which virtualization may make sense, as well as checklists to help you evaluate whether the current load on your infrastructure makes it a good candidate for virtualization.

Scenarios

Small Office with High Availability

Some organizations are small but they still require enterprise-class availability. For example, consider Contoso Ltd., a fictitious company that regards email as a critical service and has several small branch office sites consisting of 250 users. Contoso wants to keep their e-mail environment on-premises for legal reasons and they want to have a fully redundant email system. Contoso's users have average user profiles and the mailboxes are sized at 2 GB.

Before Hyper-V was introduced, to get full redundancy for all Exchange server roles, Contoso would have to deploy 7 physical servers: 2 servers for AD and DNS, 1 server for file and print, 2 servers running CAS and Hub, and 2 servers running the Mailbox role in a CCR environment. The servers are assumed to have 2 quad core processors and the amount of RAM is based on the installed roles. Each CCR node would have 4 GB of RAM, and each of the other servers would have the minimum 2 GB of RAM to support the users and traffic pattern. With user profiles this size, the average load would be 35-45% of CPU on a server that has eight cores.

Flip the page to today and Contoso can provide the same level of redundancy and availability with only 3 servers by using virtualization. Each physical server would run each of the roles as a Hyper-V guest. In this scenario, 3 physical servers with 2 quad core processors and 16 GB of RAM would have sufficient capacity to serve Contoso's users. Along with RAM and processors, the servers need to have multiple NICs and redundant paths for storage. Since Contoso still has the same number of Exchange servers to manage, they have not benefitted much from the O&M perspective, but think about the space, power, and HVAC impact. Each of the virtual servers would be configured with 2 virtual CPUs.

The following diagram illustrates the scenario. Note that the Hub Transport Exchange 2007 Server serving as the File Share Witness (FSW) is on a separate Hyper-V host from the CCR Mailbox Nodes to eliminate any single point of failure in the clustering solution and to provide true clustering capability.

Figure 1 - Possible Small Branch Office Design for Exchange 2007 SP1 on Hyper-V

In this scenario, you could save a possible 25,754 kWh and $22,516 per year. This information was based on having 7 physical servers and then changing to 3 physical and 7 virtual servers. The Microsoft HyperGreen Tool was used to gather these numbers.
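A hedged back-of-the-envelope for a consolidation like this one: sum the vCPUs and RAM of the guests you plan to place on a host and compare against the host's 8 cores and 16 GB. The guest list below simply mirrors the scenario described above and is illustrative, not a sizing tool:

```powershell
# Rough capacity check: do the planned guests fit on one 8-core / 16 GB host?
$hostCores = 8
$hostMemGB = 16

# Illustrative guest placement for one of the three hosts in the scenario.
$guests = @(
    [pscustomobject]@{ Name = 'DC01';      vCPU = 2; MemGB = 2 },
    [pscustomobject]@{ Name = 'CASHUB01';  vCPU = 2; MemGB = 2 },
    [pscustomobject]@{ Name = 'MBX-CCR01'; vCPU = 2; MemGB = 4 }
)

$totalVcpu = ($guests | Measure-Object -Property vCPU  -Sum).Sum
$totalMem  = ($guests | Measure-Object -Property MemGB -Sum).Sum

Write-Output "Planned guests: $totalVcpu vCPUs of $hostCores cores, $totalMem GB of $hostMemGB GB RAM"
if ($totalVcpu -gt $hostCores -or $totalMem -gt ($hostMemGB - 2)) {   # leave ~2 GB for the parent partition
    Write-Output "This placement oversubscribes the host - rebalance the guests."
}
```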
The factors for sizing the virtualized server are not any different from sizing physical servers. No matter if you are using physical or virtualized servers, the CCR nodes will need 4 GB of RAM and they would need to support 48 IOPS for the databases and 19 IOPS for the transaction logs. As you can see for this scenario, the IOPS requirements are very low and should be able to be maintained in a virtualized environment. For the small number of users that would be hosted on the virtualized Exchange servers, you would be fine to use VHDs for the drives. It is recommended that you move to external storage if your user count is much higher than what is depicted here.

Remote or Branch Office with High Availability

In the early days of Exchange server, organizations needed to place local Exchange servers in remote and branch offices to provide sufficient performance. With improvements such as Cached Exchange Mode and Outlook Anywhere (RPC over HTTPS), consolidating those servers to a central datacenter became the recommended approach. However, in some situations, poor network connectivity to remote offices still requires some organizations to have a local Exchange server. Often the user populations at these locations are so small that it doesn't make sense to dedicate a whole physical server to the Exchange environment. The technical considerations in this scenario are the same as described in the "Small Office with High Availability" scenario above. For an example of how a company used Hyper-V in this scenario, refer to the case study on Caspian Pipeline Consortium.

Disaster Recovery

In order to provide redundancy for a remote site, some organizations may require a Warm Site that contains a duplicate of the primary production Exchange 2007 infrastructure. The intent of this standby site is to provide as near to the same level of functionality as possible in the event of the loss of the primary site. However, keeping a duplicate infrastructure for standby purposes, although useful for high SLA requirements, can be prohibitively expensive for some organizations. In that event, it is possible to provide a virtual duplicate of the entire primary site using Hyper-V.

A typical Warm Site configuration utilizing physical Exchange 2007 servers would include one or more servers configured together as a standby cluster and one or more other servers configured as a CAS/Hub server. To achieve redundancy of just the messaging services within the Warm Site, a total of four physical servers would be needed. By contrast, a Hyper-V-based solution with only three physical servers can provide an organization with a Warm Site that includes two Mailbox servers in a CCR environment, as well as redundant CAS and Hub servers. Thus, by virtualizing Exchange in this scenario, you can provide a higher level of services to your users while also saving on hardware, power and cooling costs, as well as space requirements, when compared to a similarly configured physical solution. The following diagram illustrates one such configuration.

Figure 2 - Possible Warm Site Disaster Recovery Configuration using Hyper-V

The Primary Site uses physical hardware due to the demanding size and profile of the user population. In this scenario, the Warm Site is designed to support the entire population of users from the Primary Site.
Careful consideration must therefore be given to the configuration of a virtual environment that will support the user population, even if it will be for a temporary period and at a certain level of reduced service performance. The diagram illustrates that the Warm Site would rely on a standby cluster, with one of the nodes configured as an SCR target, as the primary recipient of regular log copies from the primary site. A pair of virtual Domain Controllers would also be in the Hyper-V environment for AD integration. The SCR target is a two-node failover cluster. In the event of a site failure, the standby cluster would be activated by using Restore-StorageGroupCopy and the CMS would be recovered by using the /recoverCMS switch (a sketch of these steps follows at the end of this section). The same procedures for recovering from a disaster using a standby cluster still apply despite the fact that the standby cluster is virtualized. Once the standby cluster is online and hosting the CMS from the failed site, client access to messaging services and data will be restored once DNS and Active Directory replication has occurred.

The virtual Warm Site must be able to provide an adequate level of service to users in the event of the loss of the Primary Site, with the understanding that there will probably be a reduced level of service due to the WAN/Internet link(s) to the Warm Site. However, since the site is designed to provide emergency functionality, and only for a brief period, this reduced level of service should be a reasonable expectation. It would be understood, however, that while the Primary Site is down, there is no site resilience for the Warm Site.

In this scenario, you could save a possible 33,005 kWh and $28,225 per year. This information was based on having 8 physical servers and then changing to 3 physical and 8 virtual servers. The Microsoft HyperGreen Tool was used to gather the numbers above.
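A hedged sketch of the activation steps mentioned above, run on the SCR target cluster; the server, storage group, and CMS names are placeholders, and a real recovery involves more validation than is shown here:

```powershell
# Stop replication and make the SCR target copy mountable.
# Names are placeholders for the failed source server and the target node.
# (-Force may be required if the failed source is unreachable.)
Restore-StorageGroupCopy -Identity "MBXCLUS01\First Storage Group" -StandbyMachine WARMSITE-MBX01

# Recover the clustered mailbox server (CMS) identity onto the standby cluster.
# This is run from the Exchange 2007 setup media on the standby node.
Setup.com /recoverCMS /CMSName:MBXCLUS01 /CMSIPAddress:192.168.10.20
```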
Mobile LAN

There are situations in which a company, agency, or governmental department may need a complete network infrastructure that can be deployed to specific locations at a moment's notice. This infrastructure is then connected to the organization's network via satellite or similar remote WAN technology. For example, a non-governmental organization (NGO) may need to react to a disaster and set up local servers to serve an affected community. This subset of servers would need to be completely self-contained and able to provide all necessary server services to the personnel located in the target location.

In such situations, the mobile Local Area Network must be easy to transport. It must have a small and highly efficient form factor, providing all of the local users' necessary services while taking up the smallest amount of space possible. Given the probable remoteness of the locations in which the Mobile LAN would be deployed, it must also provide fault tolerant capabilities to ensure that there is no single point of failure in a location where spare parts may be hard to come by.

In this scenario, a Hyper-V server can be used to host Exchange as well as file server services and domain infrastructure services in a compact form factor. Virtualization of Exchange 2007 with Service Pack 1 requires that the Hyper-V host server not run any other IO-intensive applications; you can have Exchange 2007 SP1 and other applications running as VMs on the same host. The following diagram illustrates a possible configuration for a Hyper-V server hosting the Exchange 2007 and Domain infrastructure systems. Due to the nature of this scenario, you will see that none of the Exchange server roles have been combined.

Figure 3 - Possible Mobile LAN Configuration with Exchange 2007 SP1

The diagram illustrates a comprehensive network solution for an organization that requires all necessary server services to be provided locally regardless of location. The solution is as small as possible while also allowing for a high degree of fault tolerance and system availability. In the rack, you would also need enough network infrastructure equipment to support the Hyper-V servers and the workstations. The Hyper-V systems would use either an iSCSI or Fiber Channel Storage Area Network (SAN). The SAN should provide enough spindles to deliver the necessary performance for the guest systems. With this scenario, you would have everything you needed in a 42U rack.

In this scenario, you could save a possible 91,012 kWh and $73,891 per year. This information was based on having 14 physical servers and then changing to 3 physical and 14 virtual servers. The Microsoft HyperGreen Tool was used to gather the numbers above.

Technical Checklists

You'll notice that in each of the scenarios described above, if Exchange were deployed on physical infrastructure without Hyper-V, hardware resources would have been underutilized. To help you determine if your Exchange environment is a candidate for server consolidation, we've prepared the following checklists. If these checklists reveal that your hardware is not being fully utilized, you should consider the following possible actions:

- If you are a small organization, you may be able to reduce your server footprint using virtualization, down to as few as three physical servers with full redundancy of Exchange roles.
- If the underutilized hardware is at a branch or remote office that cannot be consolidated to a central datacenter, you may be able to reduce your server footprint there using virtualization.
- In other situations, you may want to make your Exchange environment a bit leaner by revisiting how well your server capacity matches up with your user load. You can reduce the number of physical servers in order to boost utilization to the desired levels; implementing virtualization is not required.

Keep in mind that underutilized hardware is simply a signal that your Exchange environment has excess capacity. This may be by design (to accommodate usage spikes or expected growth) or by accident. Some "breathing room" is desirable, and we have factored this into our checklists. Also, keep in mind that hardware utilization is not the only factor to consider when deciding whether to use virtualization with Exchange. Adding virtualization to an Exchange environment introduces additional complexity in a number of areas, including backup, monitoring, and storage configurations (see the TechNet article for details).

Checklist #1 - Performance Counters

The following checklist outlines performance metrics to monitor that may indicate your Exchange environment does not generate the kind of resource-intensive load that requires a purely physical Exchange 2007 infrastructure. These counters need to be gathered from physical servers, not virtualized servers. As the main planning for Exchange 2007 SP1 infrastructure revolves in great part around planning for the Mailbox Server, the checklist deals mostly with Mailbox Server-based metrics. The idea with these counters is to collect data for a minimum of one week. Once the data has been gathered, you can compare the results to the expected values. If the observed values are less than the expected values in the table below, then your server hardware is being underutilized.
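A hedged sketch of gathering a few of the checklist counters with Get-Counter follows. In practice you would schedule a Performance Monitor data collector set over the full week rather than sampling interactively; the interval and sample count here are only illustrative:

```powershell
# Sample a few of the checklist counters on a physical Exchange 2007 server.
# A real assessment should collect at least a week of data via a
# Performance Monitor data collector set rather than this quick sample.
$counters = @(
    '\Processor(_Total)\% Processor Time',
    '\System\Processor Queue Length',
    '\MSExchangeIS Client (*)\RPC Average Latency'
)

# 12 samples, 5 seconds apart (~1 minute) - illustrative only.
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object {
        $_.CounterSamples |
            Select-Object Path, @{ Name = 'Value'; Expression = { [math]::Round($_.CookedValue, 2) } }
    }
```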
The counters in the table below are from the TechNet article Exchange 2007 Monitoring without System Center Operations Manager. You will need to make sure that you monitor your Hyper-V servers after they are put into production. You can monitor "Processor Performance" on Windows 2008 so you know the operating system is not slowing down your processors' frequency; otherwise, this could lead you to believe you have high CPU utilization, when in fact you have low CPU utilization and the processors are just throttled down to save power.

Common Performance Counters (All Exchange Servers):
- Processor\% Total: should be less than 40% average
- System\Processor Queue Length (all instances): should be less than 5 (per processor)
- Network Interface(*)\Bytes Total/sec: for a 1000-Mbps network adapter, should be below 30-35 Mbps

Mailbox Server-Specific Performance Counters:
- MSExchangeIS Client (*)\RPC Average Latency: should be less than 30 ms on average
- Process(Microsoft.Exchange.Search.ExSearch)\% Processor time: should be less than 1% of overall CPU typically and not sustained above 3%
- MSExchange Store Interface(_Total)\RPC Latency average (msec): should be less than 100 ms at all times
- MSExchange Store Interface(_Total)\RPC Requests outstanding: should be 0 at all times

CCR, LCR and SCR Mailbox Server-Specific Performance Counters:
- MSExchange Replication(*)\CopyQueueLength: should be less than 10 at all times for CCR and SCR; should be less than 1 at all times for local continuous replication (LCR)

Table 1 - Performance Counters Checklist

Checklist #2 - User Profile Factors

A less time-consuming (and less precise) way of judging whether the load on your servers is fully utilizing your hardware is to compare processor cores to user profiles, using the general sizing guidelines provided in the topic, Planning Processor Configurations. To determine the user profile of your organization, refer to the chart in the article (also reprinted below).

User type (usage profile) and messages sent/received per day (approximately 50-kilobyte (KB) message size):
- Light: 5 sent / 20 received
- Average: 10 sent / 40 received
- Heavy: 20 sent / 80 received
- Very heavy: 30 sent / 120 received

Table 2 - Knowledge Worker Profiles for Outlook Users

As noted in the article, a rule of thumb for sizing is that 1,000 active average profile mailboxes will require 1 x processor core (for example, a 4,000-mailbox server with an average usage profile requires 4 x processor cores). A heavy usage profile requires more processor cycles than an average profile, thus for planning purposes, use 750 active, heavy-profile mailboxes per processor core. Using this logic, we can summarize how many users are needed to fully utilize a processor core:

- Light User Profile: recommended 2,000 mailboxes per processor core; fewer than 1,000 active mailboxes per core indicates underutilization
- Average User Profile: recommended 1,000 mailboxes per processor core; fewer than 500 indicates underutilization
- Heavy User Profile: recommended 750 mailboxes per processor core; fewer than 375 indicates underutilization
- Very Heavy User Profile: recommended 500 mailboxes per processor core; fewer than 250 indicates underutilization

Table 3 - User Profile Factors Checklist

Since the recommended number of Average Users is 1,000 per processor core, having an active user population of 500 or less average profile mailboxes would indicate that there are not enough users to fully utilize one mailbox server processor core. Keep in mind that for physical Exchange servers, the maximum number of processor cores efficiently used by the Mailbox server role is eight (see http://technet.microsoft.com/en-us/library/aa998874.aspx); deploying mailboxes on servers with more than eight cores will not provide significant scalability improvements.
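The table above translates directly into a small helper. A hedged sketch, using only the per-core numbers quoted in this checklist and capping at the eight cores the Mailbox role can use efficiently:

```powershell
# Estimate Mailbox server processor cores from active mailbox count and
# profile, using the per-core figures from the checklist above.
function Get-EstimatedMailboxCores {
    param(
        [int]$ActiveMailboxes,
        [ValidateSet('Light', 'Average', 'Heavy', 'VeryHeavy')]
        [string]$UserProfile = 'Average'
    )

    $mailboxesPerCore = @{
        Light     = 2000
        Average   = 1000
        Heavy     = 750
        VeryHeavy = 500
    }

    $cores = [math]::Ceiling($ActiveMailboxes / $mailboxesPerCore[$UserProfile])
    # The Mailbox role does not scale efficiently past eight cores.
    return [math]::Min($cores, 8)
}

# Example: 4,000 average-profile mailboxes -> 4 cores.
Get-EstimatedMailboxCores -ActiveMailboxes 4000 -UserProfile Average
```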
The use of this table to gauge the number of Mailbox server processor cores is a good starting point for reviewing other Exchange server roles, since the Exchange infrastructure planning methodology for Hub Transport and Client Access Servers is in part based on processor core ratios (e.g., Mailbox:Hub and Mailbox:CAS). For example, the Mailbox:Hub Server ratio is 5:1 if the Hub servers have antivirus software installed on them and 7:1 if the Hub servers do not have antivirus software installed. Therefore, if a user population is not likely to fully utilize one Mailbox server processor core, then it logically is not likely to fully utilize a Hub Transport processor core either. This can indicate that using virtualization for the Hub Transport and/or the Client Access Servers may be warranted.

Conclusion

With the release of Windows Server 2008 with Hyper-V and, more recently, Microsoft Hyper-V Server 2008, you have new options for deploying Exchange Server 2007 SP1. In many situations, keeping Exchange on physical hardware provides better manageability and a lower TCO than using virtualization. However, there are some scenarios in which a virtualized Exchange 2007 SP1 infrastructure may allow you to realize real benefits in terms of space, power, and deployment flexibility. The degree to which your current hardware is underutilized is a key factor in determining whether the benefits of virtualization outweigh the complexities that are introduced by adding a virtual layer beneath Exchange. The benefits of virtualization are usually reserved for environments in which the Exchange deployment is too small to fully tax the underlying servers. Small Exchange deployments, remote sites with poor connectivity, certain disaster recovery sites, and "Mobile LAN" deployments are examples of good candidates for virtualization.

Exchange in most organizations is a mission critical application. If you remember this as you design your virtualized environment, and follow the Microsoft support policies and recommendations for virtualized Exchange servers, then you will set yourself up for success.

Useful Links

- Microsoft Support Policies and Recommendations for Exchange Servers in Hardware Virtualization Environments
- Windows Server Virtualization Validation Program
- Exchange 2007 System Requirements
- Exchange Mailbox Server Storage Calculator
- Exchange 2007 - Monitoring Without System Center Operations Manager
- Server Virtualization with Advanced Management (SVAM) Service Offering

Written by: Doug Fidler, Eric Beauchesne, Robert Gillies and Dino Ciccone
Reviewed by: Erin Bookey, Rob Simpson, Matt Gossage, Brent Alinger, Ross Smith IV, Ramon B. Infante & Scott Schnoll
Microsoft Virtualization and Licensing Announcements

Today Microsoft announced some significant changes to its licensing and support policies for applications in hardware virtualization environments. There are two key parts of the announcement worth highlighting for Exchange customers:

- Microsoft now supports Exchange Server 2007 SP1 running on Hyper-V or on hypervisors validated under the Microsoft Server Virtualization Validation Program (SVVP).
- Microsoft is waiving its 90-day license reassignment policy to enable customers who virtualize Exchange to move their licenses between servers within a data farm as often as necessary.

As part of the updated support policies, we have published an article called Microsoft Support Policies and Recommendations for Exchange Servers in Hardware Virtualization Environments. This article includes Microsoft's support policy and recommendations for running Exchange Server 2003 in a Microsoft Virtual Server 2005 R2 environment. It replaces Microsoft Knowledge Base article 320220, which previously detailed the policy and recommendations for this environment. In addition, this article includes Microsoft's support policy and recommendations for running Exchange Server 2007 SP1 in a hardware virtualization environment. Microsoft Support Policies and Recommendations for Exchange Servers in Hardware Virtualization Environments is a must-read for anyone considering a virtualized Exchange environment.

To learn more about the Microsoft-wide changes to licensing and support, see:

- Microsoft press release - http://www.microsoft.com/Presspass/press/2008/aug08/08-19EasyPathPR.mspx
- Volume Licensing Brief from WWLP - http://www.microsoft.com/licensing/resources/volbrief.mspx
- Support policy for Microsoft software running in non-Microsoft hardware virtualization software - http://go.microsoft.com/fwlink/?linkid=3052&kbid=897615
- Support partners for non-Microsoft hardware virtualization software - http://go.microsoft.com/fwlink/?linkid=3052&kbid=944987
- Microsoft server software and supported virtualization environments - http://go.microsoft.com/fwlink/?linkid=3052&kbid=957006