Robert's Rules of Exchange: Storage Planning and Testing

Published Jan 07 2011 04:00 AM


Robert's Rules of Exchange is a series of blog posts in which we take a fictitious company, describe their existing Exchange implementation, and then walk you through the design, installation, and configuration of their Exchange 2010 environment. See Robert's Rules of Exchange: Table of Blogs for all posts in this series.

In this post, we want to discuss some of the thinking you should do as you decide what your storage infrastructure for Exchange 2010 will be. Further, we will discuss how and why you should test your storage. These processes belong early in the planning stages for Exchange 2010, because problems found after you move users onto the storage are a lot more painful to fix than problems found before any users are affected.

Why a Whole Blog on Storage?

A Short History of Exchange Storage

Historically, Exchange deployments have seen storage represent a significant portion of deployment costs, both capital expenditures (purchase cost of the storage) and operational expenditures (ongoing maintenance of the storage). In the Exchange 2003 timeframe, to get any sort of "high availability", you needed what we now call "single copy clusters" using shared data, and that typically meant a SAN infrastructure, which is expensive and complex. We would also see companies that would aggressively keep mailbox sizes small because of the cost of disks when requiring RAID 10 or other similar solutions to get the IO performance out of the disk subsystems. Customers would buy as much space as they could afford, and then set mailbox sizes small enough that they wouldn't run out of space on the SAN.

When users would complain about mailbox sizes, customers would go to great lengths to give more space. Thus was born the third party "stubbing" archival solution. These products would take attachments and old emails out of Exchange, replace them with a very small message "stub", and put the email data into a separate system, which had a separate storage infrastructure. So, we're adding the cost of the archival software, the cost of additional storage, the cost of teaching someone to manage that system, and the complexity of another whole system that must be monitored and managed.

Something had to change, and Microsoft heard our customers loud and clear. With Exchange 2007, we reduced IOPS/mailbox (Input/Output Operations per Second - a measure of the disk load generated) by 70% compared to an equivalent Exchange 2003 user. With Exchange 2010, we reduced IOPS/mailbox by another 70% over Exchange 2007. That means that Exchange 2010 generates approximately 10% of the IO requests that an equally loaded Exchange 2003 system would.
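The compounding is easy to verify with a quick sketch. The reduction percentages are the ones quoted above; the function itself is just illustrative arithmetic:

```python
# Two successive 70% IOPS reductions compound multiplicatively.

def remaining_fraction(*reductions):
    """Multiply out successive fractional reductions to get what remains."""
    frac = 1.0
    for r in reductions:
        frac *= (1.0 - r)
    return frac

e2007_vs_e2003 = remaining_fraction(0.70)        # 30% of the original IO
e2010_vs_e2003 = remaining_fraction(0.70, 0.70)  # 9% of the original IO

print(f"Exchange 2007 vs 2003: {e2007_vs_e2003:.0%} of the IO")
print(f"Exchange 2010 vs 2003: {e2010_vs_e2003:.0%} of the IO")
```

So "approximately 10%" above is really 9% when the two 70% reductions are compounded.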

This opens up a whole new set of possibilities utilizing very large, very slow, very cheap disks such as 7200RPM, 2TB SATA or SAS disks.

Simplicity Is Key

One of the most important things to remember as you are planning your Exchange 2010 architecture (or any other solution, for that matter) is to keep things simple. Any time you add complexity to your solution, you raise the chance of deployment failure, you introduce the possibility that capital expenditures or deployment costs will be higher, and you most likely raise the operational costs of the solution as well. So, for every single design decision we make, we will drive complexity down as much as possible. For this discussion, we should consider the complexity of our storage infrastructure.

When you design a SAN infrastructure, the key is to provide enterprise storage that is highly available. This need for high availability drives complexity high. Typically, for a SAN-based storage infrastructure, your servers are connected via multiple fibre channel HBA connections to multiple, redundant fibre channel switches. Then these redundant switches are connected via redundant connections to multiple SAN I/O modules. The SAN modules are then connected (via more redundant connections) to "frames" that hold many, many disks. There are all kinds of computers and controllers and software and firmware throughout this storage infrastructure, and they all need to be managed. So if you deploy a SAN and have true high availability requirements (24x7 datacenter requirements), you end up with a staff of people trained to do nothing other than manage your storage infrastructure, including in many cases full-time consultants from the SAN vendor for the 3-5 years that you will be using that storage infrastructure. SANs have some very interesting replication technologies and backup technologies built in, but every single thing you want to add on is at a cost and at an addition of complexity.

Contrast that with a JBOD solution, using simple servers and simple SCSI connections, where we provide redundancy through multiple copies of your data (say 2 copies in the local datacenter and 2 copies in the remote datacenter, which seems to be a quite popular scenario), and you have a very different picture. You don't need redundant network or fibre channel connections. You don't need RAID controllers (well, you do need to RAID the OS drives, for instance, but not all those Exchange data drives). You don't need redundant switches for both the MAPI network and the storage area network. You can greatly simplify the solution by having a single MAPI network connection per server, a replication network or two per server, and a single connection to a set of disks that are not in a RAID configuration. We then allow Exchange to fail databases or entire servers over, if necessary. If 2 copies in a single datacenter aren't good enough for you, go to 3 copies. Or go to 4 copies. Whatever works for your environment and meets your requirements.

The significant point here is that we don't want any complexity that we can avoid. When having discussions about disks with my customers, I always start at JBOD. Not every customer is going to deploy on JBOD, but we must have the discussion. We must understand the ramifications that the complexity of anything other than a simple JBOD deployment brings to the table. I push hard to not move away from the simplest solution unless there are real requirements to do so. (And I am fond of saying that "because we don't do it that way in our shop" is not a requirement!)

RAID vs JBOD, DAS vs SAN, Oh My!

Most people have low confidence in the idea of not using RAID to store their important email data. But you should be aware that somewhere in your personal life, you are probably storing precious data on a JBOD system - either your email or some other important data. Google uses JBOD in the Google File System - that's right, your Gmail account is stored without RAID. Much of Microsoft's Office 365 cloud service utilizes JBOD for storage (the older Exchange 2007 architecture leverages a RAID storage infrastructure, but the Exchange 2010 infrastructure in Office 365 is deployed on JBOD). Microsoft's own internal deployment of Exchange 2010 has over 180,000 users on a JBOD infrastructure. Further, Microsoft's TerraServer uses JBOD to store all of the data presented through that web site, so this isn't just email moving in this technological direction.

Another thing customers "push back" on is the idea that "nearline" SATA/SAS class disks fail more often than enterprise class fibre channel (FC) or SCSI disks, and that a single disk failure in a RAID solution doesn't affect service availability - the claim being that enterprise class disks in a RAID solution fail less often, and that when they do, the user impact is lessened. To answer that, we want to look at the two claims separately. First, do midline/nearline disks fail more often than enterprise FC or SCSI disks? According to many studies ("Understanding disk failure rates: What does an MTTF of 1,000,000 hours mean to you?", "Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?", "Failure Trends in a Large Disk Drive Population", and "Empirical Measurements of Disk Failure Rates and Error Rates"), nearline disks do not fail more often than enterprise disks; rather, all disks seem to fail more often than their published MTTF (mean time to failure) numbers would suggest.

Note that in Microsoft's Live@EDU infrastructure, we utilize nearline 7.2K SATA drives and see a 5% annual failure rate (AFR), while in MSIT we leverage nearline 7.2K SAS drives and see a 2.75% AFR. Microsoft therefore recommends that if you are considering nearline drives in a JBOD architecture, you do so with the 7.2K RPM SAS drives rather than SATA.
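Those AFR figures translate directly into how many disk replacements you should budget for per year. A back-of-the-envelope sketch, using the 5% and 2.75% AFRs quoted above and, as an assumed example population, the 168-disk JBOD layout discussed later in this post:

```python
# Expected yearly disk replacements for a given population and AFR.
# The AFRs are the Live@EDU/MSIT figures quoted in the post; the 168-disk
# population is an assumption borrowed from the JBOD sizing example.

def expected_failures_per_year(disk_count, afr):
    """Expected number of drive failures per year (disk_count * AFR)."""
    return disk_count * afr

for name, afr in [("7.2K SATA", 0.05), ("7.2K SAS", 0.0275)]:
    failures = expected_failures_per_year(168, afr)
    print(f"{name}: ~{failures:.1f} disk replacements/year for 168 disks")
```

For a population this size, the SAS drives save roughly four replacements (and reseeds) per year, which is part of why the SAS recommendation above makes sense.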

What about the impact of disk failures in RAID systems vs non-RAID systems? Do we see less user impact from a RAIDed disk failure than from a JBOD disk failure? With RAID, we have to engineer for a RAID "rebuild" factor. RAID solutions can keep spares available that the system automatically "swaps in" - that is, makes live and part of the RAID array. While the RAID array is being rebuilt, there is an impact to the IO capabilities seen by the operating system and, therefore, by Exchange. This makes sense: the data from the remaining members of the RAID array is being read so that the data on the failed drive can be reconstructed on the newly promoted "spare", so you have an unusually heavy read cycle on the disks that did not fail, and an unusually heavy write cycle on the new disk in the array. To properly size an Exchange system, you have to plan for this RAID rebuild process to occur during the heaviest utilization period of the day, so there is a cost associated with this beyond the cost of the RAID system itself - you need more disks to provide the appropriate IOPS during the rebuild period. You can see the worst case impact of this on the number of disks required in the RAID Rebuild numbers of the Exchange 2010 Mailbox Server Role Requirements Calculator. Build a RAID solution in the calculator, go to the "Storage Design" tab, and change the appropriate RAID rebuild overhead percentages to see what that impact really is. Note that to get the "right" RAID rebuild overhead percentage, you should work with your storage vendor.
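As a rough illustration of why rebuild overhead costs extra spindles, here is a hedged sketch. The peak IOPS figure, the per-disk IOPS, and the 25% rebuild overhead are hypothetical placeholders, not calculator output; get real overhead percentages from your storage vendor or the calculator:

```python
import math

# If a rebuild consumes some fraction of each surviving disk's IOPS,
# you must size the array so peak load fits in the IOPS that remain.

def disks_for_iops(required_iops, iops_per_disk, rebuild_overhead):
    """Disks needed so the peak load still fits during a rebuild."""
    effective = iops_per_disk * (1.0 - rebuild_overhead)
    return math.ceil(required_iops / effective)

required = 12_000   # hypothetical peak database IOPS for the array
per_disk = 180      # hypothetical 15K FC disk

print(disks_for_iops(required, per_disk, 0.0))    # no rebuild running: 67
print(disks_for_iops(required, per_disk, 0.25))   # 25% rebuild overhead: 89
```

With these assumed numbers, planning for a rebuild during peak load adds roughly a third more spindles - disks you pay for but only "use" during a rebuild window.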

With Exchange 2010, we have moved most of the data resilience solution away from RAID (where the redundancy is disk based and not data aware) into the Exchange 2010 high availability architecture (where the application is certainly data aware across all instances of the data). As detailed in the TechNet article New High Availability and Site Resilience Functionality, Exchange 2010 implements the capability to have multiple copies of your email data (up to 16 copies, which is generally a bit excessive - we will typically see 2-6 copies, with both 2 and 6 being extreme cases), and even goes to great lengths to "fix" a given copy of the data if a database page is found to be inconsistent, using our single page restore technology. The worst case scenario is that a single copy of your data becomes corrupted to the point that it can no longer be used; if that was the "live" copy of the database, you have a single database fail over to another server. This generally causes an outage of less than 30 seconds, and since users connect to the CAS server now and not the mailbox server, they might not even notice the outage!

You still might say, "But the RAID solution will automatically put a disk into the array and rebuild it for me, and JBOD just won't do that." This is correct. Somehow you have to get a spare disk online and available, formatted at the OS level, and reseed the database that was lost. The storage subsystem in some cases will be able to automatically bring a spare disk online and present it to the appropriate server. And it is a fairly straightforward process to script the mounting and formatting of the LUN, and the reseeding of the Exchange database (from passive, mind you, thus not affecting users while the seed process takes place). It isn't automatic, but it can certainly be automated! And, this would allow you to hire a young intern, get them A+ certified and let them be the person walking the datacenter floor looking for the blinking red lights and replacing disks rather than some US$200/hour consultant.

What Should I Choose?

I have talked to customers who claim that storage represents half of their Exchange deployment cost on a per-mailbox basis. Microsoft did a very large amount of work to let you make storage decisions that drive those storage costs down very aggressively. In a blog post like this, I can't tell you which storage architecture to choose. Microsoft supports SAN and DAS+RAID along with DAS+JBOD because no single storage architecture is right for every customer out there.

What I can say with confidence is that unless you are in a situation where money is no object and will continue to be no object, you must seriously evaluate the storage solutions available to you. If you just deploy Exchange 2010 on a storage architecture because that's the way you did Exchange 2003, or you choose your storage architecture because the corporate policy states that you must have all data use an enterprise storage solution, then you are doing yourself and your company an injustice.

Larger, cheaper disks mean that an organization that moves aggressively to a JBOD (RAID-less) direct attached storage infrastructure utilizing "big, cheap" disks can drive "per mailbox per month" costs down. When online "cloud" email providers such as Google and Microsoft are offering email services in the $5/month/mailbox range, organizations that wish to host their own email should also be looking to lower their costs at least into that same ballpark. Whether you are a company that answers to shareholders, or a government agency that answers to the taxpayer, driving costs down to better utilize the money in your budget while still providing a better solution for your users should be of paramount importance.

With the economy the way it is, we all need to save money everywhere we can, and I certainly would hate to be the guy who chose the complex and expensive solution when we could have saved significant storage costs and provided a simpler, more available, and more manageable solution.

Size vs. Performance

Sizing storage for Exchange has always been about the tradeoff between size and performance. You use a tool like the Exchange Profile Analyzer to find your users' profiles - how many messages they send/receive per day, how large those messages are, how much data they have in their mailboxes, and so on. From this information, we can estimate very closely what your IOPS requirements will be on your disk subsystems. Then you define your requirements for how large you want the mailboxes to be. We take those together and put them into the Exchange 2010 Mailbox Server Role Requirements Calculator, which tells us how many disks we need to meet our goals and what type of RAID or JBOD system we need. We can then balance size vs. performance by changing the spreadsheet inputs (disk size, mailbox size, number of users with a given IO profile, etc.) to see the impacts.
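The heart of that balance can be sketched in a few lines: the disk count is driven by whichever constraint bites harder, IOPS or capacity. The 0.10 IOPS-per-mailbox figure and the disk specs below are hypothetical placeholders, not calculator output; use the Mailbox Server Role Requirements Calculator for real sizing:

```python
import math

# Disks needed is the larger of the IOPS-driven count and the
# capacity-driven count. All inputs here are illustrative assumptions.

def disks_needed(mailboxes, iops_per_mailbox, mailbox_gb,
                 disk_iops, disk_usable_gb):
    by_io = math.ceil(mailboxes * iops_per_mailbox / disk_iops)
    by_size = math.ceil(mailboxes * mailbox_gb / disk_usable_gb)
    bound = "IO" if by_io >= by_size else "capacity"
    return max(by_io, by_size), bound

# 24,000 users, 2GB quotas, on hypothetical 7.2K 2TB nearline disks:
count, bound = disks_needed(24_000, 0.10, 2,
                            disk_iops=55, disk_usable_gb=1800)
print(f"{count} disks, {bound}-bound")
```

When a configuration is IO-bound like this, shrinking the mailbox quota changes nothing, which is exactly the behavior the JBOD scenario below exhibits.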

We can also run "what-if" scenarios around small, fast disks (using RAID) vs. large, slow disks (not using RAID). To see this, let's take a copy of the calculator downloaded directly from the web site and make as few changes as possible. The default numbers in the calculator produce a 6-node DAG with 3 copies, all in a single datacenter. The default mailbox profile is 24,000 mailboxes, 100 messages sent/received per day, 75KB average message size, 2GB mailbox quotas, and 14 day deleted item retention. Let's just play with the disks for now and not change anything else.

First, let's look at smaller, faster disks in a RAID array. I'll turn off "consider JBOD", and set all disks to be 15K RPM FC disks in a 600GB size. This solution (remember, I only changed the disks here) will take 666 disks, and this will be a size constrained solution. In fact, by adding a 2.0 IOPS multiplier for every user, we won't change the number of disks required in this solution. Removing the IOPS multiplier and halving the mailbox size to 1GB reduces the number of disks to 378. (Dropping the mailbox size to 256MB still leaves us needing 246 disks.)

Now, let's take the exact same scenario, but look at the 7.2K 2TB drives utilizing JBOD. Turn "consider JBOD" back on, and change the drives to our 7.2K RPM 2TB drives. This results in 168 drives (2GB mailboxes for 78 disks fewer than the 256MB mailboxes above!). This configuration looks fairly balanced between IOPS and disk space utilization. If I cut the mailbox size in half, the number of disks doesn't change, meaning that I am pretty much IO bound in this configuration. I can add half a GB to each mailbox (2.5GB per mailbox) and it only bumps us to 186 drives in the organization.

Exchange 2010 databases still have a random IO profile by nature, and because of this we are still concerned with the IOs generated by each mailbox when we design our storage infrastructure. But in the above example, even though it is admittedly simplistic and only looks at the disks themselves, you can look at the costs (I'll let you get your own disk prices from your favorite vendors) like this:

168 disks (7.2K 2TB SATA) * (cost per disk) = (total disk cost)

666 disks (15K 600GB FC) * (cost per disk) = (total disk cost)
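To make the comparison concrete, here are those formulas filled in with hypothetical per-disk prices. The $300 and $900 figures are placeholders I've assumed purely for illustration; check current prices with your vendors:

```python
# The two cost formulas above, with assumed (not quoted) per-disk prices.

def total_disk_cost(disk_count, cost_per_disk):
    return disk_count * cost_per_disk

jbod = total_disk_cost(168, 300)  # 7.2K 2TB SATA at an assumed $300 each
raid = total_disk_cost(666, 900)  # 15K 600GB FC at an assumed $900 each

print(f"JBOD: ${jbod:,}")             # $50,400
print(f"RAID: ${raid:,}")             # $599,400
print(f"Ratio: {raid / jbod:.1f}x")   # 11.9x
```

Even with generous assumptions in favor of the FC disks, an order-of-magnitude gap like this is why customers are astounded by the difference.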

Most of the customers I work with are astounded at the cost difference. PSTs are a problem (users can't get to them from OWA, Windows Phone 7, iPhone, or Blackberry devices; they can't be stored on a file share in a supported way, can't be backed up from desktops easily, can't easily be searched for compliance scenarios, etc.); users are screaming for larger mailboxes; and third party archive solutions that relieve the pressure of mailbox size on smaller disks come at a cost. There are just so many reasons that utilizing the larger, slower, and less expensive nearline disks is very attractive.

The Archive Mailbox

Exchange 2010 provides a new capability with the Archive Mailbox. This is, at its most basic, a second mailbox associated with a given user. Archive mailboxes are only available to "online" users: Outlook 2007 and Outlook 2010 users currently connected to the network (Outlook 2007 support requires the Outlook 2007 December 2010 Cumulative Update or later), and OWA 2010 users. They are not available to earlier Outlook clients, to offline Outlook users (no matter the version, and whether or not cached mode is configured), or to POP, IMAP, Exchange ActiveSync, Blackberry, or most Exchange Web Services clients (the support is there in EWS, but clients are limited at this time). Once again: as of January 2011, the only clients that work are OWA 2010, Outlook 2007, and Outlook 2010, and only when connected to a network where Exchange is published (the Internet for Outlook Anywhere, or the corporate network for RPC/MAPI communications). That's it. This means there are some limitations, but the capabilities are still quite interesting and attractive.

For instance, what if your users have extremely large mailboxes - say 25 or 40 GB mailboxes - and they need to travel with laptops. OST files to support mailboxes of that size would probably not work well on your typical 5400 or 7200 RPM laptop drive (but will work great on the latest generation solid state drive). It is possible that you could partition their mailboxes to give them a 2 or 5 GB primary mailbox that can be synced to the client with cached mode, and then a 20 or 35 GB archive mailbox for older, less accessed data, which would be available when the user is network connected.

Or, for another very attractive scenario, let's assume that you contract with Microsoft Office 365 to host those 35GB archive mailboxes. You host the primary mailbox, and you have Microsoft Office 365 host the online archive mailbox. That, to me, sounds like a quite interesting possibility for quite a few customers!

One thing to keep in mind is that for almost every function typically associated with the archive mailbox, those functions are also available with a simple primary mailbox. Retention policies (which are policies that delete messages based upon their age), as well as things like "legal hold" or "single item recovery" that some customers seem to associate with the archive mailbox, are all available to you even without an archive mailbox. Very few features are specific to the archive mailbox, other than the actual separation of data, and the ability to move those archive mailboxes to other physical servers, other DAGs, or even into the cloud with Microsoft's Office 365 services.

For Robert's Rules, the use of these archive mailboxes for the majority of our users is out of scope. Our idea of simplicity of deployment dictates that if we don't have a requirement driving us to implement a feature, we should not do so. We will implement some archive mailboxes to show how policies can be used to manipulate data in the separate mailboxes, as well as to show how users would use these mailboxes, and hopefully to show how I can have a hybrid solution with some archive mailboxes stored in the Office 365 cloud, but that will be just for demonstration purposes and will not be part of the primary deployment for our Robert's Rules customer.

Exchange Storage vs. Third Party Archival

This is another discussion I have with a lot of customers, and one where Exchange 2010 has a fantastic "story". As I mentioned above, one thing that customers did to control mailbox size in the Exchange 2003 timeframe was to implement an archival solution that utilized a "stubbing" technology of some sort or other. Based on the size of the message and/or the age of the message, the email payload in Exchange could be extracted from the Exchange database itself and stored in another system, leaving only a "pointer" or "message stub" in Exchange. This stub could be utilized to retrieve the message payload when the user wanted to open that message.

At Microsoft, our customers have told us of a few limitations of this type of system. One thing I've heard many times is that users don't like the stubbing solutions. It makes data access difficult, and the data access is never the same between two systems (say OWA and Outlook). Quite often the users cannot access their stubbed data from mobile devices, or the implementation for the mobile devices lags behind the main implementation.

Another issue we have with these stubbing or archival systems is that they bring complexity and the associated costs. As we look at our overall messaging service and try to drive down complexity and cost, this is certainly "low hanging fruit". With Exchange 2010, Microsoft has done a lot of work to enable large mailboxes. 5GB, 10GB, even 25GB mailboxes are becoming more common at our customer locations. Utilizing simple JBOD storage and large, slow, cheap drives (7200 RPM 2TB drives, for instance), we can implement these large mailboxes more cheaply and with higher availability than we ever could with a previous version of Exchange.

So, the question becomes this: If you can implement your storage on the cheapest storage infrastructure possible, and you can provide your users with the capability to store everything in their mailboxes for 5 or 10 years, why would you want to add a stubbing solution, raising complexity by adding another software system and storage system to maintain?

Please keep in mind that we aren't talking about a compliance situation here. We are not talking about a situation where we need to keep every message sent through the system for 7 or 10 years, or where we need complex case management systems or similar. That is a journaling scenario, and Exchange does provide the technologies to interface with journaling systems. That would be a very different discussion, and one that we aren't going to get into in this blog post (that's for another time).

Robert's Rules doesn't have a requirement for any third party archival products - we will be implementing large mailboxes on simple, inexpensive storage for this solution.

Testing With Jetstress

After you have made your storage related decisions, designed your disk subsystem, and purchased and implemented it all, you still need to test it. The beauty here is that we have a great tool called Jetstress that utilizes the Exchange ESE database binaries to drive databases on your disks in the same manner that Exchange will in your production environment.

Even better, a good friend of mine named Neil Johnson recently released his Jetstress Field Guide document on the Exchange Team Blog. It is a fantastic document that helps you understand exactly what you should do to test your storage subsystem, and why. I certainly used it when re-familiarizing myself with Jetstress just before my Exchange 2010 MCM rotation. I can't recommend this documentation enough for those of you who will need to run Jetstress! There is nothing I can add here that is not in that doc, so go grab it if you haven't already.


Storage and storage performance is important to Exchange 2010, just as it has been to every other version of Exchange. This is why we start all of our Exchange planning with the mailbox role servers - the storage itself and the processing power and memory necessary to provide access to that storage. It all keys from the mailbox servers.

So, as you are thinking about your storage infrastructure, remember to start simple. Design as simple as you possibly can. Only add complexity when you absolutely must, and only when based on messaging requirements. Try to break the mold of thinking about Exchange 2010 as if it needs to be designed like an Exchange 2003 environment. Try to leverage the new capabilities in Exchange 2010 to provide the functionality that your users need to better do their jobs.

And Some Thanks.

Thanks to Andrew Ehrensing, Ross Smith IV and Matt Gossage for some of the links and storage information above. And as always, thanks to Ross (again) and Bharat Suneja for tech review and formatting/publishing help. I may not say it every time, but gentlemen, it is appreciated every time!

Robert Gillies

Not applicable
Great explanation on JBOD storage that I hope will start seeping into the hearts and minds of the IT community. Convincing a client that they don't need a SAN seems to be an uphill battle at this point and it's good to see some headway finally being made. Guess I shouldn't be surprised ... it's been the status quo for a decade.

The only thing I'll comment on is archiving: I still just don't get it. With this huge push to cheap storage I just don't get the gain over the added complexity of basically asking a user to manage multiple mailboxes (really, that's all archiving does). The cloud angle is an interesting one ... but a monthly fee vs. a one-time purchase of a few banks of 7200RPM disks ... archiving just feels like an "odd feature" that attempts to fill a demand that has grown from these legacy storage ideas that also spawned file stubs. We don't have 1GB mail limits anymore. We don't need to buy banks of 4-figure fiber drives anymore. We just don't need the complexity that Archive introduces when it's perfectly manageable and cost effective to just throw 25GB at a user and be done with it.

But other than that,  fantastic article I will be throwing at lots of clients :)
Not applicable
@Justin.G11: Personal Archives allow you to keep primary mailboxes smaller and move older, less-frequently accessed email to an archive mailbox which can be accessed transparently using Outlook 2010 (and now) Outlook 2007, or using Outlook Web App (OWA) from any computer using any browser.

At first look, it's easy to dismiss the archive as "just another mailbox", and the idea of dumping everything in a single mailbox indeed seems very attractive. However, when a database grows too large, you partition it. This is what Exchange 2010 does, and the older, less frequently accessed partition (the archive mailbox) can either reside on the same mailbox database, or another MDB on the same or another Exchange 2010 SP1 Mailbox server, or in the cloud with Office 365. This partition, which for most users is likely to be larger than their primary mailbox and mostly consist of older, less-frequently accessed items, isn't synced locally to the user's computer even with Outlook 2010/2007 in Cached Exchange Mode.

Whether the archive is stored in a separate (non-Exchange) database, as many products do, or within Exchange as "just another mailbox", the data still resides in a database. With the Exchange 2010 archive, it resides in the Exchange database. It offers less management overhead, a single management interface (managing archive mailboxes is no different than managing primary mailboxes, and it doesn't introduce another type of database to be managed), helps you reduce or eliminate the risks from PST files, and lowers the costs of e-discovery (using Multi-Mailbox Search), should your organization need to.

The user isn't managing multiple mailboxes: once provisioned, the archive appears automatically in Outlook and OWA, and mail can be moved automatically to the archive using archive/retention policies, or manually by users (drag-and-drop, cut-and-paste, Inbox rules). For users, the archive offers the benefits of PST files, minus the file management and storage overhead of PSTs.

Of course, (leaving all mail in) a single, larger primary mailbox can also be a perfectly acceptable solution if it meets your organization's requirements.

Not applicable
@Justin.G11 - Just to add to what Bharat said, I somewhat agree with what you say.  Most of my customers don't need the archive mailbox.  Most of them can get by with just throwing large mailboxes at their users.  In fact, this is what Microsoft is implementing internally.  Only people in our "dogfood" environment have archive mailboxes.

BUT, the archive mailbox does have some interesting usage scenarios.  I start talking to customers about not having the archive mailbox.  As I said, I always start with the more simple scenario, and the archive mailbox is an added complexity.  Only when the customer requirements lead us to adding the complexity of the archive mailbox do we add that capability to the discussions.

It is quite possible that your customers will never bring you a scenario where you need to consider the archive mailbox.  ;)  I work with some customers that regularly have mailboxes well over 25GB in size, and when those users are mobile (think laptops), the archive mailbox really starts to make sense.
Not applicable
Hi guys,

It looks like the Outlook 2007 update to work with ex2010 personal archives is hosing systems and has been pulled by Microsoft.  Any idea when the replacement update is coming?
Not applicable
The update has not been pulled at this time. As with any KB article, you should only follow its steps if you are in fact affected by the symptoms.
Not applicable
Cracking article, just what we need as we are about to make the SAN decision or not for a 2007 to 2010 upgrade. For an SME, the SAN is expensive and it doesn't just stop at Exchange, but handles the entire organisation data including document files as well as db files.

So, my question is, how do we go about solving the file server misery of pruning back and moving data around to free up [current] DAS disk space in the same way as you have described here? We've tried using DFS to point to various file shares hosted on different servers, but it's not a management solution for disk space. Disk quotas are the same, as users not only have their own user documents but also departmental ones scattered in different places. I see that quotas have improved to be folder based now, but I still see the need for a SAN for continual storage growth.

Finally, JBOD seems an excellent solution [we use cheap Drobo storage here currently], but it still needs backing up, so stubbing systems (Enterprise Vault, for example) win here with a managed solution designed to help not only with the management of files but also with the reduction in backup time/cost.

If I should post this elsewhere, please advise.

Thanks for your time, though.
@Neal - good comments, thanks for that.  And this is as good a forum as any for posting questions directly related to the post that generates those questions.  I can't provide consulting services or be a help desk in the comments, but I can certainly discuss the topics of the post!

Keep in mind here that while I am a proponent of DAS and JBOD for Exchange data storage (when you have enough copies in your DAG to be within the support boundaries), I am in no way advocating that this means the death of the SAN.  SANs will still be useful in many situations - file shares, database storage for SQL Server or Oracle, etc.  My discussion above is strictly centered around Exchange database storage in a DAG with multiple copies.

Backups are another discussion, and one that WILL drive your storage discussion as well.  Some of our partners in the storage provider space do have interesting backup solutions.  I have customers on Exchange 2010 using NetApp SANs and using their backup solution, which does provide some nice capabilities.  I also have customers that are looking seriously at Exchange Native Data Protection - no "traditional" tape-based backups.  You have to drill into the requirements of the customer and look at what you need to do for "backup and restore" capabilities.  DPM and other software VSS solutions (we have partners in this space as well) provide some great capabilities for JBOD-based Exchange data.

Bottom line is that just like with any space where there are multiple vendors, none of these solutions is right for every customer.  It all comes back to the requirements.  Always start as simple as possible, and add the complexities that are necessary to meet the requirements at hand.
Hi Robert, I love your article!  I have been arguing with clients against purchasing SANs just because they need Exchange storage for a long time.  It's hard enough just to convince businesses to buy Tier 2 storage rather than cheap servers with JBOD setups.  It's good to have another article to point them at to help me win my battles.

See you at the Global Summit?
@Clint - I don't think I'll make the Global Summit this year.  I have too much other stuff going on that conflicts.  Maybe next time!
Could you comment on Exchange 2010 with JBOD when it is running as a guest in a virtual host? My current understanding from my Windows server crew is that we need a SAN in that environment. Thanks
Great article, a lot of text to type, but it's worth it!

Keep it going... and take it deeper!


You don't need a SAN in a virtual environment. If you want to use the High Availability features of a virtualization solution (migrating a guest from host to host), you'll need a SAN.
But Exchange doesn't support combining Exchange HA (DAGs, etc.) with virtualization HA (migration, etc.).
So it is possible to run Hyper-V (or any other virtualization platform) without using SANs.
@Mike1816 and @jader3rd - whether to go with SAN or DAS/JBOD is certainly a great question when you are thinking about virtualization.  As a matter of fact, the decision-making process around virtualization is the subject of my next blog article!  It is in the works now after a brief holiday vacation.  :)

And @ITLee - thanks for the kind words!
What about backup differences when using DAS as opposed to SAN? What about snapshotting with DAS - is that possible? Oh, and also don't forget hot growth/adding of disk volumes when end users fill up disks...
@Daniel Eason - Good questions, thanks!  What you cannot do with direct attached storage is "hardware VSS".  Companies like EMC and NetApp have some interesting hardware VSS solutions for backing up Exchange data.  With a DAS solution, you have to look at "software VSS" solutions like Microsoft's DPM or others (IBM's Tivoli has a software VSS that some of my customers use, and I believe there are others out there).  This is certainly something to consider when you design your Exchange infrastructure.

As for hot growth or dynamically adding disks or volumes to your Exchange systems, that is certainly something that SANs allow that DAS makes more difficult.  The question there still comes down to a few things.  

First, if you engineer DAS/JBOD for growth, projecting how much data your users will store over time, you can predict how much space you will need.  By doing this, you can purchase enough JBOD storage to house the data your users will generate over the 3- or 5-year life of the hardware.

Second, you can do something called oversubscription of your disks.  This is where you actually purchase fewer disks than you provision in your mailbox quotas.  If you give 1,000 users 10GB mailbox limits, you need 10,000GB of space, right?  But if you only purchase 5,000GB of disk, that is oversubscribing your disks.  Then later, if you run out of space, you can purchase more storage.  SANs allow for this.

BUT...  You must take into account that the users will generate the same number of IOPS (basically) with a 5GB mailbox as they would with a 10GB mailbox.  So if you purchase enough spindles to give them space for 5GB, you will then later have to purchase the same performance level of spindles so that you can extend the disks/volumes available to Exchange while giving the same IO performance.
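To make that trade-off concrete, here is a back-of-the-envelope sketch in Python. The numbers are illustrative only, and the per-mailbox IOPS profile is my own assumption for the example, not official sizing guidance; the point is simply that the IOPS requirement depends on the user count and profile, not on the quota you provision.

```python
# Back-of-the-envelope oversubscription math (illustrative numbers only;
# the per-mailbox IOPS figure is an assumed profile, not sizing guidance).
users = 1000
quota_gb = 10             # provisioned mailbox limit per user
purchased_gb = 5000       # disk capacity actually bought (oversubscribed)
iops_per_mailbox = 0.12   # assumed profile; measure your own users in practice

provisioned_gb = users * quota_gb
oversubscription_ratio = provisioned_gb / purchased_gb
required_iops = users * iops_per_mailbox  # note: independent of quota size

print(f"Provisioned: {provisioned_gb} GB, purchased: {purchased_gb} GB "
      f"({oversubscription_ratio:.1f}x oversubscribed)")
print(f"Required IOPS: {required_iops:.0f} "
      f"(the same whether quotas are 5GB or 10GB)")
```

Halving the capacity purchase does nothing to the IOPS requirement, which is why spindle count, not raw space, often ends up driving the storage design.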

Microsoft's recommendation is to purchase the hardware (servers, processors, memory, disks, everything) such that you are making full use of the hardware as deployed, planning for worst case in your growth scenarios and worst case in your failures (both).  Then, if you need to grow your system, you plan for adding more servers to the DAGs with disks on those servers.

This is the harder part of planning Exchange.  Most of us don't have a "crystal ball" and training at Hogwarts such that we know how to tell the future.  But, quite frankly, most of my customers can project growth, and be fairly accurate.  My customers typically don't dynamically grow their Exchange storage over the years.  They set limits and stick with them.  Of course, YMMV.  ;*)
Thanks for this excellent article; one angle I would like your view on: you haven't discussed SAN/iSCSI. From what I have understood, you can use cheap disks and JBOD in a SAN/iSCSI configuration as well. Is this an option in a large Exchange 2010 installation (70,000+ mailboxes and growing)? If so, I assume that we would get about the same storage cost as with DAS/JBOD.
@Nils - SAN with iSCSI is still quite complex.  You have to test with Jetstress such that you completely load your iSCSI network and make sure that you have the right bandwidth and that the disk subsystem can support the load from Exchange.  

PLUS, typically these scenarios introduce a shared storage infrastructure where the actual SAN that the iSCSI connects the servers to is also serving storage for a large database like SQL or similar.  How do you test against that?  How do you model the IOPS of that SQL Server database?  This is not trivial.

Bottom line is that this is supported.  I know of customers using this (at the scale you are talking about).  But, it brings complexity, and I am against bringing the complexity if you can avoid it.  I have nothing against iSCSI per se, as long as the cost is reasonable and the performance tests show that you will get the performance required out of the storage.  But I am against the complexity that it brings, at least where I can avoid it.
Thanks for the article!
Excellent explanation!
Thanks again!
Hi, could you mention single instance storage and how it would impact administrators moving from Exchange 2003 to 2010?
@Chrisc.Rachele - Thanks!

@Anand - Without going into much detail, the changes in the database schema that allow Exchange to support up to 100,000 items in a folder (as opposed to 5,000 per folder in E2k3) also caused us to lose Single Instance Storage, or SIS.  This was certainly a concern for the Exchange team.  But one other thing that was implemented was compression within the database.  Things like message headers and the payload of HTML messages are compressed inside the Exchange 2010 store.  Interestingly enough, at Microsoft with real data (my data, for instance), it was seen that the compression almost perfectly cancelled out the loss of SIS.

Another thing to keep in mind: as our customers moved from Exchange 2000 to Exchange 2003 to Exchange 2007 to Exchange 2010, the number of users in a single database continued to decrease, even though the sizes of the databases went up.  Because you have fewer of your users in a single database, there is a lower statistical chance that any given message would be delivered twice (or more often) to a given store.  This means that as you put fewer and fewer users in a single database, you gain less from SIS anyway.  Sometimes you'll hear the Exchange product group guys talk about how SIS was "broken anyway", and this is what they mean.  SIS in Exchange 2007 and earlier did not work across databases, so our customers got less and less advantage over the years.
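That statistical argument can be sketched with a toy model (my own illustration, not anything from the product group). Assume a message's recipients land uniformly across databases, and that SIS can only deduplicate copies within a single database; then the expected number of deduplicatable copies per message shrinks as the same users are spread across more, smaller databases:

```python
# Toy model of why SIS benefit shrinks as users per database shrinks.
# Assumptions (mine, for illustration): recipients are spread uniformly
# across databases, and SIS only deduplicates within one database.
from math import comb

def expected_duplicate_copies(recipients: int, databases: int) -> float:
    """Expected number of recipient pairs sharing a database.

    Each such pair represents a copy SIS could have deduplicated."""
    return comb(recipients, 2) / databases

# Same organization, message sent to 20 recipients, carved into
# progressively more (and therefore smaller) databases:
for databases in (5, 20, 100):
    dup = expected_duplicate_copies(20, databases)
    print(f"{databases:3d} databases -> ~{dup:.1f} deduplicatable copies per message")
# ->   5 databases -> ~38.0 deduplicatable copies per message
#     20 databases -> ~9.5 deduplicatable copies per message
#    100 databases -> ~1.9 deduplicatable copies per message
```

Going from 5 databases to 100 cuts the deduplication opportunity by a factor of 20 in this model, which matches the intuition that SIS was delivering less and less value with each release.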

So, the bottom line is that while you lost SIS, compression makes the databases roughly the same size for almost all of our customers, so when looking at the size of the databases on disk it is basically a wash.  (Which is an American, possibly "Southern", saying meaning that there is no gain or loss from this.)  BUT, you gain in a big way when you look at how many messages you can have in a single folder (and some other performance increases not discussed here).  Because there was no significant loss, and a huge upside, it was decided that this was a good decision and that is how the Product Group moved forward.

Anything else I can help with around SIS?
Version history
Last update: Jul 01 2019 03:56 PM