Dude, Where's My Single Instance?
Published Feb 22 2010 10:49 AM

In Exchange Server 2010, there is no more single instance storage (SIS). To help understand why SIS is gone, let's review a brief history of Exchange.

During the development of Exchange 4.0, we had two primary goals in mind, and SIS was born out of these goals:

  1. Ensure that messages were delivered as quickly and as efficiently as possible.
  2. Reduce the amount of disk space required to store messages, as disk capacity was at a premium.

Exchange 4.0 (and, to a certain extent, Exchange 5.0 and Exchange 5.5) was really designed as a departmental solution. Back then, users were typically placed on an Exchange server based on their organizational structure (often, the entire company was on the same server).  Since there was only one mailbox database, we maximized our use of SIS for both message delivery (storing the body and attachments only once) and space efficiency. The only time we created another copy within the store was when a user modified their individual instance.

For almost 19 years, the internal Exchange database table structure has remained relatively the same:

[Figure: the Exchange database table structure, unchanged from Exchange 4.0 through Exchange 2007]

Then came Exchange 2000, where we evolved considerably - we moved to SMTP for server-to-server connectivity, we added storage groups, and we increased the maximum number of databases per server.  The result was a shift away from departmental usage of Exchange toward enterprise usage.  Moreover, the move to 20 databases per server reduced the space-efficiency effects of SIS, as the likelihood that multiple recipients were on the same database decreased.  Similarly, message delivery was improved by our optimizations in transport, so transport no longer benefited as much from SIS either.

With Exchange 2003, consolidation of servers took off in earnest due to features like Cached Exchange Mode, and the move away from departmental usage continued.  Many customers moved from distributing mailboxes based on their organizational structure to randomizing the user population across all databases in the organization.  Once again, the space-efficiency effects of SIS were further reduced.

In Exchange 2007, we increased the number of databases you could deploy, which again reduced the space efficiency of SIS. We further optimized transport delivery and completely removed the need for SIS from a transport perspective.  Finally, we made changes to the information store that removed the ability to single-instance message bodies (while still allowing single instancing of attachments). The result was that SIS no longer provided any real space savings - typically only about 0-20%.

One of our main goals for Exchange 2010 was to provide very large mailboxes at a low cost. Disk capacity is no longer at a premium; disk space is very inexpensive, and IT shops can take advantage of larger, cheaper disks to reduce their overall cost. To leverage those larger-capacity disks, you also need to increase mailbox sizes (and remove PSTs, leveraging the personal archive and records management capabilities) so that your storage design is both IO efficient and capacity efficient.

During the development of Exchange 2010, we realized that having a table structure optimized for SIS was holding us back from making the storage innovations necessary to achieve our goals. In order to improve the store and ESE, to change our IO profile (from many small, random IOs to fewer, larger, more sequential IOs), and to resolve our inefficiencies around item count, we had to change the store schema. Specifically, we moved away from a per-database table structure to a per-mailbox table structure:

[Figure: the per-mailbox table structure introduced in Exchange 2010]
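
To make the schema shift concrete, here is a loose conceptual sketch in Python. It is purely illustrative - the real ESE schema is not exposed this way, and the table and field names are invented - but it shows why a shared per-database message table permits single instancing while per-mailbox tables do not:

```python
# Illustrative only: invented structures, not the actual ESE schema.

# Exchange 4.0 - 2007: one shared message table per database. A body is
# stored once; folder rows in any mailbox just reference it by ID.
per_database_store = {
    "messages": {
        # message_id -> shared row with a reference count
        "msg-001": {"refcount": 3, "body": "<html>quarterly report</html>"},
    },
    "folders": {
        "alice/Inbox": ["msg-001"],
        "bob/Inbox":   ["msg-001"],
        "carol/Inbox": ["msg-001"],
    },
}

# Exchange 2010: every mailbox owns its tables, so the same logical
# message becomes an independent row (and an independent copy of the
# body) in each recipient's mailbox.
per_mailbox_store = {
    "alice": {"Inbox": {"msg-001": {"body": "<html>quarterly report</html>"}}},
    "bob":   {"Inbox": {"msg-001": {"body": "<html>quarterly report</html>"}}},
    "carol": {"Inbox": {"msg-001": {"body": "<html>quarterly report</html>"}}},
}
```

The per-mailbox layout gives up sharing, but it keeps each mailbox's data physically contiguous, which is what makes the larger, more sequential IO pattern described above possible.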

This architecture, along with other changes to the ESE and store engines (lazy view updates, space hints, a page size increase, B+ tree defragmentation, etc.), netted us not only a 70% reduction in IO over Exchange 2007, but also a substantial increase in the number of items we can store in critical-path folders.

As a result of the new architecture and the other changes to the store and ESE, we had to deal with an unintended side effect.  While these changes greatly improved our IO efficiency, they made our space efficiency worse - on average, they increased the size of the Exchange database by about 20% over Exchange 2007. To overcome this bloating effect, we implemented a targeted compression mechanism (using either 7-bit encoding or XPRESS, the Microsoft implementation of the LZ77 algorithm) that compresses message headers and bodies that are text- or HTML-based (attachments are not compressed, as they typically already exist in their most compressed state).  The result of this work is that we see database sizes on par with Exchange 2007.
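
As a rough illustration of that targeted approach, here is a minimal sketch in Python, using zlib's DEFLATE as a stand-in for 7-bit/XPRESS (the store's actual codecs and internal APIs are not public): compress text and HTML parts, and pass attachments through untouched:

```python
import zlib

# Content classes the store compresses; everything else passes through.
# zlib (DEFLATE) stands in for XPRESS here - both are LZ77-family codecs.
COMPRESSIBLE_TYPES = {"text/plain", "text/html"}

def store_part(content_type: str, data: bytes) -> tuple[bytes, bool]:
    """Return (stored_bytes, was_compressed) for one message part."""
    if content_type in COMPRESSIBLE_TYPES:
        return zlib.compress(data), True
    # Attachments (zip, jpeg, docx, ...) usually arrive already
    # compressed; recompressing wastes CPU for little or no size win.
    return data, False

body = b"<html><body>" + b"status update " * 500 + b"</body></html>"
stored, was_compressed = store_part("text/html", body)
print(len(body), "->", len(stored), "compressed:", was_compressed)

attachment = b"\x89PNG..."  # placeholder for already-compressed bytes
stored, was_compressed = store_part("image/png", attachment)
print(len(attachment), "->", len(stored), "compressed:", was_compressed)
```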

The graph below shows a comparison of database sizes for Exchange 2007 and Exchange 2010 with different types of message data:

[Figure: database size comparison between Exchange 2007 and Exchange 2010 across message content types]

As you can see, the size of an Exchange 2007 database containing 100% Rich Text Format (RTF) content was our baseline goal when implementing database compression in Exchange 2010. What we found is that with a mix of messaging data (77% HTML, 15% RTF, 8% text, with an average message size of 50KB), our compression algorithms keep database sizes on par with Exchange 2007. In other words, we mitigated most of the bloat caused by the lack of SIS.

Is compression the answer to replacing single instancing altogether? It really does depend. There are certain scenarios where SIS may still be viable:

  • Environments that only send Rich Text Format messages. The compression algorithms in Exchange 2010 do not compress RTF message blobs because they already exist in their most compressed form.
  • Sending large attachments to many users. For example, consider sending a large (30 MB+) attachment to 20 users.  Even if only 5 of the 20 recipients were on the same database, in Exchange 2003 that meant the 30MB attachment was stored once instead of 5 times on that database. In Exchange 2010, that attachment is stored 5 times (150 MB for that database) and isn't compressed; see the back-of-the-envelope sketch after this list. But depending on your storage architecture, the capacity to handle this should be there. Also, your email retention requirements will help here by forcing the removal of the data after a certain period of time.
  • Business or organizational archives that are used to maintain immutable copies of messaging data benefit from single instancing because the system only has to keep one copy of the data, which is useful when you need to maintain that data indefinitely for compliance purposes.
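
To put numbers on the large-attachment bullet above, a back-of-the-envelope sketch (Python, illustrative arithmetic only):

```python
# The 30 MB attachment example: 20 recipients, 5 of whom share a database.
attachment_mb = 30
recipients_on_db = 5

with_sis = attachment_mb                        # Exchange 2003: one copy
without_sis = attachment_mb * recipients_on_db  # Exchange 2010: a copy each

print(f"with SIS:    {with_sis} MB on that database")    # 30 MB
print(f"without SIS: {without_sis} MB on that database") # 150 MB
```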

If you go back through our guidance over the past 10 years, you will never find a single reference to using SIS for capacity planning.  We might mention that it has an impact on database size, but that's it.  All of our guidance has always dictated designing the storage without SIS in mind.  And for those thinking about thin provisioning: SIS isn't a reason to do thin provisioning, nor is it a means to calculate your space requirements.  Thin provisioning requires an operational maturity that can react quickly to changes in the messaging environment, as well as a deep understanding of how the user population behaves and grows over time, in order to allocate the right amount of storage upfront.

In summary, Exchange 2010 changes the messaging landscape.  The architectural changes we have implemented enable the commoditization of email - providing very large mailboxes at a low cost.  Disk capacity is no longer at a premium; disk space is cheap, and IT shops can take advantage of larger, cheaper disks to reduce their overall cost.  With Exchange 2010 you can deploy a highly available system with a degree of storage efficiency - without SIS - at a fraction of the cost that was required with previous versions of Exchange.

So, there you have it. SIS is gone.

- Ross Smith IV 

36 Comments
Not applicable
I guess that if it had been possible to keep at least attachments single-instanced per database, it would have saved a lot of unnecessary space consumption.

Disk resources may be cheaper now, but that doesn't mean we shouldn't take care of them.

Again, database sizes will increase, which makes recovery scenarios more difficult and time consuming. (I am aware of the DAG feature, which makes recovery easier.)

Online maintenance would also take more time because of the increased database size.
Not applicable
The second diagram doesn't appear to reflect the changes in 2010.  In fact, it looks like the same diagram as the first, only with a cleaner look.
Not applicable
This explanation will help our customers understand the reason the Exchange team removed SIS.  This should've been conveyed long before now.  While this is not typically a show stopper (archiving is a different story), it helps us partners convey a single answer to mutual customers.
Not applicable
Is there an easy way to determine what the size of the database will be after a transition to 2010 from 200x?

For planning / design it would be nice to know that.

Not applicable
Nice article, and I agree with Ross. The discussion with people "missing" SIS functionality tends to be fed by gut feeling. Nobody can size this properly with so many variables in a living system with a dynamic population. The argument of recovery may be valid, but do not forget that the best practice regarding database sizes versus recovery times isn't related to the type or number of attachments. IMHO, you *may* have benefited from SIS in the past, but you *will* benefit from the storage changes in 2010.

I've jotted down my thoughts on SIS here (shameless self-plug)
http://eightwone.wordpress.com/2009/11/25/exchange-2010-database-compression/
Not applicable
One thing I noticed is that you mentioned you can leverage cheap disk for your Exchange Mailbox servers; however, from my understanding, using local or cheap disk typically involves an HA scenario where you have at least three mailbox servers in a DAG.  This is not always the case for most clients.

I also want to point out that archiving in Exchange 2010 currently goes to the same database where the user's mailbox exists.  This is difficult for clients to understand, since their goal is to place the production database on prime disk and move archived email to the inexpensive disk.  I have had a number of clients question this logic.

My question is: will there ever be a way to have the archive mailbox in a separate database, which could reside on cheap disk while the production databases reside on SAN, etc.?

Otherwise, thanks for the great explanation.  Many clients love DAG; moving them to three servers in a DAG always seems to be a bit more difficult though.
Not applicable
So happy to see this explained publicly now. Hopefully it'll temper the flames for those who feel like they've lost something near and dear to them. :) I never understood the fear. If your users had XX megabytes of quota storage yesterday, how does the loss of SIS change things? You should have already been calculating necessary disk space as XX * # of users anyway (plus all the other variables).
Not applicable
Eric - No, there isn't an easy way (the SIS counter is not particularly accurate).  You could do it by:
1.  Grabbing the logical size of the mailboxes via ESM or EMS.
2.  Grabbing the database size and free space via eseutil /ms (see http://support.microsoft.com/kb/192185 for more info, but remember to account for the page size of your version: E2K3 = 4KB, E2K7 = 8KB, E2010 = 32KB).
3.  Comparing the logical mailbox size against the physical database size, excluding the free space (a rough sketch of this calculation follows these steps).
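
A rough sketch of that calculation in Python, with made-up example numbers (Exchange does not expose this directly; the variable values here are hypothetical):

```python
# Step 1: sum of logical mailbox sizes, from ESM or EMS (hypothetical).
logical_mailbox_gb = 500.0
# Step 2: physical .edb size on disk and free pages from eseutil /ms.
physical_db_gb = 420.0
free_pages = 1_310_720
page_kb = 8  # E2K3 = 4 KB, E2K7 = 8 KB, E2010 = 32 KB

# Step 3: exclude free space, then compare against the logical size.
free_gb = free_pages * page_kb / 1024 / 1024   # 10 GB in this example
used_gb = physical_db_gb - free_gb             # 410 GB

# A ratio below 1.0 suggests SIS is currently saving you space; the
# logical size is a rough ceiling for the post-2010 footprint, before
# Exchange 2010's compression is taken into account.
print(f"used: {used_gb:.0f} GB / logical: {logical_mailbox_gb:.0f} GB "
      f"= ratio {used_gb / logical_mailbox_gb:.2f}")
```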

Chad - Thanks for the catch. I've corrected it.

v-9twagh - I would take a look at http://technet.microsoft.com/en-us/library/bb125040.aspx with respect to online maintenance.  Also remember that in E2010 you only have VSS solutions for backup (a quick backup mechanism); as for restore in the event of an outage, the best solution is obviously mailbox resiliency, but if you aren't deploying that, then you should consider a VSS solution that can restore quickly.

Ross
Not applicable
Scott - The work we did in Exchange 2010 with regards to utilizing 7.2K disks does not require mailbox resiliency.  Certain configuration choices, like moving to JBOD, do require mailbox resiliency though.  Take a look at http://technet.microsoft.com/en-us/library/dd346703.aspx for more information on mailbox server storage design planning.

Ross
Not applicable
So as the rest of the world investigates (and, I might add, spends money on) data de-duplication, it looks like the Exchange team is taking the exact opposite approach... does that make sense to anybody outside the team?
Not applicable
Jake, I think it does make sense. MS has moved data reliability, resiliency, and integrity up from the HW (RAID) level to the application level. The app layer gives us a lot more overall protection than the HW layer, where the controllers just look at the bits on the disk and make sure they are "correct" without knowing whether the data inside is OK.
Not applicable
Scott, basic RAID 1 (or even 3-disk RAID 5) on inexpensive SATA disks is a solid config for Ex2010, depending on your users and usage.  Personally I would never consider JBOD, even within a DAG.  It just seems absurd to have the money for multiple Win2008 Enterprise licenses yet avoid spending an extra $200-$300 on redundant SATA storage.
Not applicable
Eric, at a customer's site with about 800 mailboxes, I compared the sum of mailbox sizes to the database size on disk and found an "overhead" of about 40 percent relative to mailbox size. This may give you a simple idea of what to take into account.

After all, as soon as Exchange's built-in archives can be stored in a different database (on separate, cheap storage), we can build semi-HSM mail systems with "onboard" functionality only, making it cheap to have something better than SIS. Rumor says we can hope for SP1 or SP2 to take us there.
Not applicable
Like Scott, I also hear a lot of complaints about the archiving limits from both customers and colleagues. I hope that in SP1 we'll have the ability to create archive mailboxes in a separate DB from the user's normal mailbox.
Not applicable
This disappoints me.  We may be an unusual case, but we're storing ~5 TB of messages in ~1.4 TB of disk due to single instance.  If/when we move to E2010, we'll have to buy almost four times more disk!

Compression is nice, but Office 2007+ documents are already zipped, so no win there.
Not applicable
Man, I've been out of the Exchange world for a while. How ironic... version 8.5 of Lotus Notes/Domino, introduced in January 2009, implemented DAOS (Domino Attachment and Object Service), which *does* keep track of one instance of an attachment across an entire server/cluster. And that includes applications as well, not just mail databases. E.g., a CRM application and an invoicing application that each have a copy of a 5MB attachment that's ALSO in 12 users' mailboxes results in... 5 MB of storage.

IT admins are seeing as much as 77% storage reduction with this feature. It's pretty easy to look like a rock star when you can avoid thousands of dollars of disk upgrades by simply installing a new version of the software... which can be done directly on top of the old version.

Rich text compression was introduced in 2007, and attachment compression back in 2003, I think?

And what is this going to do to backup windows? If upgrading to Exchange 2010 quadruples your backup window due to substantially more data being backed up... ugh.
Not applicable
I get the feeling that the IOPS improvement is MSFT's answer to everything:

1. It allows cheap storage to be used
2. It allows you to host many more mailboxes per server
3. It allows the 'archive' feature (bring your TBs of PST data back into Exchange)
4. Faster Exchange performance, yadda, yadda...

I would have thought it allows only one of those things, not all of them...

If MSFT saved me 70% in IOPS, then I could move to cheaper disk - but not move to cheaper disk, triple the size of my data by bringing PSTs back in, *and* host twice as many users on each MB server...

.. KJ
Not applicable
Good article - I liked the work you did to describe the history of Exchange storage as it pertains to today. Here's a few comments to the crowd!

1. Enabling large mailboxes at a lower cost doesn't make email a commodity. It makes large mailboxes a commodity, but you need to remember that over the past decade you guys have created great functionality and integration across OCS, SharePoint, plugins with Avaya and Cisco for UM, workflows, great mobile messaging applications, etc. I can tell you that the average enterprise email architecture out there is still pretty complex, and in many cases is becoming more complex and less of a commodity.

2. To Scott Feltmann - The cheaper disk story is more about the sizes of disk you can use now that the IOPS requirements are so much better in 2010. Your example of requiring 3 copies in a DAG to use cheaper disk is referring to the JBOD model - which I'm not a personal fan of, although it definitely does work! Really, the cheaper disk story in 2010 (from what we've seen) translates into customers being able to use 5GB to 10GB average mailbox sizes with awesome performance results by leveraging huge disks, not hundreds of small disks like in 2003, 2007, etc.

When Ross says that Exchange 2010 has "changed the messaging landscape," he's not using marketing spin. This release of Exchange is going to require customers and partners to take a real step back and understand what MSFT has done here. By changing the lowest level of the database structures and standing all our old IOPS rules on their heads, the MSFT team has rewritten the book on some of our old pains. If you think about it, most of the old pains Exchange brought us were data related, specifically around the massive amounts of data we had to deal with. When your customers are frustrated because they want to archive to lower-speed disk and keep Exchange on the high-speed/premium disk, your response should be: "Why put any of your mail data on the high-speed disk?" It's a valid question now that Exchange 2010 has dropped. It's going to require customers and partners to believe in the 10GB average mailbox. It's also going to require MSFT and partners to be able to properly explain this stuff to the customer - and to have tested it in their labs to make sure they really get it.

Not applicable
@Ken - But you'd still need more *storage space*, right? Color me confused. Replacing a $100k array of size X and speed Y with an array of size 5*X but speed 0.3*Y is still a remarkable jump in space required. Either way, it's a totally different class of storage that you may not have just sitting around.

Then again, it sounds like SIS became less-and-less efficient over time as extra databases were added (and no resulting cross-database comparison algorithm was implemented) so maybe this is just the next logical step.

Exchange 4/5/5.5 - "Yay, SIS rocks!"

Exchange 2000 - "Here's 20 dbs per server, yeah SIS won't work as well."

Exchange 2003 - "Hey, you're just randomizing your mailboxes right? SIS probably won't work very well then."  <-- This definitely isn't/wasn't the case in places I saw. SIS still had a benefit like @Steve above.

Exchange 2007 - "We added more dbs, yeah SIS will probably work worse. Oh, and we removed SIS for message bodies, so yeah it *definitely* will work worse."

Exchange 2010 - "Nobody's using this anymore since we've nerfed it into oblivion, let's just drop it."

I'll be curious to see how the dust settles.
Not applicable
Even tripling/quadrupling the # of database copies is still going to cost us less to run Exchange 2010. What today requires 30 spindles for enough I/O with 2007 (6 for DB & 4 for logs, all RAID 0+1, across 2 CCR nodes and 1 SCR node) I can potentially achieve with 4 spindles across 4 DAG nodes using the JBOD model.  I'll gain an extra layer of protection with the 4th DB copy, and now my DR site can have both a live and a lag copy, so log playback of the lag copy won't be the slower link in my RTO.
Not applicable
How does this affect the performance of large Query-based Distribution Lists?

Real example here... I have 60,000 Exchange 2003 mailboxes spread across 3 clusters/10 virtual mailbox servers/140 DBs/1 site/1 vLAN. For each database there is a QBDL, and then there is a single QBDL that contains all 130 individual mailbox QBDLs. We use this single list when we need to send emergency and/or weather alerts to all users very quickly (generally 2-4 times per month). Does the need to write the message to all of the mailboxes significantly change the delivery speed of messages when we migrate to Exchange 2010 (in planning at this moment)? For reference, the messages are all 2K or less in size and usually come from OWA.
Not applicable
1.  Dump JET.
2.  Go to MS SQL.
3.  Enjoy 1000% IOPS improvement.
4.  Bring back SIS.
Not applicable
Jonathan, don't you think that if there were a 1000% IOPS improvement to be had with SQL, they would have gone with it by now? :)

I sure wouldn't mind 0.001 IOPS per user.
Not applicable
Writing the message body/attachment is about 10% of the IO for the delivery case.  SIS does not save you a significant amount of IO on the mailbox servers for the large DL case.  The majority of the IO is to open the table, get the rules, update the current views/indexes, update the mailbox wide conversation view, etc.

Matt
Not applicable
Linkback from:
http://chrislehr.com/2010/02/exchange-and-loss-of-single-instance.htm

Great post - I have been wanting a better explanation to send to customers.
Not applicable
This is a great post and very good explanation, it did take me a while to digest the whole information but it was worth it.
Not applicable
People, please stop using SAN storage for Exchange (unless you have requirements like blades). You are missing the point of all the effort the Exchange team has put into database efficiency.
Not applicable
@Tom, even blades can do fault-tolerant iSCSI and DAS now, so that excuse is quickly disappearing as well. :)
Not applicable
The real way to get cost-effective storage is not buying more servers, licenses, and cheaper disk drives.

It's to go with a solution that works across multiple apps.  It's virtualizing Exchange with Hyper-V or VMware.

It's allocating DB space with thin provisioning and SATA drives.  When you price out SAN with thin and SATA, it's not so bad.  If you use little 15K FC drives, of course that looks expensive!!

Even if the cost per GB is a bit cheaper with DAS, you end up spending so much more to manage it and to cool it.   And then, if you decide to try a hosted model, you have tons and tons of DAS sitting around, whereas if you have a SAN, at least you could re-use the space pretty easily.

Just some common sense.
B
Not applicable
Dropping SIS was a really bad idea.
We're currently running Ex2K3 with 6 databases of ~70GB file size each. Two of those databases store ~300GB worth of objects, the others not far behind.
Disks may have become cheap, but in order to back up all that data to tape (which is mandatory by law for our kind of business in Germany), our backup volume would increase about 4-fold, using more media (read: money) and time (endangering SLAs).

Chances for Ex2K10 are dropping by the day...
Mike
Not applicable
Russian version of this post: http://www.maximumexchange.ru/2010/04/07/exchange-2010-sis-single-instance-store
Not applicable
How would this affect the amount of transaction logs generated? In addition, how can we deal with the backup and restore time windows and LUN sizes?
Not applicable
I've been supporting Exchange since 5.5 and worked at MS for a year during the Exchange 2000 - 2003 transition.  MS touted SIS as the greatest thing since sliced bread when it was introduced with Exchange 4.0, but now claims that it's no longer viable because of cheap storage.  To me, it seems that the MS programmers who wrote E2k10 took the easy way out by trading in SIS for performance.  The E2k10 software isn't written any better; it just removed SIS to gain better performance.
The E2k10 code should have been written to accommodate both SIS and performance.  MS recommended that logs and DBs be on separate disks due to random and sequential R/W operations.  Now that most E2k10 operations are sequential, there is no longer a need to separate the logs from DBs.
Instead of working hard to rewrite E2k10 and innovate new ideas, the MS programmers just rearranged the E2k10 code to increase per-server capacity at the expense of SIS.
I was not a fan of E2k7 and resisted it, but I am even more resistant to E2k10.  What I would have liked to see was a 64-bit version of E2k3.  Since E2k3 64-bit won't happen, I can virtualize E2k3 on any virtualization platform and support as many users as E2k7 or E2k10 for far less money and without training my staff on E2k7 and E2k10.
No need for PowerShell scripting.  Yeah!!!
Not applicable
Jonathan,

I remember hearing from the Exchange team at last year's US TechEd that they did have a lab version of Exchange up and running with a SQL DB, but they were not at all happy with the performance.  SQL doesn't do well with large amounts of unstructured data and that's basically what email is.
Not applicable
Another reason Microsoft left SIS behind is that SIR - Single Instance Repository (aka single instance storage, data deduplication, common factoring, capacity-optimized storage) - is implemented by major storage vendors (IBM, HP, EMC, NetApp, Hitachi, and many others) in different ways.
If you're a SOHO and cheap 2TB SATA is a problem for you, then you're qualified for a cloud solution: Gmail, Hotmail, freemail, etc.
Bye, bye SIS.
Not applicable
I concur with Chris Cho... Gotta go - I feel a song coming on.