Introducing: Log Parser Studio
To download the Log Parser Studio, please see the attachment on this blog post.

Anyone who regularly uses Log Parser 2.2 knows just how useful and powerful it can be for obtaining valuable information from IIS (Internet Information Server) and other logs. In addition, adding the power of SQL allows explicit searching of gigabytes of logs, returning only the data that is needed while filtering out the noise. The only thing missing is a great graphical user interface (GUI) to function as a front-end to Log Parser and a 'Query Library' in order to manage all those great queries and scripts that one builds up over time.

Log Parser Studio was created to fulfill this need; it allows those who use Log Parser 2.2 (and even those who don't, due to the lack of an interface) to work faster and more efficiently and get to the data they need with less "fiddling" with scripts and folders full of queries. With Log Parser Studio (LPS for short) we can house all of our queries in a central location. We can edit and create new queries in the 'Query Editor' and save them for later. We can search for queries using free text search as well as export and import both libraries and queries in different formats, allowing for easy collaboration as well as storing multiple types of separate libraries for different protocols.

Processing Logs for Exchange Protocols

We all know this very well: processing logs for different Exchange protocols is a time consuming task. In the absence of special purpose tools, it becomes a tedious task for an Exchange administrator to sift through those logs and process them using Log Parser (or some other tool), especially if the output format is important. You also need expertise in writing those SQL queries. You can also use special purpose scripts that one can find on the web and then analyze the output to make some sense out of those lengthy logs.

Log Parser Studio is mainly designed for quick and easy processing of different logs for Exchange protocols. Once you launch it, you'll notice tabs for different Exchange protocols, i.e. Microsoft Exchange ActiveSync (MAS), Exchange Web Services (EWS), Outlook Web App (OWA/HTTP) and others. Under those tabs there are tens of SQL queries written for specific purposes (the description and other particulars of a query are also available in the main UI), which can be run with just one click!

Let's get into the specifics of some of the cool features of Log Parser Studio.

Query Library and Management

Upon launching LPS, the first thing you will see is the Query Library preloaded with queries. This is where we manage all of our queries. The library is always available by clicking on the Library tab. You can load a query for review or execution using several methods. The easiest method is to simply select the query in the list and double-click it. Upon doing so the query will auto-open in its own Query tab. The Query Library is home base for queries. All queries maintained by LPS are stored in this library. There are easy controls to quickly locate desired queries and mark them as favorites for quick access later.

Library Recovery

The initial library that ships with LPS is embedded in the application and created upon install. If you ever delete, corrupt or lose the library, you can easily reset back to the original by using the recover library feature (Options | Recover Library). When recovering the library, all existing queries will be deleted.
If you have custom or modified queries that you do not want to lose, you should export those first; then, after recovering the default set of queries, you can merge them back into LPS.

Import/Export

Depending on your need, the entire library or subsets of the library can be imported and exported either in the default LPS XML format or as SQL queries. For example, if you have a folder full of Log Parser SQL queries, you can import some or all of them into LPS's library. Usually, the only thing you will need to do after the import is make a few adjustments. All LPS needs is the base SQL query, with the filename references swapped out for '[LOGFILEPATH]' and/or '[OUTFILEPATH]', as discussed in detail in the PDF manual included with the tool (you can access it via LPS | Help | Documentation).

Queries

Remember that a well-written, structured query makes all the difference between a successful query that returns the concise information you need and a subpar query which taxes your system, returns much more information than you actually need and in some cases crashes the application. The art of creating great SQL/Log Parser queries is outside the scope of this post; however, all of the queries included with LPS have been written to achieve the most concise results while returning the fewest records. Knowing what you want and how to get it with the least number of rows returned is the key!

Batch Jobs and Multithreading

You'll find that LPS in combination with Log Parser 2.2 is a very powerful tool. However, if all you could do was run a single query at a time and wait for the results, you probably wouldn't be making nearly as much progress as you could be. To address this, LPS contains both batch jobs and multithreaded queries.

A batch job is simply a collection of predefined queries that can all be executed with the press of a single button. From within the Batch Manager you can remove any single query (or all queries) as well as execute them. You can also execute them by clicking the Run Multiple Queries button or the Execute button in the Batch Manager. Upon execution, LPS will prepare and execute each query in the batch. By default LPS will send ALL queries to Log Parser 2.2 as soon as each is prepared. This is where multithreading works in our favor. For example, if we have 50 queries set up as a batch job and execute the job, we'll have 50 threads in the background all working with Log Parser simultaneously, leaving the user free to work with other queries. As each job finishes, the results are passed back to the grid or the CSV output based on the query type. Even in this scenario you can continue to work with other queries, search, modify and execute. As each query completes, its thread is retired and its resources freed. These threads are managed very efficiently in the background, so there should be no issue running multiple queries at once.

Now what if we did not want the queries in the batch to run concurrently, for performance or other reasons? This functionality is already built into LPS's options. Just make the change in LPS | Options | Preferences by checking the 'Process Batch Queries in Sequence' checkbox. When checked, the first query in the batch is executed and the next query will not begin until the first one is complete. This process will continue until the last query in the batch has been executed.
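The queries stored in the library are plain Log Parser SQL using the tokenized paths described above. As a purely hypothetical illustration (this is not one of the shipped queries, and the field names assume IIS W3C logs), a query to find the URLs generating the most server errors might look like this:

SELECT TOP 25 cs-uri-stem AS URL, COUNT(*) AS Hits
FROM '[LOGFILEPATH]'
WHERE sc-status >= 500
GROUP BY cs-uri-stem
ORDER BY Hits DESC

When LPS executes the query, '[LOGFILEPATH]' is swapped for the files or folders selected in the Log File Manager, so the same query text can be reused against any set of logs.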
Automation

In conjunction with batch jobs, automation allows unattended, scheduled execution of batch jobs. For example, we can create a scheduled task that will automatically run a chosen batch job which also operates on a separate set of custom folders. This process requires two components, a folder list file (.FLD) and a batch list file (.XML). We create these ahead of time from within LPS. For more details on how to do that, please refer to the manual.

Charts

Many queries that return data to the Result Grid can be charted using the built-in charting feature. The basic requirements for charts are the same as Log Parser 2.2:

1. The first column in the grid may be any data type (string, number etc.).
2. The second column must be some type of number (Integer, Double, Decimal); strings are not allowed.

Keep the above requirements in mind when creating your own queries so that you will consciously write the query to include a number for column two. To generate a chart, click the chart button after a query has completed. For #2 above, even if you forgot to do so, you can drag any numbered column and drop it in the second column after the fact. This way, if you have multiple numbered columns, you can simply drag the one that you're interested in into the second column and generate different charts from the same data. Again, for more details on the charting feature, please refer to the manual.

Keyboard Shortcuts/Commands

There are multiple keyboard shortcuts built into LPS. You can view the list anytime while using LPS by clicking LPS | Help | Keyboard Shortcuts. The currently included shortcuts are as follows:

CTRL+N - Start a new query.
CTRL+S - Save active query in library or query tab depending on which has focus.
CTRL+Q - Open library window.
CTRL+B - Add selected query in library to batch.
ALT+B - Open Batch Manager.
CTRL+D - Duplicate the current active query to a new tab.
CTRL+ALT+E - Open the error log if one exists.
CTRL+E - Export current selected query results to CSV.
ALT+F - Add selected query in library to the favorites list.
CTRL+ALT+L - Open the raw library in the first available text editor.
CTRL+F5 - Reload the library from disk.
F5 - Execute active query.
F2 - Edit name/description of currently selected query in the library.
F3 - Display the list of IIS fields.

Supported Input and Output types

Log Parser 2.2 has the ability to query multiple types of logs. Since LPS is a work in progress, only the most used types are currently available. Additional input and output types will be added when possible in upcoming versions or updates.

Supported Input Types: Full support for W3SVC/IIS, CSV, HTTP Error and basic support for all built-in Log Parser 2.2 input formats. In addition, some custom-written LPS formats, such as Microsoft Exchange specific formats that are not available with the default Log Parser 2.2 install.

Supported Output Types: CSV and TXT are the currently supported output file types.

Log Parser Studio - Quick Start Guide

Want to skip all the details and just run some queries right now? Start here.

The very first thing Log Parser Studio needs to know is where the log files are, and the default location where you would like any queries that export their results as CSV files to be saved.

1. Set up your default CSV output path:
a. Go to LPS | Options | Preferences | Default Output Path.
b. Browse to and select the folder you would like to use for exported results.
c. Click Apply.
d. Any queries that export CSV files will now be saved in this folder.
NOTE: If you forget to set this path before you start, the CSV files will be saved in %AppData%\Microsoft\Log Parser Studio by default, but it is recommended that you move this to another location.

2. Tell LPS where the log files are by opening the Log File Manager. If you try to run a query before completing this step, LPS will prompt and ask you to set the log path. Upon clicking OK on that prompt, you are presented with the Log File Manager. Click Add Folder to add a folder or Add File to add a single file or multiple files. When adding a folder you still must select at least one file so LPS will know which type of log we are working with. When doing so, LPS will automatically turn this into a wildcard (*.xxx), indicating that all matching logs in the folder will be searched. You can easily tell which folder or files are currently being searched by examining the status bar at the bottom-right of Log Parser Studio. To see the full path, roll your mouse over the status bar.

NOTE: LPS and Log Parser handle multiple types of logs and objects that can be queried. It is important to remember that the type of log you are querying must match the query you are performing. In other words, when running a query that expects IIS logs, only IIS logs should be selected in the File Manager. Failure to do this (it's easy to forget) will result in errors or unexpected behavior when running the query.

3. Choose a query from the library and run it:
a. Click the Library tab if it isn't already selected.
b. Choose a query in the list and double-click it. This will open the query in its own tab.
c. Click the Run Single Query button to execute the query.

The query execution will begin in the background. Once the query has completed, there are two possible output targets: the result grid in the top half of the query tab, or a CSV file. Some queries return to the grid, while other more memory intensive queries are saved to CSV. As a general rule, queries that may return very large result sets are probably best served going to a CSV file for further processing in Excel. Once you have the results, there are many features for working with those results. For more details, please refer to the manual.

Have fun with Log Parser Studio, and always remember - there's a query for that!

Kary Wall
Escalation Engineer
Microsoft Exchange Support
/Mixed-ing it up: Multipart/Mixed Messages and You

Greetings, Exchange Administrators! In today's edition of "and You", we'll be covering Exchange's handling of messages generated by iPhones, iPads, and Macintosh Mail clients. Specifically, we're going to cover what /mixed content body messages are, how Exchange handles them now, and how we will handle them down the line. Before we can dive into the content, you first have to understand how internet messages are structured, and that means learning a little bit about how Exchange stores messages, and quite a bit about MIME. Not the mute freaks - Multipurpose Internet Mail Extensions, also known as "the Mime not everyone hates."

Exchange, Messages, and LOL Cats

Exchange stores messages as a series of properties, where each property has a name and a value. For instance, PR_SUBJECT is the subject property, and "Test Message" is the value. Messages in Exchange have one body with multiple forms of representing it (HTML, Rich Text and Plain Text). The HTML view of the body looks like so:

This is a cat. [picture of cat] Do Not Want

The RTF (rich text format) view of the same body is also capable of containing both formatting and images, and so would look like:

This is a cat. <Mentally insert picture of cat here - it's only eight lines up, so honestly, you can do it.> Do Not Want

The plain text version of the body is composed of plain text, a fact that should be obvious based on its name. A good way of simulating plain text body generation is to paste your content into Notepad. If it survives the paste into Notepad, it will be part of the plain text. Sadly, the cat picture does not:

This is a cat. Do Not Want

Messages in Exchange can have multiple attachments. This is good, because even if Exchange is forced to generate a plain text version of the body, the cat picture can come along so that, should you decide to click it, you can view a picture of a cat which does not want something.

Key point for non-technical and allergic-to-cats people: For Exchange, messages can have one body and can have many attachments.

And in this corner, MIME

MIME is a plain text format for email messages. MIME messages are divided into "parts", each of which might have content or even child parts, like a series of Russian dolls. Each MIME part (even the root part, or the message itself) has a header called Content-Type, which describes the type of content in the part. The content type is divided into a major type and a subtype, separated by a slash - for example, multipart/alternative or multipart/mixed. Every part has a type, even going so far as to define a Miranda type for parts which can't afford to assign one (text/plain). Sometimes the types are quite helpful at understanding the meaning of the content, sometimes not so much. For instance, Content-Type: application/pdf means that it is Adobe's Portable Document Format. On the other hand, Content-Type: application/octet-stream means "I can't tell what this is. Here's a binary blob for your troubles. Hopefully you can figure out what to do with it."

Multipart/ is a general type, meaning that this MIME part may contain many child MIME parts. The subtype of the part (the part after the slash) tells us more about the child parts, and in this case, how they are related to each other. Now we will take a closer look at some of the multipart subtypes to see where things can go wrong.

Relativity

First off we're going to look at multipart/related (also called a "related" body part).
Related, in this case, means that the sub MIME parts are actually related to each other - in other words, given the following MIME structure:

1. Multipart/related
1.1. Text/HTML
1.2. Image/Gif

1.1 and 1.2 are not meant to be interpreted as "separate" parts - they have meaning as one. In this case the HTML contains image links to the 1.2 image (our friend the cat).

Key point for non-technical and allergic-to-cats people: Multipart/related means "We belong together."

Alternatives

Multipart/alternative means that each child of this part is a different representation of the same data. They are "alternative" versions of each other. The intention is that a client picks the type that it can best display and displays that one. So given this MIME structure:

1. Multipart/alternative
1.1. Text/Plain
1.2. Text/HTML
1.3. Application/PDF

the client doesn't have to show the text/plain part as the body. No, if the client knows how to display a text/html body, it is free to do that. So multipart/alternative is a way of grouping a number of different formats of the same data together and letting the client decide which one it shows best - it's like a kids' beauty pageant, except that instead of the ladies from the rotary club, you have the email client as the judge.

Key point for non-technical and allergic-to-cats people: Multipart/alternative means "Pick the one you like best."

Mixed Up

Multipart/mixed, according to RFC 1521, means that the parts are completely independent of each other (not related to each other) but that their order matters. What is the expected behavior? "Clients usually display the parts one after the other." This, however, brings into play another parameter on the MIME part - Content-Disposition. This parameter has a couple of normal values - Inline and Attachment. Attachment is easy to understand - in the context of Exchange, it means "Show me in the well, that I may be blocked by Outlook from being saved or opened." Inline, on the other hand, we handle differently.

Remember that whole "messages have one body, and maybe many attachments" thing? Keep that in mind while we look at what our cat message looks like with a /mixed body:

1. Multipart/Mixed
1.1. Text/HTML - Inline
1.2. Image/Gif - Inline
1.3. Text/HTML - Inline

The intention among clients that generate this is that the receiving client should display the text/html part first, then glob the image onto the end of it, and then the rest of the body. There's no limit to multipart/madness; you can combine them (and dispositions) into nigh endless combinations. For example:

2. Multipart/Mixed
2.1. Text/HTML - Inline
2.2. Image/Gif - Inline
2.3. Text/HTML - Inline
2.4. Text/Plain - Inline

means "Show 2.1, followed by the image from 2.2, then the HTML from 2.3 and then the text from 2.4. Do it NOW."

3. Multipart/Mixed
3.1. Text/HTML - Inline
3.2. Image/Gif - Attachment
3.3. Text/HTML - Inline
3.4. Text/Plain - Attachment

means "Show the text from 3.1, NOT the attachment from 3.2 unless someone does something, the text from 3.3, but NOT the text in 3.4 (unless they do something like click an attachment in the well)."

The problem, of course, comes from Exchange's original definition of a message - one body (with multiple representations), maybe many attachments.

Key point for non-technical and allergic-to-cats people: Multipart/mixed can mean "Maybe show all of these, in the order listed."
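To make the structure concrete, here is a rough sketch of what a /mixed message like example 1 above might look like on the wire. The boundary string and the bodies are invented for illustration, and real messages carry additional headers (Content-Transfer-Encoding and so on):

Content-Type: multipart/mixed; boundary="outer-boundary"

--outer-boundary
Content-Type: text/html
Content-Disposition: inline

<html><body>This is a cat.</body></html>
--outer-boundary
Content-Type: image/gif
Content-Disposition: inline

(base64-encoded cat picture goes here)
--outer-boundary
Content-Type: text/html
Content-Disposition: inline

<html><body>Do Not Want</body></html>
--outer-boundary--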
Combo #5

MIME types are not exclusive. I can combine Multipart/Mixed, Multipart/Alternative and Multipart/Related into a single message, and it actually has meaning:

1. Multipart/Mixed
1.1. Multipart/Alternative
1.1.1. Text/Plain
1.1.2. Multipart/Related
1.1.2.1. Text/HTML
1.1.2.2. Image/Gif - inline
1.2. Image/Jpeg - attachment

Yes, this structure is legal. And it is meaningful. To understand it you unwrap in order, one level at a time:

Multipart/Mixed - this message is different parts, put together, and the order matters.
Multipart/Alternative - I have two children; pick the prettier one and show it off.
Text/Plain - I am a blob of plain text.
Multipart/Related - my children are bits and pieces of each other.
Text/HTML - pretty, pretty text.
Image/Gif - I am a picture of a cat, referenced by pretty, pretty text, I hope.
Image/Jpeg - I'm an attachment (in case you couldn't see the picture of the cat above).

Exchange has always dealt well with multipart/alternative bodies, picking the one which we can best support and promoting it. We deal well with multipart/related as well - not every attachment on a message is visible. Attachments have a disposition, which is either not set, inline, or attachment. Leaving it unset is indeterminate - the client does what the client does (and good luck establishing an algorithm that works for everyone). On the other hand, messages with /mixed content bodies where multiple parts are inline - those do not work so well.

Blender'd Messages

In the case of /mixed bodies, there are multiple MIME parts which are meant to combine together like Japanese robots to form Voltron, or an image of a cat and some text. Today, if you receive such a message, we do the best we can with it (which is pretty dissatisfying): we pick the first "body type part" - aka a text/<something> part - and that one becomes the body as seen by Outlook. All of the rest of them become attachments, and we shove them into the attachment well. Oh, sure, they might have a disposition of inline, but because the most common usage of inline is in HTML, we actually check, and anything that isn't referenced by a link from the body won't fool us. Into the well it goes. From an Outlook perspective you get the first part of the message, then two attachments, one of which is the picture and the other is the trailing text. You are welcome to open the attachments in order and combine them in your head to form a message, but once you get more than a couple of parts it isn't reasonable. Exchange has never supported "proper" display of /mixed body messages in OWA or Outlook - until now.

Blended Messages

Exchange 2007 Service Pack 3 Rollup 3 (E12SP3RU3) and Exchange 2010 Service Pack 1 Rollup 4 (E14SP1RU4) introduce a set of changes to how Exchange handles /mixed body messages. We think that in general your users will enjoy the less mangled nature of these messages, but if I've learned anything from fourteen years of working on Exchange, it's that "different == angry". People get used to our behavior, so even when it's wrong (or incomplete) they expect the same behavior, and come to rely on it. Consider this your fair warning that this change is coming, with some details on how it works, where it works, and where it doesn't.

We're adding support for Exchange to combine multiple body parts into a single, aggregate body. The short of this is that broken-up messages should show combined together, and readable, in OWA and Outlook. There are, however, limitations to this.
First off, right now this will only work for messages generated by Apple iPhones, iPads, or Apple Mail clients. This isn't an accident - we developed the rules used to combine these bodies using test data from our counterparts at Apple, and while we handle messages from them well, the internet is wide and wondrous, and anyone can write messages with multipart/mixed bodies. For now this is restricted to clients we have good test data on, good rules for, and a good way to identify.

To create the aggregate body, we check each MIME part in the /mixed body:

- If a MIME part has a disposition of Attachment, it goes to the attachment well. I am not going to argue with a client that specifies that it is an attachment.
- If a body part has a disposition of inline or not set, and it is a plain text or HTML body part, we add it to the aggregate body.
- If the body part is an image which can be displayed inline, we add a link to it in the aggregate body.
- If the body part is not text, or is an image we can't display inline, it goes to the well.

How do I know if this breaks me?

If you're the owner of an application which sends /mixed content messages with MIME parts that you rely on Exchange treating as well attachments, and you send them from an Apple platform, and you haven't been setting a content disposition, and the parts are text or image types (and you are setting a content type), then you need to add a Content-Disposition to the MIME parts you want to be attachments, and set them to Attachment. If you're a normal consumer wondering why messages with images or signatures got split into pieces, you don't need to do anything.

Conclusions

Today we've covered different MIME structures, different representations of bodies, dispositions, types, and a host of other things, but the hard-boiled summary is that mail between Apple clients and Exchange clients will be handled better. The best case scenario isn't that your users call up and say "Wow, I'm really impressed that the messages I got from John on his Mac aren't chopped up into a dozen pieces anymore." The best case scenario is that things just work. No one has to call anyone, because you neither notice nor care what platform the message came from, or what format it was sent in. You can see your email, your signatures, and yes, your LOL cats. Enjoy!

Epilogue: How things went wrong

Already I've read (and responded to) reports on the Exchange Update Forums that this new code introduced a new problem: PDF attachments from Mac clients which declare their disposition to be Inline aren't visible in the attachment well. Visible in Outlook Web Access, yes, but MAPI, no. So how on earth did this happen? And how did we miss this bug? ("Do you do any quality control?" as one person asked.) The answer is yes, we do. But rather than just say that, let's look at the journey we took to get this fix included in a rollup, what the problem is, how we missed it, and what we're doing about it.

This fix begins with a conference call between me, a few team mates, and our counterparts at Apple, in which someone asked "Well, can you guys do anything to actually support this style of message body?" I spent the next few days researching what it would take to build a synthetic body out of the parts and eventually concluded that it would be possible. With that we began the discussions of "Why now?" and "What is it going to take?" That phase took quite a while.
The core problem was that we were well aware that producing a new type of body was going to require a large investment in testing. Eventually we concluded that we could and would do it. And we did. We wrote the code that supports this type of body, and we wrote the code that tests it. We have a huge number of tests covering various types and formats of bodies, but only the new ones tested the new body. Line for line, there's more code to test this than there is to support it.

So the fix was done, the code was tested, but we still weren't ready to check in the code. Instead, we took a much more cautious approach. The fix was available as an interim update, and that interim update was tightly controlled since we wanted it to go to customers who would be putting it into daily use. After a month or so we loosened it up and the fix went out to four more customers, eventually being in play at around 12 customers for about three months. After three months of field deployment with positive results, we decided to schedule it for a rollup (which should've been RU2, but wound up being RU3).

But there's a problem. You see, Exchange 2007 and 2010 don't pay much attention to Content-Disposition, because for years inline has been a great way to make sure the attachment is essentially invisible (an inline attachment has its hidden property set to true, since it is displayed inline with body content). And the test code missed this case - an inline attachment which can't be displayed inline by Exchange, but in a message which contains multiple body parts which can be merged to build the synthetic body. Unfortunately, neither our in-house nor field testing encountered this.

The bug is that in processing these messages, we honor attachment disposition in a case where we should not. Any attachment which can't be displayed inline should never be allowed to become part of the synthetic body. We know of two types of attachments - TIFF and PDF - which can be displayed inline on Mac clients but not on Windows clients. The fix for these two also fixes any other types where the client might render inline but we can't.

How did we miss this? We went back over our test code and test data and said "It contains PDFs, and validates the attachment well status." It does indeed (and TIFFs as well). It also doesn't ever attempt to create a PDF attachment and render it inline in the body. That's the missed case which I certainly wish we had found before it ever hit any customers. So to resolve this we're creating an interim update, and we'll be rolling it into the main branch to prevent further incidents of this.

Over time I expect that we'll expand the number of mailers we produce synthetic bodies for. For now, I think this experience has validated that we should keep it limited and expand support where we can closely test. I remain convinced that improving interop between Mac clients and Exchange is a good idea. So there we have it: the problem, the fix, the problem with the fix, and the fix for the problem with the fix. We now return to your regularly scheduled blog.

Jason Nelson
Outlook clients receive error 0x8004010f when downloading the Offline Address Book

EDIT 10/01/2008: To read the updated (and more comprehensive) guidance for troubleshooting errors 0x8004010f, please go here.

There are multiple reasons why an Outlook client can receive the 0x8004010f sync error. Unfortunately, 0x8004010f is just a generic MAPI error and will show up for a variety of problems. Here is what the error looks like under err.exe (Microsoft Exchange Server Error Code Look-up Tool):

C:\WXP\system32>err 0x8004010f
# for hex 0x8004010f / decimal -2147221233
ecNotFound ec.h
ecAttachNotFound ec.h
ecUnknownRecip ec.h
ecPropNotExistent ec.h
MAPI_E_NOT_FOUND mapicode.h
# 5 matches found for "0x8004010f"

This is what the error looks like in the sync log from within Outlook:

12:45:53 Synchronizing Mailbox <dgoldman>
12:45:53 Done
12:45:54 Microsoft Exchange offline address book
12:45:54 0x8004010f

Some of the most common reasons for Outlook clients to receive the 0x8004010f error with regard to downloading the OAB are listed below. Most of these are documented and I have linked articles to each of these to help everybody out. Please note that these solutions can change a small bit depending on unknown factors in a company's environment.

- An administrator decommissioned the last Exchange server in a site and never pushed replicas to another Exchange server.
- A new OAB is created in Active Directory and the information store never reads Active Directory during its maintenance schedule. This will result in the OAB files never being generated, and the Outlook client will fail to download anything.
- The information store has an invalid EntryID that points to the legacy EX:/ folders. Again, there is nothing for the client to download.
- An Outlook client logging in from one domain to another domain and attempting to log on to another user's mailbox.
- The OAB was never generated, or some OAB folders are missing from the public folder store.
- Multiple OAB Version folders of the same type exist.
- Clients are attempting to download the OAB files from a public folder store that has not received the replicated updates.
- The offline address book list object has a missing address list.
- The offline address book list object has an incorrect address list.
- Send As changes in the store affect user accounts with no mailbox full rights to another mailbox.

If you are seeing this error on an Exchange 2007 server and your OAB is generated by an Exchange 2007 server, please make sure of the following:

1. Make sure that you have added the replicas of the OAB to the Exchange 2007 server.
2. Make sure public folder replication is working.
3. Make sure the OAB is public folder enabled and you have OAB Version 2, OAB Version 3 and OAB Version 4 checked off so your legacy clients can download the OAB files from the public folder store.
4. Make sure that if you are using an Outlook 2007 client, your OAB is web distribution enabled and the OAB files have been replicated over to the Client Access Server. For more information on this process please see these blogs: http://blogs.msdn.com/dgoldman/archive/2006/10/23/outlook-client-fails-to-download-the-oab-with-error-0x8004011b.aspx and http://blogs.msdn.com/dgoldman/archive/2006/08/25/How-Exchange-2007-OAB-Files-are-replicated-to-a-Client-Access-Server-for-download.aspx
5. If you are removing your last Exchange 2003 server from the org, please make sure that you follow our documentation on this process.

- Dave Goldman
Public Folder Replication Troubleshooting - Part 4: Exchange Server 2007/2010 tips

Two years ago, I posted a three-part series on troubleshooting public folder replication. Part 1 discussed the replication of new data, Part 2 discussed the replication of existing data, and Part 3 discussed the replica deletion process and some common problems we saw with Exchange 2003. With this post, I want to update the series for Exchange 2007. In Exchange 2007, public folder replication works basically the same way it always has. The troubleshooting steps in the first three parts of the series all still apply. However, the admin tools have changed and the common problems we see with Exchange 2007 are a little different, so that's what I want to cover here.

Changes in the Admin Tools

The event log is still your best tool for narrowing down a replication problem to a particular point of failure. In Part 1, I suggested turning up logging on Replication Incoming and Replication Outgoing to Maximum. That still applies, except that with Exchange 2007 you'll be using the Set-EventLogLevel cmdlet to set "MSExchangeIS\9001 Public\Replication Incoming Messages" and "MSExchangeIS\9001 Public\Replication Outgoing Messages" to the Expert level.

In Part 2, I described how to use the Synchronize Hierarchy and Synchronize Content options in ESM to force a status message and to time out all outstanding backfill entries. You can still do this in Exchange 2007 via the Update-PublicFolderHierarchy and Update-PublicFolder cmdlets. These are also available in SP1's public folder management tool, appearing as Update Hierarchy when the Public Folders root is selected and Update Content when a particular public folder is selected. Because you can use these from the command line, they are a lot more flexible than the Exchange 2003 options. For instance, it's now very simple to time out backfill entries for every folder that has a replica on your Exchange 2007 server with a simple "Get-PublicFolderStatistics | Update-PublicFolder" command. That wasn't possible in Exchange 2003 without a lot of clicking.

In Part 3, I described how to use the Public Folder Instances view to see if the deletion of a replica has completed. In Exchange 2007, you use the Get-PublicFolderStatistics command to see that same information.

Common Problems in Exchange 2007

The most common symptom I've seen so far is a store driver failure. For instance, a backfill response will be sent to an Exchange 2007 server, but if you look at the application log on the 2007 side you never see the incoming replication event. Message tracking shows that the replication message got to the hub transport server and then failed in the store driver. This can happen for a number of reasons, and fortunately it usually is not that hard to troubleshoot. Your best troubleshooting approach in this case is to use the Pipeline Tracing and Content Conversion Tracing available on the hub transport server. If you run "Get-TransportServer | fl" you'll see a few settings related to this:

PipelineTracingEnabled : False
ContentConversionTracingEnabled : False
PipelineTracingPath : C:\Program Files\Microsoft\Exchange Server\TransportRoles\Logs\PipelineTracing
PipelineTracingSenderAddress : SERVER01-IS@contoso.com

To find out why your backfill response is failing in the store driver, set the PipelineTracingSenderAddress to match the SMTP address of the public folder store that's sending the backfill response. Then set ContentConversionTracingEnabled to $true and PipelineTracingEnabled to $true, and reproduce the problem.
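For example, a sketch of those three settings being applied from the Exchange Management Shell (the hub transport server name and sender address are placeholders; the parameters are the ones shown in the Get-TransportServer output above):

Set-TransportServer HUB01 -PipelineTracingSenderAddress "SERVER01-IS@contoso.com" -ContentConversionTracingEnabled $true -PipelineTracingEnabled $true
# Pipeline tracing captures full message content, so set PipelineTracingEnabled back to $false once the repro is captured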
After doing so, have a look in the folder specified by the PipelineTracingPath. You should find a subfolder called ContentConversionTracing, and an InboundFailures folder within that. In the InboundFailures folder, you'll have an EML file containing the replication message itself and a TXT file containing some information on the failure. The top of the TXT file will often give you a useful clue as to the reason for the failure. For instance, in a few cases we've seen this output in the TXT file:

Microsoft.Exchange.Data.Storage.PropertyValidationException: Property validation failed. Property = [{00020329-0000-0000-c000-000000000046}:'Keywords'] Categories Error = Element 0 in the multivalue property is invalid..

In this case it's complaining about the Categories property. This will happen if the public folder in question contains items on which the Categories property is set to be blank - such as with a single space - instead of truly being empty. You can see this by going into Outlook and choosing to view items by Category. You should find that there are two different sets of items with "None". To correct the problem, simply clear the Category on all of the "None" items using Outlook. You may have to set them to some other category and then set them back to None. Once the Categories property is truly clear, you'll only have one set of items that show "None", and the items will replicate successfully.

Another example:

Microsoft.Exchange.Data.Storage.PropertyValidationException: Property validation failed. Property = [{00062004-0000-0000-c000-000000000046}:0x8092] Email2AddrType Error = Email2AddrType is too long: maximum length is 9, actual length is 35..

In this case it's flagging the Email2AddrType property as a problem. We discovered that it had somehow been populated with the full email address on certain contacts, instead of containing only the address type that it normally should, such as 'SMTP' or 'EX'. Fixing that property allowed the items to replicate.

Sometimes the store driver will fail in a manner where it does not identify a specific problem property, such as in this output:

Microsoft.Exchange.Data.Storage.ConversionFailedException: Message content has become corrupted. ---> Microsoft.Exchange.Data.Storage.ConversionFailedException: Content conversion failed due to corrupt TNEF (violation status: 0x00000800)

This is what the failure looks like when you hit the problem described in KB 936000. Applying the fix to the Exchange 2003 server that's generating the replication message will correct that problem.

The important thing to take away from this is that Exchange 2007 does a lot of property validation to keep bad data from getting into the store. Although that's a good thing, it can keep public folder data from replicating from your Exchange 2003 servers until problems with the content are corrected on the 2003 server. ContentConversionTracing can help you identify these problems, and will often point you to the exact property causing the problem.

There's one more common problem you can identify with ContentConversionTracing, but it isn't caused by any actual problem with the content. The TXT file in the InboundFailures folder will look like this:

Microsoft.Exchange.Data.Storage.ConversionFailedException: The content conversion limit has been exceeded.
at Microsoft.Exchange.Data.Storage.InboundMimeConverter.ConvertToItem(MimePromotionFlags promotionFlags)
at Microsoft.Exchange.Data.Storage.ItemConversion.InternalConvertAnyMimeToItem(Item itemOut, EmailMessage messageIn, InboundConversionOptions options, MimePromotionFlags promotionFlags, Boolean isStreamToStream)
at Microsoft.Exchange.Data.Storage.ItemConversion.ConvertAnyMimeToItem(Item itemOut, EmailMessage messageIn, InboundConversionOptions options)

InboundConversionOptions:
- preferredCharset: iso-8859-1
- trustAsciiCharsets: True
- isSenderTrusted: False
- imceaResolveableDomain: contoso.com
- preserveReportBody: False
- clearCategories: True
- userADSession:
- recipientCache: Microsoft.Exchange.Data.Directory.Recipient.ADRecipientCache
- clientSubmittedSecurely: False
- serverSubmittedSecurely: False

ConversionLimits:
- maxMimeTextHeaderLength: 2000
- maxMimeSubjectLength: 255
- maxSize: 2147483647
- maxMimeRecipients: 12288
- maxRecipientPropertyLength: 1000
- maxBodyPartsTotal: 250
- maxEmbeddedMessageDepth: 30
- exemptPFReplicationMessages: True

First, notice the top line says, "The content conversion limit has been exceeded." Normally a public folder replication message is exempt from size limits and such, so why would this message be failing in this way? Notice that "isSenderTrusted" is False. This means the message came over an SMTP connection that was not authenticated. The sending server either failed to authenticate, or never even tried. This is very similar to what I described in the Common Problems section of Part 3, where an authentication failure would cause the XEXCH50 verb to fail in Exchange 2003. Because the sending server did not authenticate, the Exchange 2007 server applies the normal size limits to the message. If this is a content replication message with more than 250 attachments (not uncommon for a content backfill response), then it fails because it exceeds the limit.

This often happens because an administrator has created a second Receive Connector which does not allow authentication, but has it configured such that it listens on the internal IP rather than the external IP. If that's not the cause, you may need to use a Netmon capture to identify the problem as I described in Part 3.

Conclusion

That should cover everything you need to know to narrow down public folder replication problems in an environment with Exchange 2007. All the old troubleshooting steps still apply; we just have different administrative tools and a different set of problems. Hope this helps!

- Bill Long
Raising diagnostic logging for Message Access might cause calendar issues with Exchange 2007 SP2

There is potential for calendar-related problems when the new diagnostic logging for Message Access, which is available in Exchange Server 2007 Service Pack 2 (SP2), is raised from its default setting of Lowest. A Knowledge Base article and a fix are in the works at this time and will be available soon.

Important - some calendar items may remain broken even after applying the update. This post addresses scenarios where those lingering symptoms remain.

What the users may see

Symptoms before applying the pending update:

- Access to recurring appointments (which have attachments for the instances) is broken - Outlook in online mode receives an "Item cannot be opened" error.
- Sending an embedded message in cached mode results in the attachment being stripped.
- Availability is not shown for some users.

The following symptoms may persist, even after applying the update or manually setting the Message Access diagnostic level back to Lowest:

- Certain users show no availability information from the Outlook or OWA scheduling assistant. Also, event ID 4009 for MSExchange Availability is logged on servers with the CAS role: Exception returned: Microsoft.Exchange.Data.Storage.ObjectNotFoundException: Cannot open embedded message.
- Delegates viewing calendars receive the error: Cannot read one instance of this recurring appointment. Close any open appointments and try again, or recreate the appointment.
- Messages are sent to ActiveSync devices with the following text: Microsoft Exchange was unable to send the following items to your mobile device. These items have not been deleted. You should be able to access them using either Outlook or Outlook Web Access.
- When accessing the Calendar from OWA, the day, week or month view fails with the error: The item that you attempted to access no longer exists.

We have determined these symptoms are primarily due to calendar items affected between the time logging was increased and when the pending update or workaround is implemented. Recurring calendar items with no end date that have had an occurrence modified seem most susceptible. A quick method to find these visually is to look for the circling arrows with a line through them.

Does this apply to you?

Before the release of the pending update, if any Exchange Server 2007 SP2 server with the Mailbox role has the following new event log level raised from Lowest, this applies to you:

MSExchangeIS\9000 Private\Message Access

How to check your Organization for the problem

You can determine if your MBX servers are at risk by looking in the following places:

1) The new GUI introduced in SP2 - in the Exchange Management Console under Server Configuration, Mailbox, select the server and choose Manage Diagnostic Logging Properties...
2) The registry on each MBX server [Lowest = 0]
3) The following Exchange cmdlet, which finds all Exchange 2007 MBX servers and this specific diagnostic logging level for Message Access:

Get-MailboxServer | foreach {Get-EventLogLevel -id ($_.name + "\MSExchangeIS\9000 Private\Message Access")}

How to correct the problem

If any MBX server is found to have logging above the default before the pending update is applied, you should reset it to Lowest manually. Note which MBX servers are configured with the non-default level and then run this cmdlet to ensure they are all set to "Lowest", then either remount the databases or restart the Information Store service:
Get-MailboxServer | foreach {Set-EventLogLevel -id ($_.name + "\MSExchangeIS\9000 Private\Message Access") -Level "Lowest"}

A sample PowerShell script is available as an attachment on this blog post to track down calendar items contributing to the symptoms that persist after applying the workaround detailed above. This script will identify the day containing problem appointments and can be run against a specific mailbox or all Exchange 2007 mailboxes. The requirements for running the script are detailed in the script comments. The sample below uses the $true argument to enumerate all Exchange 2007 mailboxes and user42@contoso.com to initialize the Autodiscover portion of the Web Services object:

[PS] C:\Powershell\scripts> .\Find-BadCalendarItems.ps1 user42@contoso.com $true
Checking mailbox: user01@contoso.com
Checking mailbox: user02@contoso.com
...
Checking mailbox: user42@contoso.com
Checking mailbox: repro01@contoso.com
Failed: 11/30/2009 - 12/30/2009 Error: Mailbox logon failed., inner exception: Cannot open embedded message.
Day failed: 12/2/2009
Checking mailbox: repro02@contoso.com
Failed: 11/30/2009 - 12/30/2009 Error: Mailbox logon failed., inner exception: Cannot open embedded message.
Day failed: 12/23/2009
Checking mailbox: user43@contoso.com
Checking mailbox: lastuser@contoso.com
Problems found:
repro01@contoso.com: 12/2/2009
repro02@contoso.com: 12/23/2009
Done!

Now that 12/23/2009 has been identified as the problem date for user repro02@contoso.com, you can use Outlook to find any recurring calendar items with no end date that have had an occurrence modified on that day. Copy that occurrence (either to a temporary Calendar folder or even to a different time that day), then delete just that occurrence. Moving the copy back or manually recreating the instance will resolve the symptom for that user.

- Austin McCollum, Bill Long, Jon Runyon, Kris Waters
Scripting Corner Volume 4

I am always on the lookout for new and interesting scripts that are worth writing and worth writing about. Recently, I got two requests for scripts that I thought were useful, so I felt I would share them here. Both of these scripts were used to solve real world customer requests and both of them will be used multiple times in their environments.

Move-Mailboxcsv

The customer scenario for this script was that the customer was regularly moving a wide range of mailboxes between servers and databases to support their mailbox organization structure, requests for additional mailbox quota, etc. They had written their own script to read a CSV file of mailboxes to move and then do the move using foreach with the Move-Mailbox cmdlet. Since they were piping the CSV input directly into the foreach, it was only operating on one mailbox at a time. This resulted in them losing the multithreaded nature of the Move-Mailbox cmdlet.

In reviewing the example CSV file the customer provided, I realized that they were moving from multiple source locations but were moving to only a handful of target locations. This pointed me in the direction of optimizing the mailbox moves based on the target database. Thus the script I was able to provide them reads in the CSV file, identifies all unique combinations of Server/Storage Group/Mailbox Database, and processes the moves in batches based on the unique target database.

Break down of the script

The actual structure of this one is very simple thanks to the Select-Object cmdlet. This cmdlet has the parameter -Unique that will return only the objects whose values are unique based on the properties you have asked it to filter on. So for the script function "SelectUnique", all we have to do is take the entire contents of the CSV file that we imported into $csv using Import-Csv and run it through Select-Object, asking it to pick only the unique combinations of targetmbserver, targetmbsg, and targetmbdb. Once we have all of the unique combinations in the $targets array, we have everything we need to group the users together by target database. Piping the unique sets into the MoveMBX function, we can simply use a where statement to filter out the group of users for a given target database and move them in bulk.

For this script I was able to implement code that will ask the user for input if nothing is specified for the required -file script parameter. (Special thanks to Live Search and to Windows PowerShell in Action by Bruce Payette for helping me figure this out.)

[string] $file = $(if ($help -eq $false){Read-Host "CSV File"})

This uses the fact that you can specify a default value for a parameter and that you can use a script block to do that. So as long as -help was not specified on the command line and no value was given for -file, it will prompt the user asking for the CSV file. One of the things I love about PowerShell is that every time I sit down to do a script I learn something new.
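The overall shape of that approach, as a simplified sketch rather than the actual attached script (the CSV column names targetmbserver, targetmbsg and targetmbdb come from the customer's file; an 'alias' column for the mailbox identity is assumed here), looks roughly like this:

$csv = Import-Csv $file
# One entry per unique target Server\SG\DB combination
$targets = $csv | Select-Object targetmbserver,targetmbsg,targetmbdb -Unique
foreach ($target in $targets)
{
    # Everyone headed to this particular target database
    $batch = $csv | Where-Object {
        $_.targetmbserver -eq $target.targetmbserver -and
        $_.targetmbsg -eq $target.targetmbsg -and
        $_.targetmbdb -eq $target.targetmbdb }
    $db = $target.targetmbserver + "\" + $target.targetmbsg + "\" + $target.targetmbdb
    # Piping the whole batch at once lets Move-Mailbox use its own multithreading
    $batch | foreach { Get-Mailbox $_.alias } | Move-Mailbox -TargetDatabase $db -Confirm:$false
}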
New-DBFromXML

I was contacted by one of our Premier Field Engineers with a script that her customer was working on for doing bulk mailbox database and storage group creation. Originally the script used a CSV file and required that you specify the SG name, log file path, DB name, and DB path for each database by hand in the CSV file. Looking at their file, I realized that like most customers they had a naming scheme that used a prefix followed by a number to indicate each storage group and database. Since this information was unchanging from server to server, why keep working with it in the CSV file?

With a bit of playing with the XML type in PowerShell, I was able to come up with a script that, given a simple XML configuration file, will create a specified number of Storage Group / Database combinations on a server.

Break down of the script

The only function I created for this script was the showhelp function for outputting the usage information to the host. The actions needed in this script were very straightforward and only did one thing, so I felt there was no reason to do the work inside of a function and just put everything in the main body. The key here is that the XML file has a list of all valid database drive locations and all valid log file locations. The only other thing we need to specify is the prefixes we need for the names and paths, and then the number of databases that we want to have created.

The nice thing about working with XML files in PowerShell is that you can just use the same dot notation you are used to using when working with objects to get to information in an XML file. If you are interested in using XML files in a script, I would recommend you play with them by importing an XML file into a variable and then just playing around with the variable to get a feel for how they work in PowerShell.

[xml]$xml = get-content $file

You can even modify the XML file while it is loaded in as a variable and then, using the Save method, save it back to disk in the modified form.

$xml.save("C:\temp\myxml.xml")

Once I have the configuration XML file loaded, I pull the drive locations into some arrays to make them easy to work with. Next we set up a while loop that will do the work of creating the storage group and databases. The only hard part here is constructing all of the paths and names into the form that the cmdlet needs based on the information from the XML. Once that is done it is simply a matter of incrementing our indexes and running through the loop again to create the next storage group / mailbox database.

For this particular customer implementation we didn't put any logic in to group the databases; instead we went with a simple rotation of file locations. What this means is that if I specified four database paths and then told the script to create twelve Storage Group / Mailbox Databases, I would end up with them laid out on disk like this:

Drive 1: Database 1, Database 5, Database 9
Drive 2: Database 2, Database 6, Database 10
Drive 3: Database 3, Database 7, Database 11
Drive 4: Database 4, Database 8, Database 12

This was an implementation the customer was happy with; since all of the folders and files were labeled clearly, it was still easy to find the databases, and using this method it was easy for the script to support creating a number of databases that was not a multiple of the locations provided.

Conclusion

Hopefully these scripts have shown you some of the things that real world customers have started using PowerShell to do for them and have sparked your interest in creating your own scripts. As always, if you see anything I missed or need to clarify, or you want to comment, please don't hesitate - I welcome all feedback. Also, I need your script ideas. I have had a few submitted and I am hoping to include at least one of them in a future Scripting Corner.

Download the scripts, attached to this blog post. Please note! These scripts are not officially supported by Microsoft. Please see the scripts for details!

- Matthew Byrd
Use Exchange Web Services and PowerShell to Discover and Remove Direct Booking Settings

Update 7/15/2013: we have made a few updates to the below blog post to adjust the instructions for the release of version 2 of the script.

Prior to Exchange 2007, there were two primary methods of implementing automated resource scheduling - Direct Booking and the AutoAccept Agent (a store event sink released as a web download for Exchange 2003). In Exchange 2007, we changed how automated resource scheduling is implemented. The AutoAccept Agent is no longer supported, and the Direct Booking method, technically an Outlook function, has been replaced with a server-side calendar booking function called the Resource Booking Attendant.

Note: There are various terms associated with this new Resource Booking function, such as Calendar Processing, Automatic Resource Booking, Calendar Attendant Processing, Automated Processing and Resource Booking Assistant. We will be using the "Resource Booking Attendant" nomenclature for this article.

While the Direct Booking method for resource scheduling can indeed work on Exchange Server 2007/2010/2013, we strongly recommend that you disable Direct Booking for resource mailboxes and use the Resource Booking Attendant instead. Specifically, we are referring to the "AutoAccept" Automated Processing feature of the Resource Booking Attendant, which can be enabled for a mailbox after it has been migrated to Exchange 2007 or later and upgraded to a Resource Mailbox.

Note: The published resource mailbox upgrade guidance on TechNet specifies to disable Direct Booking in the resource mailbox while still on Exchange 2003, move the mailbox, and then enable the AutoAccept functionality via the Resource Booking Attendant. This order of steps can introduce an unnecessary amount of time where the resource mailbox may be without automated scheduling capabilities. We are currently working to update that guidance to reflect moving the mailbox first, and only then proceeding with disabling the Direct Booking functionality, after which the AutoAccept functionality via the Resource Booking Attendant can be immediately enabled. This will shorten the duration where the mailbox is without automated resource scheduling capabilities.
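For reference, enabling that AutoAccept behavior is a one-liner in the shell. This is only a sketch with a placeholder room name: Exchange 2010/2013 use Set-CalendarProcessing, while Exchange 2007 uses the equivalent Set-MailboxCalendarSettings cmdlet.

# Exchange 2010/2013
Set-CalendarProcessing -Identity "ConfRoom-1" -AutomateProcessing AutoAccept
# Exchange 2007
Set-MailboxCalendarSettings -Identity "ConfRoom-1" -AutomateProcessing AutoAccept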
How do I check which mailboxes have Direct Booking Enabled?

How does one validate whether Direct Booking settings are enabled on mailboxes in the organization, especially if mailboxes had previously been hosted on Exchange 2003?

Figure 1: Checking Direct Booking settings in Microsoft Outlook 2010

Unfortunately, the manual steps involve assigning permissions to all mailboxes, creating MAPI profiles for each mailbox, logging into each mailbox, checking Tools > Options > Calendar > Resource Scheduling, noting which of the three Direct Booking checkboxes are checked, clicking OK/Cancel a few times, and logging out of the mailbox. Whew! That can be a major undertaking even for a small to midsize company that has more than a handful of mailboxes! Having staff perform this type of activity manually can be a costly and tedious endeavor. Once you have discovered which mailboxes have the Direct Booking settings enabled, you would then have to repeat this entire process to disable these settings unless you removed them at the time of discovery.

Having an automated method to discover, track, and even disable Direct Booking settings would be nice, right? Look no further, we have the solution for you! Using Exchange Web Services (EWS) and PowerShell, we can automate the discovery of Direct Booking settings that are enabled, track the results, and even disable them! We wrote Remove-DirectBooking.ps1, a sample script, to do exactly that and even more to aid in automating this manual effort. The script is attached to this blog post. After you've downloaded it, rename the file and remove the .txt extension.

IMPORTANT: The previously uploaded script had the last line truncated to Stop-Tran (instead of Stop-Transcript). We've uploaded an updated version to TechNet Gallery. If you downloaded the previous version of the script, please download the updated version. Alternatively, you can open the previously downloaded version in Notepad or another text editor and correct the last line to Stop-Transcript.

Let's break down the major tasks the PowerShell script does:

Uses EWS Application Impersonation to tap into a mailbox (or set of mailboxes) and read the three MAPI properties where the Direct Booking settings are stored. It does this by accessing the localfreebusy item sitting in the NON_IPM_SUBTREE\FreeBusy Data folder, which resides in the root of the Information Store in the mailbox. The three MAPI properties and their equivalent Outlook settings the script looks at are:

0x686d: Automatically accept meeting requests and remove canceled meetings
0x686f: Automatically decline meeting requests that conflict with an existing appointment or meeting
0x686e: Automatically decline recurring meeting requests

These three properties contain Boolean values mirroring the Resource Scheduling checkboxes found in Outlook (see Figure 1 above). A short sketch of reading these properties with the EWS Managed API follows this task list.

For each mailbox processed, the script attempts to locate the corresponding free/busy message stored in the 'Schedule+ Free/Busy' system Public Folder representing the user. This item must be updated just like the user's local mailbox item – the two items must be consistent in their settings.
We need to do this because Outlook only considers the settings in the Public Folder free/busy item when a user attempts to Direct Book a resource. Therefore, it is critical that the script checks for the Public Folder item's existence and that its settings are in sync with the localfreebusy item stored in the mailbox itself.

For mailboxes where Direct Booking settings were detected, it checks for conflicts by determining if the mailbox also has the Resource Booking Attendant enabled with AutomateProcessing set to AutoAccept.

Optionally, disables any enabled Direct Booking settings encountered. Note: It is important to understand that by default the script runs in a read-only mode. Additional command-line switches are available to run the script to disable Direct Booking settings.

Writes a detailed runtime processing log to the console and to a log file.

Creates a simple output text file containing a list of mailboxes that can later be leveraged as an input file to feed the script for disabling the Direct Booking functionality.

Creates a CSV file containing statistics for the list of mailboxes processed, with detailed information such as what was discovered, any errors encountered, and optionally what was disabled. This is useful for performing analysis in the discovery phase and can also be used as another source to create an input file to feed into the script for disabling the Direct Booking functionality.
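As a rough sketch of how the first task above can be approached with the EWS Managed API from PowerShell (this is not the Remove-DirectBooking.ps1 code itself: the mailbox address and DLL path are placeholders, and the folder/item search is simplified compared to what the real script does):

# Load the EWS Managed API (path assumed; adjust to the version installed)
Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\1.2\Microsoft.Exchange.WebServices.dll"

$mailbox = "confroom1@contoso.com"   # hypothetical mailbox
$service = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService([Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP1)
$service.UseDefaultCredentials = $true
$service.AutodiscoverUrl($mailbox, {$true})
$service.ImpersonatedUserId = New-Object Microsoft.Exchange.WebServices.Data.ImpersonatedUserId([Microsoft.Exchange.WebServices.Data.ConnectingIdType]::SmtpAddress, $mailbox)

# The three Direct Booking MAPI properties, defined as extended properties
$mapiBool = [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::Boolean
$autoAccept       = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x686D, $mapiBool)
$declineConflict  = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x686F, $mapiBool)
$declineRecurring = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x686E, $mapiBool)

# Locate the "Freebusy Data" folder under the mailbox root (non-IPM subtree)
$folderFilter = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo([Microsoft.Exchange.WebServices.Data.FolderSchema]::DisplayName, "Freebusy Data")
$folderView   = New-Object Microsoft.Exchange.WebServices.Data.FolderView(10)
$fbFolder = $service.FindFolders([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Root, $folderFilter, $folderView).Folders[0]

# Retrieve the localfreebusy item along with the three extended properties
$propSet = New-Object Microsoft.Exchange.WebServices.Data.PropertySet([Microsoft.Exchange.WebServices.Data.BasePropertySet]::FirstClassProperties)
$propSet.Add($autoAccept); $propSet.Add($declineConflict); $propSet.Add($declineRecurring)
$itemView = New-Object Microsoft.Exchange.WebServices.Data.ItemView(1)
$itemView.PropertySet = $propSet
$itemFilter = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+ContainsSubstring([Microsoft.Exchange.WebServices.Data.ItemSchema]::Subject, "LocalFreebusy")
$item = $service.FindItems($fbFolder.Id, $itemFilter, $itemView).Items[0]

# Read each Boolean; $true means the corresponding Direct Booking checkbox is enabled
$value = $null
if ($item.TryGetProperty($autoAccept, [ref]$value))       { "Automatically accept meeting requests: $value" }
if ($item.TryGetProperty($declineConflict, [ref]$value))  { "Decline conflicting meeting requests:  $value" }
if ($item.TryGetProperty($declineRecurring, [ref]$value)) { "Decline recurring meeting requests:    $value" }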
Example Scenarios

Here are a couple of example scenarios that illustrate how to use the script to discover and remove enabled Direct Booking settings.

Scenario 1

You've recently migrated from Exchange 2003 to Exchange 2010 and would like to disable Direct Booking for your company's conference room mailboxes as well as any user mailboxes that may have Direct Booking settings enabled. The administrator's logged-in account has Application Impersonation rights and the View-Only Recipients RBAC role assigned.

On a machine that has the Exchange management tools and the Exchange Web Services API 1.2 or greater installed, open the Exchange Management Shell, navigate to the folder containing the script, and run the script using the following syntax:

.\Remove-DirectBooking.ps1 -identity * -UseDefaultCredentials

The script will process all mailboxes in the organization with detailed logging sent to the shell on the console. Note: depending on the number of mailboxes in the org, this may take some time to complete. When the script completes, open the Remove-DirectBooking_<timestamp>.txt file in Notepad, which will contain a list of mailboxes that have Direct Booking enabled:

Figure 2: Output file containing list of mailboxes with Direct Booking enabled

After reviewing the list, rerun the script with the InputFile parameter and the RemoveDirectBooking switch:

.\Remove-DirectBooking.ps1 -InputFile '.\Remove-DirectBooking_<timestamp>.txt' -UseDefaultCredentials -RemoveDirectBooking

The script will process all the mailboxes listed in the input file with detailed logging sent to the shell on the console. Because you specified the RemoveDirectBooking switch, it does not run in read-only mode and disables all currently enabled Direct Booking settings encountered. When the script completes, you can check the status of the removal operation by checking the Remove-DirectBooking_<timestamp>.csv file. A column called Direct Booking Removed? will record whether the removal was successful. You can also check the runtime processing log file RemoveDirectBooking_<timestamp>.log as well.

Figure 3: Reviewing runtime log file in Excel

Note: The Direct Booking Removed? column now shows Yes where applicable, but the three Direct Booking settings columns still show their various values as "Yes"; this is because we record those three values pre-removal. If you were to run the script again in read-only mode against the same input file, those columns would reflect a value of N/A since there would no longer be any Direct Booking settings enabled. The Resource Room?, AutoAccept Enabled?, and Conflict Detected columns all have a value of N/A regardless because they are not relevant when disabling the Direct Booking settings.

Scenario 2

You're an administrator who's new to an organization. You know that they migrated from Exchange 2003 to Exchange 2007 in the distant past and are currently in the process of implementing Exchange 2010, having already migrated some users to Exchange 2010. You have no idea what resource mailboxes or even user mailboxes may be using Direct Booking and would like to discover who has what Direct Booking settings enabled. You would then like to selectively choose which mailboxes to pilot for Direct Booking removal before taking action on the majority of found mailboxes. Here's how you would accomplish this using the Remove-DirectBooking.ps1 script:

Obtain a service account that has Application Impersonation rights for all mailboxes in the org. Ensure the service account has at least the Exchange View-Only Administrator role (2007) and at least an RBAC role assignment of View Only Recipients (2010/2013).

On a machine that has the Exchange management tools and the Exchange Web Services API 1.2 or greater installed, preferably an Exchange 2010 server, open the Exchange Management Shell, navigate to the folder containing the script, and run the script using the following syntax:

.\Remove-DirectBooking.ps1 -Identity *

The script will prompt you for the domain credentials of the account you wish to use because no credentials were specified. Enter the service account's credentials. The script will process all mailboxes in the organization with detailed logging sent to the shell on the console. Note: depending on the number of mailboxes in the org, this may take some time to complete. When the script completes, open the Remove-DirectBooking_<timestamp>.csv in Excel, which will look something like:

Figure 4: Reviewing the Remove-DirectBooking_<timestamp>.csv in Excel

Filter or sort the table by the Direct Booking Enabled? column. This will provide a list that can be scrutinized to determine which mailboxes are to be piloted with Direct Booking removal, such as those that have conflicts because the Resource Booking Attendant's Automated Processing is already set to AutoAccept (which you can also filter on using the AutoAccept Enabled? column).

Once the list has been reviewed and the targeted mailboxes isolated, simply copy their email addresses into a text file (one address per line), save the text file, and use it as the input source for running the script to disable the Direct Booking settings:

.\Remove-DirectBooking.ps1 -InputFile '.\' -RemoveDirectBooking

As before, the script will prompt you for the domain credentials of the account you wish to use. Enter the service account's credentials. The script will process all the mailboxes listed in the input file with detailed logging sent to the shell on the console. It will disable all enabled Direct Booking settings encountered.
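If you prefer not to build the pilot input file by copying addresses by hand, the analysis CSV can also be filtered directly in PowerShell. The following is only a sketch: the Direct Booking Enabled? and AutoAccept Enabled? column names are described in this post, but the address column name and output file name are assumptions, so adjust them to match the actual CSV headers.

# Build an input file from the analysis CSV (address column and file names partly assumed)
Import-Csv '.\Remove-DirectBooking_<timestamp>.csv' |
    Where-Object { $_.'Direct Booking Enabled?' -eq 'Yes' -and $_.'AutoAccept Enabled?' -eq 'Yes' } |
    ForEach-Object { $_.'Email Address' } |
    Set-Content '.\PilotMailboxes.txt'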
Use the same validation steps at the end of the previous example to verify the removal was successful.

Script Options and Caveats

Please see the script's help section (via "get-help .\remove-DirectBooking.ps1 -full") for full information on all the available parameters. Here are some additional options that may be useful in certain scenarios:

EWSURL: By default, the script will attempt to retrieve the EWS URL for each mailbox via AutoDiscover. This is preferred, especially in complex multi-datacenter or hybrid Exchange Online/on-premises environments where different EWS URLs may be in play for any given mailbox depending on where it resides in the org. However, there may be times when one would want to supply an EWS URL manually, such as when AutoDiscover is having "issues", or when the response time for AutoDiscover requests is introducing delays in overall script execution (think of a very large number of mailbox identities to churn through) and the EWS URL is the same across the org. In these situations, one can use the EWSURL parameter to feed the script a static EWS URL.

UseDefaultCredentials: If the current user is the service account, or simply has both the Impersonation and the necessary Exchange Admin rights per the script's requirements, and they don't wish to be prompted to type in a credential (another great example is scheduling the script to run as a job), you can use the UseDefaultCredentials switch to run the script under that security context.

RemoveDirectBooking: By default, the script runs in read-only mode. In order to make changes and disable Direct Booking settings on the mailbox, you must specify the RemoveDirectBooking switch.

The script does have several prerequisites and caveats to ensure proper operation and meaningful results:

Application Impersonation rights and minimum Exchange Admin rights must be used. For Exchange 2007, see Configuring Exchange Impersonation (Exchange Web Services); for Exchange 2010/2013, see Configuring Exchange Impersonation (uses RBAC).
Exchange Web Services Managed API 1.2 or later must be installed on the machine running the script.
Exchange management tools must be installed on the machine running the script.
The script must be executed from within the Exchange Management Shell.
The Shell session must have the appropriate execution policy to allow the script to be executed (by default, you can't execute unsigned scripts).
AutoDiscover must be configured correctly (unless the EWS URL is entered manually).
Exchange 2003-based mailboxes cannot be targeted due to lack of EWS capabilities.
In an Exchange 2010/2013 environment that also has Exchange 2007 mailboxes present, the script should be executed from a machine running the Exchange 2010/2013 management tools due to changes in the cmdlets in those versions.
Due to limitations in the EWS architecture, the Schedule+ Free/Busy system folder subfolders must contain a replica on each version of Exchange that users are being processed for; additionally, the user's Exchange mailbox database 'Default public folder database' property must be set to a Public Folder database that is on the same version of Exchange as the mailbox database.

Summary

The discovery and removal of Direct Booking settings can be a tedious and costly process to perform manually, but you can avoid and automate it using current functions and features via PowerShell and EWS in Microsoft Exchange Server 2007, 2010, & 2013.
With careful use, the Remove-DirectBooking.ps1 script can be a valuable tool to aid Exchange administrators in maintaining automated resource scheduling capabilities in their Microsoft Exchange environments. Your feedback and comments are welcome. Thank you to Brian Day and Nino Bilic for their guidance in content review, and to our customers (you know who you are) for piloting the script.

Seth Brandes & Dan Smith

Analyzing Exchange Transaction Log Generation Statistics
Update 1/31/2017: Please see the updated version of this post that explains a significant update to this script. To download the script, see the attachment to this blog post.

Overview

When designing a site resilient Exchange Server solution, one of the required planning tasks is to determine how many transaction logs are generated on an hourly basis. This helps figure out how much bandwidth will be required when replicating database copies between sites, and what the effects will be of adding additional database copies to the solution. If designing an Exchange solution using the Exchange Server Role Requirements Calculator, the percent of logs generated per hour is an optional input field.

Previously, the most common method of collecting this data involved taking captures of the files in each log directory on a scheduled basis (using dir, Get-ChildItem, or CollectLogs.vbs). Although the log number could be extracted by looking at the names of the log files, there was a lot of manual work involved in figuring out the highest log generation from each capture and getting rid of duplicate entries. Once cleaned up, the data still had to be analyzed manually using a spreadsheet or a calculator. Trying to gather data across multiple servers and databases further complicated matters.

To improve upon this situation, I decided to write an all-in-one script that could collect transaction log statistics, and analyze them after collection. The script is called GetTransactionLogStats.ps1. It has two modes: Gather and Analyze. Gather mode is designed to be run on an hourly basis, on the top of the hour. When run, it will take a single set of snapshots of the current log generation number for all configured databases. These snapshots will be sent, along with the time the snapshots were taken, to an output file, LogStats.csv. Each subsequent time the script is run in Gather mode, another set of snapshots will be appended to the file. Analyze mode is used to process the snapshots that were taken in Gather mode, and should be run after a sufficient number of snapshots have been collected (at least 2 weeks of data is recommended). When run, it compares the log generation number in each snapshot to the previous snapshot to determine how many logs were created during that period.

Script Features

Less Data to Collect

Instead of looking at the files within log directories, the script uses Perfmon to get the current log file generation number for a specific database or storage group. This number, along with the time it was obtained, is the only information kept in the output log file, LogStats.csv. The performance counters that are used are as follows:

Exchange 2013/2016: MSExchangeIS HA Active Database\Current Log Generation Number
Exchange 2010: MSExchange Database ==> Instances\Log File Current Generation

Note: The counter used for Exchange 2013/2016 contains the active databases on that server, as well as any now passive databases that had been activated on that server at some point since the last reboot. The counter used for Exchange 2010 contains all databases on that server, including all passive copies. To only get data from active databases, make sure to manually specify the databases for that server in the TargetServers.txt file. Alternatively, you can use the DontAnalyzeInactiveDatabases parameter when performing the analysis to exclude databases that did not increment their log count.
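To see exactly what the script reads, you can query the same counters yourself from PowerShell. A quick sketch (the server names are hypothetical; version 2.0 and later of the script wraps the collection in Invoke-Command rather than querying counters remotely):

# Exchange 2013/2016: current log generation for databases active on EX01
Invoke-Command -ComputerName EX01 -ScriptBlock {
    (Get-Counter '\MSExchangeIS HA Active Database(*)\Current Log Generation Number').CounterSamples |
        Select-Object InstanceName, CookedValue
}

# Exchange 2010: current log generation for all databases on EX02
(Get-Counter '\MSExchange Database ==> Instances(*)\Log File Current Generation' -ComputerName EX02).CounterSamples |
    Select-Object InstanceName, CookedValue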
Multi Server/Database Support

The script takes a simple input file, TargetServers.txt, where each line in the file specifies the server, or server and databases, to process. If you want to get statistics for all databases on a server, only the server name is necessary. If you only want to get a subset of databases on a server (for instance, if you wanted to omit secondary copies on an Exchange 2010 server), then you can specify the server name, followed by each database you want to process.

Built In Analysis Capability

The script has the ability to analyze the output log file, LogStats.csv, that was created when run in Gather mode. It does a number of common calculations for you, but also leaves the original data in case any other calculations need to be done. Output from running in Analyze mode is sent to multiple .CSV files, where one file is created for each database, and one more file is created containing the average statistics for all analyzed databases. The following columns are added to the CSV files:

Hour: The hour that log stats are being gathered for. Can be between 0 – 23.
TotalLogsCreated: The total number of logs created during that hour for all days present in LogStats.csv.
TotalSampleIntervalSeconds: The total number of seconds between each valid pair of samples for that hour. Because the script gathers Perfmon data over the network, the sample interval may not always be exactly one hour.
NumberOfSamples: The number of times that the log generation was sampled for the given hour.
AverageSample: The average number of logs generated for that hour, regardless of sample interval size. Formula: TotalLogsCreated / NumberOfSamples.
PercentDailyUsage: The percent of all logs that that particular hour accounts for. Formula: LogsCreatedForHour / LogsCreatedForAllHours * 100.
PercentDailyUsageForCalc: The ratio of all logs for this hour compared to all logs for all hours. Formula: LogsCreatedForHour / LogsCreatedForAllHours.
AverageSamplePer60Minutes: Similar to AverageSample, but adjusts the value as if each sample were taken exactly 60 minutes apart. Formula: TotalLogsCreated / TotalSampleIntervalSeconds * 3600.

Database Heat Map

As of version 2.0, this script also generates a database heat map when run in Analyze mode. The heat map shows how many logs were generated for each database during the duration of the collection. This information can be used to figure out if databases, servers, or entire Database Availability Groups are over- or underutilized compared to their peers. The database heat map consists of two files:

HeatMap-AllCopies.csv: A heat map of all tracked databases, including databases that may have failed over during the collection duration and were tracked on multiple servers. This heat map shows the server-specific instance of each database. Example:
HeatMap-DBsCombined.csv: A heat map containing only a single instance of each unique database. In cases where multiple copies of the same database had generated logs, the log count from each will be combined into a single value. Example:

Requirements

The script has the following requirements:

Target Exchange Servers must be running Exchange 2010, 2013, or 2016.
PowerShell Remoting must be enabled on the target Exchange Servers, and configured to allow connections from the machine where the script is being executed.

Parameters

The script has the following parameters:

-Gather: Switch specifying we want to capture current log generations. If this switch is omitted, the -Analyze switch must be used.
-Analyze: Switch specifying we want to analyze already captured data. If this switch is omitted, the -Gather switch must be used.
-ResetStats: Switch indicating that the output file, LogStats.csv, should be cleared and reset. Only works if combined with -Gather.
-WorkingDirectory: The directory containing TargetServers.txt and LogStats.csv. If omitted, the working directory will be the current working directory of PowerShell (not necessarily the directory the script is in).
-LogDirectoryOut: The directory to send the output log files from running in Analyze mode to. If omitted, logs will be sent to WorkingDirectory.
-MaxSampleIntervalVariance: The maximum number of minutes that the duration between two samples can vary from 60. If we are past this amount, the sample will be discarded. Defaults to a value of 10.
-MaxMinutesPastTheHour: How many minutes past the top of the hour a sample can be taken. Samples past this amount will be discarded. Defaults to a value of 15.
-MonitoringExchange2013: Whether there are Exchange 2013/2016 servers configured in TargetServers.txt. Defaults to $true. If there are no 2013/2016 servers being monitored, set this to $false to increase performance.
-DontAnalyzeInactiveDatabases: When running in Analyze mode, this specifies that any databases that have been found that did not generate any logs during the collection duration will be excluded from the analysis. This is useful in excluding passive databases from the analysis.

Usage

Runs the script in Gather mode, taking a single snapshot of the current log generation of all configured databases:
PS C:\> .\GetTransactionLogStats.ps1 -Gather

Runs the script in Gather mode, and indicates that no Exchange 2013/2016 servers are configured in TargetServers.txt:
PS C:\> .\GetTransactionLogStats.ps1 -Gather -MonitoringExchange2013 $false

Runs the script in Gather mode, and changes the directory where TargetServers.txt is located, and where LogStats.csv will be written to:
PS C:\> .\GetTransactionLogStats.ps1 -Gather -WorkingDirectory "C:\GetTransactionLogStats" -ResetStats

Runs the script in Analyze mode:
PS C:\> .\GetTransactionLogStats.ps1 -Analyze

Runs the script in Analyze mode, and excludes database copies that did not generate any logs during the collection duration:
PS C:\> .\GetTransactionLogStats.ps1 -Analyze -DontAnalyzeInactiveDatabases $true

Runs the script in Analyze mode, sending the output files for the analysis to a different directory. Specifies that only sample durations between 55-65 minutes are valid, and that each sample can be taken a maximum of 10 minutes past the hour before being discarded:
PS C:\> .\GetTransactionLogStats.ps1 -Analyze -LogDirectoryOut "C:\GetTransactionLogStats\LogsOut" -MaxSampleIntervalVariance 5 -MaxMinutesPastTheHour 10

Example TargetServers.txt

The following example shows what the TargetServers.txt input file should look like. For the server1 and server3 lines, no databases are specified, which means that all databases on the server will be sampled. For the server2 and server4 lines, we will only sample the specified databases on those servers. Note that no quotes are necessary for databases with spaces in their names.
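The original example file isn't reproduced here, but based on the description above it might look something like the following. The server and database names are made up, and the exact delimiter between the server name and its database names should be confirmed against the script's documentation (a comma is assumed here):

server1
server2,DB001,DB002
server3
server4,Mailbox Database 01,Mailbox Database 02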
Output File After Running in Gather Mode

When run in Gather mode, the log generation snapshots that are taken are sent to LogStats.csv. The following shows what this file looks like:

Output File After Running in Analyze Mode

The following shows the analysis for a single database after running the script in Analyze mode:

Running As a Scheduled Task

Since the script is designed to be run on an hourly basis, the easiest way to accomplish that is to run the script via a Scheduled Task. The way I like to do that is to create a batch file which calls Powershell.exe and launches the script, and then create a Scheduled Task which runs the batch file. The following is an example of the command that should go in the batch file:

powershell.exe -noninteractive -noprofile -command "& {C:\LogStats\GetTransactionLogStats.ps1 -Gather -WorkingDirectory C:\LogStats}"

In this example, the script, as well as TargetServers.txt, are located in C:\LogStats. Note that I specified a WorkingDirectory of C:\LogStats so that if the Scheduled Task runs in an alternate location (by default C:\Windows\System32), the script knows where to find TargetServers.txt and where to write LogStats.csv. Also note that the command does not load any Exchange snapin, as the script doesn't use any Exchange-specific commands.

Notes

The following information only applies to versions of this script older than 2.0: By default, the Windows Firewall on an Exchange 2013 server running on Windows Server 2012 does not allow remote Perfmon access. I suspect this is also the case with Exchange 2013 running on Windows Server 2008 R2, but haven't tested. If either of the below errors is logged, you may need to open the Windows Firewall on these servers to allow access from the computer running the script.

ERROR: Failed to read perfmon counter from server SERVERNAME
ERROR: Failed to get perfmon counters from server SERVERNAME

Update: After noticing that multiple people were having issues getting this to work through the Windows Firewall, I tried enabling different combinations of built-in firewall rules until I could figure out which ones were required. I only tested on an Exchange 2013 server running on Windows Server 2012, but this should apply to other Windows versions as well. The rules I had to enable were:

File and Printer Sharing (NB-Datagram-In)
File and Printer Sharing (NB-Name-In)
File and Printer Sharing (NB-Session-In)
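If you hit those errors, one way to enable the three rules on Windows Server 2012 or later (where the NetSecurity module is available) is shown below; this is offered as a convenience rather than something from the original post:

# Enable the built-in File and Printer Sharing rules needed for remote Perfmon access
$rules = "File and Printer Sharing (NB-Datagram-In)",
         "File and Printer Sharing (NB-Name-In)",
         "File and Printer Sharing (NB-Session-In)"
Enable-NetFirewallRule -DisplayName $rules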
Mike Hendrickson

Updates

11/5/2013: Added a section on firewall rules to try.

7/17/2014: Added a section on running as a scheduled task.

3/28/2016, Version 2.0:
Instead of running Get-Counter -ComputerName to remotely access Perfmon counters, the script now uses PowerShell Remoting, specifically Invoke-Command -ComputerName, so that all counter collection is done locally on each target server. This significantly speeds up the collection duration.
The script now supports using the -Verbose switch to provide information during script execution.
Per Thomas Stensitzki's script variation, added functionality so that DateTime values can be properly parsed on non-English (US) based computers.
Added functionality to generate a database heat map based on log usage.

6/22/2016, Version 2.1:
When run in -Gather mode, the script now uses Test-WSMan against each target computer to verify Remote PowerShell connectivity prior to doing the log collection.
Added a new column to the log analysis files, PercentDailyUsageForCalc, which allows for direct copy/paste into the Exchange Server Role Requirements Calculator. Additionally, the script will try to ensure that all rows in the column add up to exactly 1 (requires samples from all 24 hours of the day).
Significantly increased performance of analysis operations.