Making your public folder migrations faster and more reliable
The key to a successful migration (of any type) is to ensure that the source data is in a healthy condition. Public folder migrations are no different, especially if you have been using public folders for years. Orphaned ACLs, mismatched mail-enabled public folder objects (MEPFs), or corrupted dumpster folders can cause public folder migrations to slow down considerably, if not fail altogether.

Introducing: Log Parser Studio
To download Log Parser Studio, please see the attachment on this blog post.

Anyone who regularly uses Log Parser 2.2 knows just how useful and powerful it can be for obtaining valuable information from IIS (Internet Information Services) and other logs. In addition, adding the power of SQL allows explicit searching of gigabytes of logs, returning only the data that is needed while filtering out the noise. The only thing missing is a great graphical user interface (GUI) to function as a front end to Log Parser, and a 'Query Library' in which to manage all those great queries and scripts that one builds up over time. Log Parser Studio was created to fulfill this need; by allowing those who use Log Parser 2.2 (and even those who don't, due to the lack of an interface) to work faster and more efficiently to get to the data they need with less "fiddling" with scripts and folders full of queries.

With Log Parser Studio (LPS for short) we can house all of our queries in a central location. We can edit and create new queries in the 'Query Editor' and save them for later. We can search for queries using free-text search, as well as export and import both libraries and queries in different formats, allowing for easy collaboration as well as storing multiple types of separate libraries for different protocols.

Processing Logs for Exchange Protocols

We all know this very well: processing logs for different Exchange protocols is a time-consuming task. In the absence of special-purpose tools, it becomes a tedious task for an Exchange administrator to sift through those logs and process them using Log Parser (or some other tool), if output format is important. You also need expertise in writing those SQL queries. You can also use special-purpose scripts that one can find on the web and then analyze the output to make some sense out of those lengthy logs. Log Parser Studio is mainly designed for quick and easy processing of different logs for Exchange protocols. Once you launch it, you'll notice tabs for different Exchange protocols, i.e. Microsoft Exchange ActiveSync (MAS), Exchange Web Services (EWS), Outlook Web App (OWA/HTTP) and others. Under those tabs there are dozens of SQL queries written for specific purposes (the description and other particulars of a query are also available in the main UI), which can be run with just one click! Let's get into the specifics of some of the cool features of Log Parser Studio.

Query Library and Management

Upon launching LPS, the first thing you will see is the Query Library, preloaded with queries. This is where we manage all of our queries. The library is always available by clicking on the Library tab. You can load a query for review or execution using several methods. The easiest method is to simply select the query in the list and double-click it. Upon doing so, the query will auto-open in its own Query tab. The Query Library is home base for queries. All queries maintained by LPS are stored in this library. There are easy controls to quickly locate desired queries and mark them as favorites for quick access later.

Library Recovery

The initial library that ships with LPS is embedded in the application and created upon install. If you ever delete, corrupt or lose the library, you can easily reset back to the original by using the recover library feature (Options | Recover Library). When recovering the library, all existing queries will be deleted.
If you have custom/modified queries that you do not want to lose, you should export those first; then, after recovering the default set of queries, you can merge them back into LPS.

Import/Export

Depending on your need, the entire library or subsets of the library can be imported and exported either in the default LPS XML format or as SQL queries. For example, if you have a folder full of Log Parser SQL queries, you can import some or all of them into LPS's library. Usually, the only thing you will need to do after the import is make a few adjustments. All LPS needs is the base SQL query, with the filename references swapped out for '[LOGFILEPATH]' and/or '[OUTFILEPATH]', as discussed in detail in the PDF manual included with the tool (you can access it via LPS | Help | Documentation).

Queries

Remember that a well-written, structured query makes all the difference between a successful query that returns the concise information you need and a subpar query which taxes your system, returns much more information than you actually need and in some cases crashes the application. The art of creating great SQL/Log Parser queries is outside the scope of this post; however, all of the queries included with LPS have been written to achieve the most concise results while returning the fewest records. Knowing what you want and how to get it with the least number of rows returned is the key!

Batch Jobs and Multithreading

You'll find that LPS in combination with Log Parser 2.2 is a very powerful tool. However, if all you could do was run a single query at a time and wait for the results, you probably wouldn't be making nearly as much progress as you could be. To address this, LPS contains both batch jobs and multithreaded queries. A batch job is simply a collection of predefined queries that can all be executed with the press of a single button. From within the Batch Manager you can remove any single query, or all queries, as well as execute them. You can also execute them by clicking the Run Multiple Queries button or the Execute button in the Batch Manager. Upon execution, LPS will prepare and execute each query in the batch. By default LPS will send ALL queries to Log Parser 2.2 as soon as each is prepared. This is where multithreading works in our favor. For example, if we have 50 queries set up as a batch job and execute the job, we'll have 50 threads in the background all working with Log Parser simultaneously, leaving the user free to work with other queries. As each job finishes, the results are passed back to the grid or the CSV output based on the query type. Even in this scenario you can continue to work with other queries, search, modify and execute. As each query completes, its thread is retired and its resources freed. These threads are managed very efficiently in the background so there should be no issue running multiple queries at once. Now, what if we did not want the queries in the batch to run concurrently, for performance or other reasons? This functionality is already built into LPS's options. Just make the change in LPS | Options | Preferences by checking the 'Process Batch Queries in Sequence' checkbox. When checked, the first query in the batch is executed and the next query will not begin until the first one is complete. This process will continue until the last query in the batch has been executed.

Automation

In conjunction with batch jobs, automation allows unattended, scheduled execution of batch jobs.
For example, we can create a scheduled task that will automatically run a chosen batch job, which also operates on a separate set of custom folders. This process requires two components: a folder list file (.FLD) and a batch list file (.XML). We create these ahead of time from within LPS. For more details on how to do that, please refer to the manual.

Charts

Many queries that return data to the Result Grid can be charted using the built-in charting feature. The basic requirements for charts are the same as for Log Parser 2.2:

1. The first column in the grid may be any data type (string, number, etc.).
2. The second column must be some type of number (Integer, Double, Decimal); strings are not allowed.

Keep the above requirements in mind when creating your own queries so that you consciously write the query to include a number for column two. To generate a chart, click the chart button after a query has completed. For #2 above, even if you forgot to do so, you can drag any number column and drop it into the second column after the fact. This way, if you have multiple number columns, you can simply drag the one that you're interested in into the second column and generate different charts from the same data. Again, for more details on the charting feature, please refer to the manual.

Keyboard Shortcuts/Commands

There are multiple keyboard shortcuts built into LPS. You can view the list anytime while using LPS by clicking LPS | Help | Keyboard Shortcuts. The currently included shortcuts are as follows:

CTRL+N - Start a new query.
CTRL+S - Save active query in library or query tab depending on which has focus.
CTRL+Q - Open library window.
CTRL+B - Add selected query in library to batch.
ALT+B - Open Batch Manager.
CTRL+B - Add the selected queries to batch.
CTRL+D - Duplicates the current active query to a new tab.
CTRL+ALT+E - Open the error log if one exists.
CTRL+E - Export current selected query results to CSV.
ALT+F - Add selected query in library to the favorites list.
CTRL+ALT+L - Open the raw Library in the first available text editor.
CTRL+F5 - Reload the Library from disk.
F5 - Execute active query.
F2 - Edit name/description of currently selected query in the Library.
F3 - Display the list of IIS fields.

Supported Input and Output types

Log Parser 2.2 has the ability to query multiple types of logs. Since LPS is a work in progress, only the most used types are currently available. Additional input and output types will be added when possible in upcoming versions or updates.

Supported Input Types: Full support for W3SVC/IIS, CSV, HTTP Error and basic support for all built-in Log Parser 2.2 input formats. In addition, some custom-written LPS formats such as Microsoft Exchange specific formats that are not available with the default Log Parser 2.2 install.

Supported Output Types: CSV and TXT are the currently supported output file types.

Log Parser Studio - Quick Start Guide

Want to skip all the details & just run some queries right now? Start here. The very first thing Log Parser Studio needs to know is where the log files are, and the default location where you would like the results of any queries that export to CSV files to be saved.

1. Set up your default CSV output path:
a. Go to LPS | Options | Preferences | Default Output Path.
b. Browse to and select the folder you would like to use for exported results.
c. Click Apply.
d. Any queries that export CSV files will now be saved in this folder.
NOTE: If you forget to set this path before you start, the CSV files will be saved in %AppData%\Microsoft\Log Parser Studio by default, but it is recommended that you move this to another location.

2. Tell LPS where the log files are by opening the Log File Manager. If you try to run a query before completing this step, LPS will prompt and ask you to set the log path. Upon clicking OK on that prompt, you are presented with the Log File Manager. Click Add Folder to add a folder, or Add File to add a single file or multiple files. When adding a folder you still must select at least one file so LPS will know which type of log we are working with. When doing so, LPS will automatically turn this into a wildcard (*.xxx), indicating that all matching logs in the folder will be searched. You can easily tell which folder or files are currently being searched by examining the status bar at the bottom-right of Log Parser Studio. To see the full path, roll your mouse over the status bar.

NOTE: LPS and Log Parser handle multiple types of logs and objects that can be queried. It is important to remember that the type of log you are querying must match the query you are performing. In other words, when running a query that expects IIS logs, only IIS logs should be selected in the File Manager. Failure to do this (it's easy to forget) will result in errors or unexpected behavior when running the query.

3. Choose a query from the library and run it:
a. Click the Library tab if it isn't already selected.
b. Choose a query in the list and double-click it. This will open the query in its own tab.
c. Click the Run Single Query button to execute the query.

The query execution will begin in the background. Once the query has completed, there are two possible output targets: the result grid in the top half of the query tab, or a CSV file. Some queries return to the grid, while other, more memory-intensive queries are saved to CSV. As a general rule, queries that may return very large result sets are probably best served going to a CSV file for further processing in Excel. Once you have the results, there are many features for working with those results. For more details, please refer to the manual.

Have fun with Log Parser Studio! & always remember – there's a query for that!

Kary Wall
Escalation Engineer
Microsoft Exchange Support

Demystifying Hybrid Free/Busy: Finding errors and troubleshooting
EDIT 9/19/2023: This blog post has received a significant update.

In this second part of the Demystifying Hybrid Free/Busy series, we will cover troubleshooting of Hybrid Free/Busy scenarios – more specifically, how and where to find the actual error that will indicate where the problem is. Before venturing forth, please make sure that you have seen Part 1 of this demystifying series! Here is the graphic we posted in the previous post; use this as a reference for the users that we will be referring to when troubleshooting:

Do you really have a Free/Busy issue?

Usually, a user creates a new meeting in Outlook on the web (OWA) or Outlook, clicks on Scheduling Assistant, adds his or her colleague to the meeting, and tries to see when that user is available to meet. If they see the hash marks (\\\\\\\) instead of seeing whether the other user is free or busy, there is an issue. Here, we do seem to have a bunch of Free/Busy issues:

You can often see an error message by hovering over the hash marks; however, we usually find that the error is not very specific. Instead, we need to take slightly more advanced steps to diagnose the issue by checking things like the Remote Connectivity Analyzer tool, Fiddler, the F12 Network tab, Outlook logging or the SARA tool.

Where is the actual Free/Busy error message?

First, we need to understand in which direction we have a lookup problem. Please see Part 1 for a discussion of directionality.

Sources of logs:
Remote Connectivity Analyzer tool
Outlook logging
SARA tool
OWA F12 Network Tab
Fiddler – Outlook and OWA

These steps are important for us to see the relevant error message for Free/Busy issues. Once we know the error message, it's much easier to resolve the issue.

Remote Connectivity Analyzer

A few things to know about this tool:

Source Mailbox: the user that will be requesting the free/busy information. This will be the user that is logged in to Outlook or OWA and cannot see free/busy for other people. This is also called the Requester or Organizer of the meeting.

Authentication type for Source Mailbox: you will choose Modern Authentication.

Source Mailbox credentials: you will need to authenticate with the credentials of the Source Mailbox. The tool doesn't support Basic Authentication for Exchange Online mailboxes because this is disabled in Exchange Online. While it is still used by Exchange on-premises environments, currently, if you select Basic Authentication for the on-premises source mailbox, the test will fail before doing the actual Free/Busy process. It works if your Exchange on-premises organization has enabled Modern Authentication for client protocols. In conclusion, the Source Mailbox login needs to use OAuth for this test to work, regardless of where it is hosted.

Target Mailbox: the user that the Source Mailbox is requesting free/busy for. This is the Attendee of the meeting.

The tool simulates Outlook's way of querying Free/Busy. If you have a free/busy issue that is only happening in OWA but not in Outlook Desktop, then this test will likely not catch the error.

To be able to perform the test, you must allow connectivity for the Remote Connectivity Analyzer tool's IP addresses. These are part of the "Microsoft 365 Common and Office Online" ranges published in the Office 365 URLs and IP address ranges. The IPs for the Remote Connectivity Analyzer are part of the range specified as "Allow Required" (currently ID 46 in the documentation). Check https://testconnectivity.microsoft.com/Pages/ChangeList.htm for any future changes.
Note that you can only insert one Target Mailbox email address per test. If you have errors for multiple target mailboxes, run multiple tests, one for each user.

Connectivity Test Results: With the 3 buttons in the top right corner, you can expand all the results and save them as XML or HTML files. Usually, support people appreciate these files a lot, so please do upload them in your support case workspace. When you expand the results, there are 3 important checks:

Determining where the source mailbox is hosted (cloud or not). If the mailbox is hosted in the cloud, you will see something like this: IsOffice365Mailbox=True. The mailbox is hosted in Office 365. <ASURL>https://outlook.office365.com/EWS/Exchange.asmx</ASURL> If the mailbox is not hosted in the cloud, you will see something like this: IsOffice365Mailbox=False. The mailbox isn't hosted in Office 365.

Determining where the target mailbox is hosted (cloud or not). Test Autodiscover for the Target Mailbox SMTP address to retrieve the external EWS URL. Quick tip: on your side, in Windows PowerShell, you can also use the following commands to see the external EWS URL of a user based on the Autodiscover call to Office 365; replace what is in Email= with your actual email addresses.

Invoke-RestMethod -Uri "https://outlook.office365.com/autodiscover/autodiscover.json?Email=CLOUDUSER@CONTOSO.COM&Protocol=EWS"
Invoke-RestMethod -Uri "https://outlook.office365.com/autodiscover/autodiscover.json?Email=ONPREMUSER@CONTOSO.COM&Protocol=EWS"

Performing the Free/Busy Lookup. This will be Success or Failed. If it failed, look under Additional details to see the error message. If it succeeded, be happy – maybe the issue is resolved – or not so happy, as it might be an intermittent issue (which is harder to troubleshoot) or a local issue only (happening in your specific network, machine or Outlook version). In my case, I see that I have a NoFreeBusyAccessException, given by the Exchange on-premises server HHE1601.

OUTLOOK

Note: Modern Outlook clients log Free/Busy information in Outlook ETL files, and you won't be able to see the Free/Busy error in plain text here. This was possible with Outlook 2010 logs, back in the old days. But this method is still useful, because you can provide the Outlook ETL log containing the error to Microsoft Support to parse it for you and help you fix it. If you want to see the error for yourself, check the Fiddler method.

For the Outlook F/B error, we need to first enable Outlook logging, and after this we will need to reproduce the issue (\\\\\\). After the repro, we will collect the Outlook logs. Steps:

Enable Outlook logging: Follow this KB article and check the "Enable troubleshooting logging (this requires restarting Outlook)" option. Restart Outlook.

Reproduce the issue for the non-working free/busy direction. Suppose the Free/Busy direction not working is cloud to on-premises; you will be logged on as a cloud user (Source Mailbox), go to the Calendar tab, New Meeting, Scheduling Assistant, and add some on-premises users to a meeting until you see the hash marks (instead of Free/Busy information). You do not need to save or send a meeting request.

Collect the Outlook-#####.etl log from the %temp%\Outlook Logging folder (reference here). You would need to send the ETL file to Microsoft Support to get it analyzed, as we parse this log with an internal tool. You might not know this, but Hybrid free/busy support cases are free of charge!
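If you want to grab those ETL files quickly before opening a case, here is a minimal PowerShell sketch; the destination folder is an assumption, so adjust it as needed:

# Minimal sketch: copy the most recent Outlook ETL logs from the default
# %temp%\Outlook Logging location to a folder you can attach to a support case.
$source      = Join-Path $env:TEMP 'Outlook Logging'
$destination = 'C:\Temp\OutlookLogsForSupport'   # assumption: any writable folder works

New-Item -ItemType Directory -Path $destination -Force | Out-Null

Get-ChildItem -Path $source -Filter 'Outlook-*.etl' |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 5 |
    Copy-Item -Destination $destination

# Confirm what was collected
Get-ChildItem -Path $destination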
Of course, you can still use the other methods (Fiddler for Outlook/OWA, or the browser for OWA) to see the Free/Busy error yourself; however, we (Support) might additionally ask you to get this log as well for a further dive into the Free/Busy errors.

SARA

I would also like to mention that there is a Free/Busy troubleshooter in Beta version, incorporated into the SARA tool (Microsoft Support and Recovery Assistant for Office 365), which you can download from here: https://diagnostics.outlook.com/#/

Open SARA and select the Outlook scenario, click Next, then select "I'm having problems with my calendar", input the email address and password of the source mailbox (the cloud mailbox if the direction not working is cloud > on-premises) and then select "I can't see when someone is free or busy". Due to the underlying complexity of it all, this is not a completely reliable way of determining the cause of free/busy issues in Hybrid Deployments, but it is a good start when troubleshooting. This F/B test from SARA covers mostly cloud to cloud scenarios, but I recommend it here because it does connectivity and additional checks on the tenant, licensing and Autodiscover. And sometimes it shows the underlying Free/Busy error message. Here are some screenshots of the SARA process:

After the Office 365 readiness checks, the tool will ask you for the email address of the Target Mailbox:

In the failed results, expand the Support Message and User Message:

OWA / Outlook on the web F12 Network Tab

Cloud OWA F12 Network tab: You need to log in to OWA as the source mailbox, hit F12 (Developer Tools in the browser) and select the Network tab. You would then look up Free/Busy for the target mailbox (reproduce the issue). If the source mailbox is hosted in the cloud, to look for the F/B call here, you can find the Search icon and type "GetSchedule", or find the Filter icon and type "graphql", then look at the Response or Preview tab to see the error message by expanding GetSchedule until you reach the error.

If the Source Mailbox is hosted in Exchange On-Premises, you would look for GetUserAvailabilityInternal:

Fiddler – Outlook or OWA

You would need to download and install the Fiddler tool from the internet, enable HTTPS decryption in Fiddler and then reproduce the Free/Busy issue in Outlook or OWA or both.

Fiddler – Exchange Online Source Mailbox logged in Outlook desktop: Look for "GetUserAvailability" calls; on the right side, you have the Request on the top and the Response on the bottom. Switch to the XML tabs for a nicer view. In the Request you will see the attendees' email addresses and, in the Response, you will have a ResponseMessage with ResponseClass=Error or ResponseClass=Success.

Fiddler – Exchange Online Source Mailbox logged in OWA: In Fiddler, you can check in the Request pane, under the Raw tab, the ClientRequestID; you can, for example, search for this specific value in your on-premises Exchange server logs: IIS W3SVC2 logs, HTTPProxy EWS logs and EWS logs (more information on these logs, their locations and extracts, later in the article).
Example here from a lab:

ClientRequestID: {72741DFF-A6AC-402B-991B-C6B5D56B1422}
Date: Mon, 11 Sep 2023 19:01:25 GMT

If you are a fan of SQL, you can use a tool like Log Parser Studio and search through these logs. For example, here is a query on the ClientRequestID from earlier:

SELECT DateTime, ClientRequestID, RequestID, UserAgent, SoapAction, ErrorCode, GenericErrors, GenericInfo, FileName FROM '[LOGFILEPATH]' WHERE ClientRequestID LIKE '%{72741DFF-A6AC-402B-991B-C6B5D56B1422}%'

You can also use the findstr.exe utility to look for the client request ID or other keywords like the requester's email address or "CrossForest". Example of a command:

findstr.exe /I /S "{72741DFF-A6AC-402B-991B-C6B5D56B1422}" *.log

When troubleshooting Free/Busy issues, the following on-premises logs can be very useful, especially for the Cloud to On-Premises Free/Busy direction.

IIS logs, Default Web Site (DWS)
Path: %SystemDrive%\inetpub\logs\LogFiles\W3SVC1
Path example: C:\inetpub\logs\LogFiles\W3SVC1

Extract of Autodiscover and EWS log entries with IOC enabled in IIS W3SVC1 logs:

Autodiscover – OAUTH (autodiscover.svc without /WSSecurity)
2016-01-06 17:45:27 10.0.0.5 POST /autodiscover/autodiscover.svc &CorrelationID=<empty>;&ClientId=QNFNHKEEKYENCJITQQ&cafeReqId=7972d1fc-a9d9-44c6-8851-480d3601cbd7; 443 S2S~00000002-0000-0ff1-ce00-000000000000 132.245.65.28 ASAutoDiscover/CrossForest/EmailDomain//15.01.0361.007 200 0 0 109

EWS – OAUTH (exchange.asmx without /WSSecurity)
2016-01-06 17:45:27 10.0.0.5 POST /ews/exchange.asmx &CorrelationID=<empty>;&ClientId=WSIVGUUAUWWRFACJBWDA&cafeReqId=6ce8864c-74a0-4ad2-a3dc-7b69e0415403; 443 <unverified>actas1(sip:joe@contoso.com|smtp:joe@contoso.com|upn:joe@contoso.com) 132.245.65.28 ASProxy/CrossForest/EmailDomain//15.01.0361.007 200 0 0 703

Example of an EWS entry with an Organization Relationship enabled in IIS W3SVC1 logs: EWS – DAUTH (exchange.asmx with /WSSecurity)
2016-01-06 18:04:41 10.0.0.5 POST /ews/exchange.asmx/WSSecurity &CorrelationID=<empty>;&ClientId=VOMGJKAWURSVKOXQLBVA&cafeReqId=18fd3a2e-7b1c-4828-8943-6b20912e2e44; 443 - 132.245.65.28 ASProxy/CrossForest/EmailDomain//15.01.0361.007 200 0 0 296

IIS logs, Exchange Back End (BE)
Path: %SystemDrive%\inetpub\logs\LogFiles\W3SVC2
Path example: C:\inetpub\logs\LogFiles\W3SVC2

Example of an EWS entry with an Organization Relationship enabled (DAUTH) in IIS W3SVC2 logs:
2016-01-06 18:04:41 fe80::f17f:beef:a5e3:7d3c%25 POST /ews/exchange.asmx/WSSecurity - 444 - fe80::f17f:beef:a5e3:7d3c%25 ASProxy/CrossForest/EmailDomain//15.01.0361.007 200 0 0 93

HTTPProxy logs for Autodiscover
Path: %ExchangeInstallPath%Logging\HttpProxy\Autodiscover
Path example: C:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\Autodiscover

Example of an Autodiscover entry with an Organization Relationship enabled (DAUTH):
2016-01-06T18:05:20.552Z,bcdfbed5-f11f-4250-a616-e38cb475cd3f,15,0,1104,2,,Autodiscover,autodiscover.contoso.com,/autodiscover/autodiscover.svc /WSSecurity,,,false,,contoso.com,Smtp~joe@contoso.com,ASAutoDiscover/CrossForest/EmailDomain/ /15.01.0361.007,132.245.65.28,exch-2013,200,200,,POST,Proxy,exch-2013.contoso.com,15.00.1104.000,IntraForest,AnchorMailboxHeader-SMTP,[…],BeginRequest=2016-01-06T18:05:20.192Z;CorrelationID=<empty>;ProxyState-Run=None;FEAuth=BEVersion-1941996624;NewConnection=fe80::f17f:beef:a5e3:7d3c%25&0;

HTTPProxy logs for EWS
Path: %ExchangeInstallPath%Logging\HttpProxy\Ews
Path example: C:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\Ews

Example of an EWS entry with an Organization Relationship enabled (DAUTH):
2016-01-06T18:04:41.490Z,4757ab2c-8ccc-4d1a-ae39-0780ecc8eabb,15,0,1104,2,{02CD833F-18AB-413A-83CB-0E86F4DA5362},Ews,mail.contoso.com,/ews/exchange.asmx/WSSecurity,,,false,,contoso.com, Smtp~joe@contoso.com,ASProxy/CrossForest/EmailDomain//15.01.0361.007,132.245.65.28,exch-2013,200,200,,POST,Proxy,exch-2013.contoso.com,15.00.1104.000,IntraForest,AnchorMailboxHeader-SMTP,[…],BeginRequest=2016-01-06T18:04:41.380Z;

EWS logs
Path: %ExchangeInstallPath%Logging\Ews
Path example: C:\Program Files\Microsoft\Exchange Server\V15\Logging\Ews

Example of an EWS entry with an Organization Relationship enabled (DAUTH):
2016-01-06T18:04:41.490Z,4757ab2c-8ccc-4d1a-ae39-0780ecc8eabb,15,0,1104,2,{02CD833F-18AB-413A-83CB-0E86F4DA5362}, External,true,jane@contoso.mail.onmicrosoft.com,, ASProxy/CrossForest/EmailDomain//15.01.0361.007,Target=None;Req=Exchange2012/Exchange2013; ,132.245.65.28,exch-2013,exch-2013.contoso.com,GetUserAvailability,200,12150,,,,,,ebd34d71ac7342c19d947d881db4ad55,f866c73e-6c91-475e-bdec-0428bdeaa423,PrimaryServer; Requester=jane@contoso.mail.onmicrosoft.com; Failures=0

Event Viewer Application logs on the Exchange server (references here and here). Example of Event ID 4002 for MSExchange Availability:

Log Name: Application
Source: MSExchange Availability
Event ID: 4002
Task Category: Availability Service
Level: Error
Description: Process 4568: ProxyWebRequest CrossSite from S-1-5-21-391720751-1508397712-925700815-508779 to https://hybrid.contoso.com/ews/exchange.asmx failed. Caller SIDs: NetworkCredentials. The exception returned is Microsoft.Exchange.InfoWorker.Common.Availability.ProxyWebRequestProcessingException: System.Web.Services.Protocols.SoapException: You have exceeded the available concurrent connections for your account. Try again once your other requests have completed. at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)

IIS tracing for the error code in the IIS logs (reference here).
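If you would rather pull these Availability errors with PowerShell than browse the Event Viewer UI, here is a minimal sketch; the provider name and log come from the example event above, and the event count is arbitrary:

# Minimal sketch: list recent error events from the MSExchange Availability
# provider in the Application log on the local Exchange server.
Get-WinEvent -FilterHashtable @{
    LogName      = 'Application'
    ProviderName = 'MSExchange Availability'
    Level        = 2          # 2 = Error
} -MaxEvents 25 |
    Select-Object TimeCreated, Id, Message |
    Format-List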
"The remote user mailbox must specify the the explicit local mailbox in the header" "An error occurred when verifying security for the message" "Unable to connect to the remote server" “Autodiscover failed for email address <> with error ‘The request failed with HTTP status 404: Not Found’ ” “The request failed with HTTP status 401: Unauthorized - The user specified by the user-context in the token is ambiguous” LID: 43532 "An existing connection was forcibly closed by the remote host - An unexpected error occurred on a receive " "An existing connection was forcibly closed by the remote host - An unexpected error occurred on a send ” "Configuration information for forest/domain could not be found in Active Directory" "Proxy web request failed.,inner exception: The request failed with HTTP status 401: Unauthorized." "The response from the Autodiscover service at 'https://autodiscover/autodiscover.svc/WSSecurity' failed due to an error in user setting 'ExternalEwsUrl'. Error message: InvalidUser." LID: 33676 “The caller does not have access to free/busy data" LID: 47652 LID: 44348 “The request failed with HTTP status 403: Forbidden (The server denied the specified Uniform Resource Locator (URL). “ LID: 43532 “Unable to resolve e-mail address user@notes.domain.com to an Active Directory object” LID: 57660 “An error occurred when processing the security tokens in the message.” LID: 59916 “The cross-organization request for mailbox yyy@contoso.com is not allowed because the requester is from a different organization” LID: 39660 “The request failed with HTTP status 401: Unauthorized - Microsoft.Exchange.Security.OAuth.OAuth TokenRequestFailedException: Missing signing certificate “ “The application is missing a linked account for RBAC roles, or the linked account has no RBAC role assignments, or the calling users account is logon disabled” “The entered and stored passwords do not match“ “The password has to be changed.” “The password for the account has expired” or “Provision is needed before federated account can be logged in” “The request timed out” “The specified member name is either invalid or empty” “The result set contains too many calendar entries” LID: 54796 “The request failed with HTTP status 401: Unauthorized - The token has an invalid signature.” “The request failed with HTTP status 401: Unauthorized - Client assertion contains an invalid signature. [Reason - The key was not found., Thumbprint of key used by client: '<>’ “ “Proxy web request failed., inner exception: Response is not well-formed XML “ “Failed to communicate with https://login.microsoftonline.com/extSTS.srf., inner exception: Unable to connect to the remote server” “Autodiscover failed for E-Mail Address <> with error System.Net.WebException: The remote name could not be resolved: '<>'” “Failed to get ASURL. Error 8004010F” “Proxy web request failed. , inner exception: System.Net.WebException: The request failed with the error message: -- <head><title>Object moved” “The request was aborted: Could not create SSL/TLS secure channel.” “The user specified by the user-context in the token does not exist.";error_category="invalid_user“ “The hostname component of the audience claim value 'https://<>’ is invalid";error_category="invalid_resource“ “Proxy web request failed. , inner exception: System.Net.WebException: The request failed with HTTP status 503: Service Unavailable” “Proxy web request failed. 
Thanks to all who contributed to this content: Ray Fong, Nino Bilic, Tim Heeney, Greg Taylor and Brian Day.

Mirela Buruiana

/Mixed-ing it up: Multipart/Mixed Messages and You
Greetings, Exchange Administrators! In today's edition of "and You", we'll be covering Exchange's handling of messages generated by iPhones, iPads, and Macintosh Mail clients. Specifically, we're going to cover what /mixed content body messages are, how Exchange handles them now, and how we will handle them down the line. Before we can dive into the content, you first have to understand how internet messages are structured, and that means learning a little bit about how Exchange stores messages, and quite a bit about MIME. Not the mute freaks – Multipurpose Internet Mail Extensions, also known as "The Mime not everyone hates."

Exchange, Messages, and LOL Cats

Exchange stores messages as a series of properties, where each property has a name and a value. For instance, PR_SUBJECT is the subject property, and "Test Message" is the value. Messages in Exchange have one body with multiple forms of representing it (HTML, Rich Text and Plain text). The HTML view of the body looks like so:

This is a cat. Do Not Want

The RTF (rich text format) view of the same body is also capable of containing both formatting and images, and so would look like:

This is a cat. <Mentally insert picture of cat here – it's only eight lines up, so honestly, you can do it.> Do Not Want

The plain text version of the body is composed of plain text, a fact that should be obvious based on its name. A good way of simulating plain text body generation is to paste your content into Notepad. If it survives the paste into Notepad, it will be part of the plain text. Sadly, the cat picture does not:

This is a cat. Do Not Want

Messages in Exchange can have multiple attachments. This is good, because even if Exchange is forced to generate a plain text version of the body, the cat picture can come along so that, should you decide to click it, you can view a picture of a cat which does not want something.

Key points for non-technical and allergic-to-cats people: For Exchange, messages can have one body and can have many attachments.

And in this corner, MIME

MIME is a plain text format for email messages. MIME messages are divided into "parts", each of which might have content or even child parts, like a series of Russian dolls. Each MIME part (even the root part, or the message itself) has a header called Content-Type, which describes the type of content in the part. The content type is divided into a major part and a minor part, separated by a slash. For example, consider the content types multipart/alternative or multipart/mixed. Every part has a type, even going so far as to define a Miranda type for parts which can't afford to assign one (text/plain). Sometimes the types are quite helpful at understanding the meaning of the content, sometimes not so much. For instance, Content-Type: Application/PDF means that it is Adobe's Portable Document Format. On the other hand, Content-Type: application/octet-stream means "I can't tell what this is. Here's a binary blob for your troubles. Hopefully you can figure out what to do with it."

Multipart/ is a general type, meaning that this MIME part may contain many child MIME parts. The sub type of the part (the part after the slash) tells us more about the child parts, and in this case, how they are related to each other. Now we will take a closer look at some of the multipart sub types to see where things can go wrong.

Relativity

First off we're going to look at Multipart/related (also called a "related" body part).
Related, in this case, means that the sub MIME parts are actually related to each other – in other words, that given the following MIME structure:

1. Multipart/related
1.1. Text/HTML
1.2. Image/Gif

1.1 and 1.2 are not meant to be interpreted as "separate" parts – they have meaning as one. In this case the HTML contains image links to the 1.2 image (our friend the cat).

Key point for non-technical and allergic-to-cats people: Multipart/Related means "We belong together."

Alternatives

Multipart/alternative means that each child of this part is a different representation of the same data. They are "alternative" versions of each other. The intention is that a client picks the type that it can best display and displays that one. So given this MIME structure:

1. Multipart/alternative
1.1. Text/Plain
1.2. Text/Html
1.3. Application/Pdf

The client doesn't have to show the text/plain part as the body. No, if the client knows how to display a text/html body, it is free to do that. So multipart/alternative is a way of grouping a number of different formats of the same data together and letting the client decide which one it shows best – it's like a kids' beauty pageant, except that instead of the ladies from the rotary club, you have the email client as the judge.

Key point for non-technical and allergic-to-cats people: Multipart/Alternative means "Pick the one you like best."

Mixed Up

Multipart/Mixed, according to RFC 1521, means that the parts are completely independent of each other (not related to each other) but that their order matters. What is the expected behavior? "Clients usually display the parts one after the other." This, however, brings into play another parameter on the MIME part – Content-Disposition. This parameter has a couple of normal values – Inline and Attachment. Attachment is easy to understand – in the context of Exchange, it means "Show me in the well, that I may be blocked by Outlook from being saved or opened." Inline, on the other hand, we handle differently. Remember that whole "messages have one body, and maybe many attachments" thing? Keep that in mind while we look at how our cat message looks with a /mixed body:

1. Multipart/Mixed
1.1. Text/Html - Inline
1.2. Image/Gif - Inline
1.3. Text/Html - Inline

The intention among clients that generate this is that the receiving client should display the text/html part first, then glob the image onto the end of it, and then the rest of the body. There's no limit to multipart/madness; you can combine them (and dispositions) into nigh endless combinations. For example:

2. Multipart/Mixed
2.1. Text/Html - Inline
2.2. Image/Gif - Inline
2.3. Text/Html - Inline
2.4. Text/Plain - Inline

Means "Show 2.1, followed by the image from 2.2, then the HTML from 2.3 and then the text from 2.4. Do it NOW."

3. Multipart/Mixed
3.1. Text/HTML - Inline
3.2. Image/Gif - Attachment
3.3. Text/Html - Inline
3.4. Text/Plain - Attachment

Means "Show the text from 3.1, NOT the attachment from 3.2 unless someone does something, the text from 3.3, but NOT the text in 3.4 (unless they do something like click an attachment in the well)."

The problem, of course, comes from Exchange's original definition of a message – one body (with multiple representations), maybe many attachments.

Key point for non-technical and allergic-to-cats people: Multipart/Mixed can mean "Maybe show all of these, in the order listed."
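To make that concrete, here is a hedged sketch of roughly what the raw source of the first /mixed structure above might look like; the boundary string, charset and base64 data are made up for illustration:

MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="example-boundary"

--example-boundary
Content-Type: text/html; charset="utf-8"
Content-Disposition: inline

<html><body><p>This is a cat.</p></body></html>
--example-boundary
Content-Type: image/gif; name="cat.gif"
Content-Disposition: inline
Content-Transfer-Encoding: base64

R0lGODlh...base64 image data...
--example-boundary
Content-Type: text/html; charset="utf-8"
Content-Disposition: inline

<html><body><p>Do Not Want</p></body></html>
--example-boundary--

A client that understands the sender's intent stitches those three inline parts back together into one body; a client that doesn't treats the trailing parts as attachments, which is the behavior described later in this post.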
Combo #5

MIME types are not exclusive. I can combine Multipart/Mixed, Multipart/Alternative and Multipart/Related into a single message, and it actually has meaning:

1. Multipart/Mixed
1.1. Multipart/Alternative
1.1.1. Text/Plain
1.1.2. Multipart/Related
1.1.2.1. Text/HTML
1.1.2.2. Image/Gif - inline
1.2. Image/Jpeg - attachment

Yes, this structure is legal. And it is meaningful. To understand this you unwrap it in order, one level at a time.

Multipart/Mixed – this message is different parts, put together, and the order matters.
Multipart/Alternative – I have two children; pick the prettier one and show it off.
Text/Plain – I am a blob of plain text.
Multipart/Related – My children are bits and pieces of each other.
Text/HTML – Pretty, pretty text.
Image/Gif – I am a picture of a cat, referenced by pretty, pretty text, I hope.
Image/Jpeg – I'm an attachment (in case you couldn't see the picture of the cat above).

Exchange has always dealt well with multipart/alternative bodies, picking the one which we can best support and promoting it. We deal well with multipart/related as well – not every attachment on a message is visible – attachments have a disposition, which is either not set, inline or attachment. If neither is set, the result is indeterminate – the client does what the client does (and good luck establishing an algorithm that works for everyone). On the other hand, messages with /mixed content bodies where multiple parts are inline, those do not work so well.

Blender'd Messages

In the case of /mixed bodies, there are multiple MIME parts which are meant to combine together like Japanese robots to form Voltron, or an image of a cat and some text. Today, if you receive such a message, we do the best we can with it (which is pretty dissatisfying): We pick the first "body type part" – aka a text/<something> part – and that one becomes the body as seen by Outlook. All of the rest of them, those are attachments, and we shove them into the attachment well. Oh, sure, they might have a disposition of inline, but because the most common usage of inline is in HTML, we actually check, and anything that isn't referenced by a link from the body, we won't be fooled by. Into the well it goes. From an Outlook perspective you get the first part of the message, then two attachments, one of which is the picture and the other is the trailing text. You are welcome to open the attachments in order and combine them in your head to form a message, but once you get more than a couple of parts it isn't reasonable. Exchange has never supported "proper" display of /mixed body messages in OWA or Outlook, until now.

Blended Messages

Starting with Exchange 2007 Service Pack 3 Rollup 3 (E12SP3RU3) and Exchange 2010 Service Pack 1 Rollup 4 (E14SP1RU4), there is a set of changes to how Exchange handles /mixed body messages. We think that in general your users will enjoy the less mangled nature of these messages, but if I've learned anything from fourteen years of working on Exchange, it's that "different == angry". People get used to our behavior, so even when it's wrong (or incomplete) they expect the same behavior, and come to rely on it. Consider this your fair warning that this change is coming, with some details on how it works, where it works, and where it doesn't. We're adding support for Exchange to combine multiple body parts into a single, aggregate body. The short of this is that broken-up messages should show up combined together, and readable, in OWA and Outlook. There are, however, limitations to this.
First off, right now this will only work for messages generated by Apple iPods, iPads, or Apple Mail clients. This isn't an accident – we developed the rules used to combine these bodies using test data from our counterparts at Apple, and while we handle messages from them well, the internet is wide and wondrous, and anyone can write messages with multipart/mixed bodies. For now this is restricted to clients we have good test data on, good rules for and a good way to identify.

To create the aggregate body, we check each MIME part in the /mixed body. If a MIME part has a disposition of Attachment, it goes to the attachment well. I am not going to argue with a client that specifies that it is an attachment. If a body part has a disposition of inline, or no disposition set, and it is a plain text or HTML body part, we add it to the aggregate body. If the body part is an image which can be displayed inline, we add a link to it in the aggregate body. If the body part is not text, or is an image we can't display inline, it goes to the well.

How do I know if this breaks me?

If you're the owner of an application which sends in /mixed content messages with MIME parts that you rely on Exchange treating as attachments in the well, and you send them from an Apple platform, and you haven't been setting a content disposition, and the parts are text or image types (and you are setting content type), then you need to add a Content-Disposition to the MIME parts you want to be attachments, and set them to Attachment. If you're a normal consumer wondering why messages with images or signatures got split into pieces, you don't need to do anything.

Conclusions

Today we've covered different MIME structures, different representations of bodies, dispositions, types, and a host of other things, but the hard-boiled summary is that mail between Apple clients and Exchange clients will be handled better. The best case scenario isn't that your users call up and say "Wow, I'm really impressed that the messages I got from John on his Mac aren't chopped up into a dozen pieces anymore." The best case scenario is that things just work. No one has to call anyone, because you neither notice nor care what platform the message came from, or what format it was sent in. You can see your email, your signatures, and yes, your LOL cats. Enjoy!

Epilogue: How things went wrong

Already I've read (and responded to) reports on the Exchange Update Forums that this new code introduced a new problem: PDF attachments from Mac clients which declare their disposition to be Inline aren't visible in the attachment well. Visible in Outlook Web Access, yes, but MAPI, no. So how on earth did this happen? And how did we miss this bug? ("Do you do any quality control?" as one person asked.) The answer to this is yes, we do. But rather than just say that, let's look at the journey we took to get this fix included in a rollup, what the problem is, how we missed it, and what we're doing about it.

This fix begins with a conference call between me, a few teammates, and our counterparts at Apple, in which someone asked "Well, can you guys do anything to actually support this style of message body?" I spent the next few days researching what it would take to build a synthetic body out of the parts and eventually concluded that it would be possible. With that we began the discussions of "Why now?" and "What is it going to take?" That phase took quite a while.
The core problem was that we were well aware that producing a new type of body was going to require a large investment in testing. Eventually we concluded that we could and would do it. And we did. We wrote the code that supports this type of body, and we wrote the code that tests it. We have a huge number of tests that test various types and formats of bodies, and we added new ones to test the new body. Line for line, there's more code to test this than there is to support it. So the fix was done, the code was tested, but we still weren't ready to check in the code. Instead, we took a much more cautious approach. The fix was available as an interim update, and that interim update was tightly controlled since we wanted it to go to customers who would be putting it into daily use. After a month or so we loosened it up and the fix went out to four more customers, eventually being in play at around 12 customers for about three months. After three months of field deployment with positive results, we decided to schedule it for a rollup (which should've been RU2, but wound up being RU3).

But there's a problem. You see, Exchange 2007 and 2010 don't pay much attention to Content-Disposition, because for years inline has been a great way to make sure the attachment is essentially invisible (an inline attachment has its hidden property set to true, since it is displayed inline with body content). And the test code missed this case – it's an inline attachment which can't be displayed inline by Exchange, but in a message which contains multiple body parts which can be merged to build the synthetic body. Unfortunately neither our in-house nor field testing encountered this. The bug is that in processing these messages, we honor attachment disposition in a case where we should not. Any attachment which can't be displayed inline should never be allowed to become part of the synthetic body. We know of two types of attachments – TIFF and PDF – which can be displayed inline on Mac clients but not on Windows clients. The fix for these two also fixes any other types where the client might render inline but we can't.

How did we miss this? We went back over our test code and test data and said "It contains PDFs, and validates the attachment well status." It does indeed (and TIFFs as well). It also doesn't ever attempt to create a PDF attachment and render it inline in the body. That's the missed case which I certainly wish we had found before it ever hit any customers. So to resolve this we're creating an interim update, and we'll be rolling it into the main branch to prevent further incidents of this. Over time I expect that we'll expand the number of mailers we produce synthetic bodies for. For now, I think this experience has validated that we should keep it limited and expand support where we can closely test. I remain convinced that improving interop between Mac clients and Exchange is a good idea.

So there we have it: the problem, the fix, the problem with the fix, and the fix for the problem with the fix. We now return to your regularly scheduled blog.

Jason Nelson

Log Parser Studio 2.0 is now available
Since the initial release of Log Parser Studio (LPS) there have been over 30,000 downloads, and thousands of customers use the tool on a daily basis. In Exchange support, many of our engineers use the tool to solve real-world issues every day and in turn share it with our customers, empowering them to solve the same issues themselves moving forward. LPS is still an active work in progress; based on both engineer and customer feedback, many improvements have been made, with multiple features added during the last year. Below is a short list of new features:

Improved import/export functionality: For those who create their own queries this is a real time-saver. We can now import from multiple XML files simultaneously, choosing only the queries we wish to import from multiple query libraries or XML files.

Search Query Results: The existing feature allowing searching of queries in the library is now context aware, meaning that if you have a completed query in the query window, the search option searches that query. If you are in the library, it searches the library, and so on. This allows drilling down into existing query results without having to run a new query if all you want to do is narrow down existing result sets.

Input/Output Format Support: All LP 2.2 input and output formats have preliminary support in LPS. Each format has its own property window containing all known LP 2.2 settings, which can be modified to your liking.

Exchange Extensible Logging Support: Custom parser support was added for almost all Exchange logs. These are covered by the EEL and EELX log formats included in LPS, which cover Exchange logs from Exchange 2003 through Exchange 2013.

Query Logging: I can't tell you how many times I or another engineer spent lots of time creating the perfect query for a particular issue we were troubleshooting, forgot to save the query in the heat of the moment and lost all that work. No longer! We now have the capability to log every query that is executed to a text file (Query.log). What makes this so valuable is that if you ran it, you can retrieve it.

Queries: There are now over 170 queries in the library, including new sample queries for Exchange 2013.

PowerShell Export: You can now export any query as a standalone PowerShell script (see the sketch at the end of this post). The only requirement, of course, is that Log Parser 2.2 is installed on the machine you run it on; LPS itself is not required. There are some limitations, but you can essentially use LPS as a query editor/test bed for PowerShell scripts that run Log Parser queries for you!

Query Cancellation: The ability to submit a request to cancel a running query has been added, which will allow you to cancel a running query in many cases.

Keyboard Shortcuts: There are now 23 keyboard shortcuts. Be sure to check these out as they will save you lots of time. To display the shortcuts use CTRL+K or Help > Keyboard Shortcuts.

There are literally hundreds of improvements and features; far too many to list here, so be sure to check out our blog series with existing and upcoming tutorials, deep dives and more.
If you are installing LPS for the first time you'll surely want to review the getting started series:

Getting started with Log Parser Studio
Getting started with Log Parser Studio, part 2
Getting started with Log Parser Studio, part 3

If you are already familiar with LPS and are installing this latest version, you'll want to check out the upgrade blog post here: Log Parser Studio: upgrading from v1 to v2

Additional LPS articles can be found here: http://blogs.technet.com/b/karywa/

LPS doesn't require an install, so just extract it to the folder of your choice and run LPS.EXE. If you have the previous version of LPS and you have added your own custom queries to the library, be sure to export those queries as a backup before running the newest version. See the "Upgrading to LPS V2" blog post above when upgrading.

Kary Wall
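As promised above, here is a minimal, hedged sketch of the kind of standalone PowerShell script that runs a Log Parser 2.2 query outside of LPS. The Log Parser path, log path and sample query are illustrative assumptions, not the exact output of the export feature:

# Minimal sketch: run a Log Parser 2.2 query from PowerShell.
# Paths and the query below are assumptions; adjust for your environment.
$logParser = 'C:\Program Files (x86)\Log Parser 2.2\LogParser.exe'
$logPath   = 'C:\inetpub\logs\LogFiles\W3SVC1\*.log'

# Top 20 requested URLs from IIS W3C logs
$query = "SELECT TOP 20 cs-uri-stem AS Url, COUNT(*) AS Hits FROM '$logPath' GROUP BY cs-uri-stem ORDER BY COUNT(*) DESC"

# -i: input format (IIS W3C), -o: output format (CSV written to the console)
& $logParser $query -i:IISW3C -o:CSV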
Prevent archiving of items in a default folder in Exchange 2010

In Exchange 2010, you can use Retention Policies to manage message retention. Retention Policies consist of delete tags, i.e. retention tags with either the Delete and Allow Recovery or Permanently Delete action, or archive tags, i.e. retention tags with the Move To Archive action, which move items to the user's archive mailbox. Depending on how they're applied to mailbox items, retention tags are categorized as the following three types:

Default Policy Tags (DPTs), which apply to untagged items in the mailbox – untagged items being items that don't have a retention tag applied directly or by inheritance from the parent folder. You can create three types of DPTs: an archive DPT, a delete DPT and a DPT for voicemail messages.

Retention Policy Tags (RPTs), which are retention tags with a delete action, created for default folders such as Inbox and Deleted Items. Not all default folders are supported. You can find a table showing the default folders supported for RPTs in Understanding Retention Tags and Retention Policies. Notably, Calendar, Tasks and Contacts folders aren't supported.[1]

Personal Tags, which are retention tags that users can apply to items and folders in Outlook 2010 and Outlook Web App. Personal tags can either be delete tags or archive tags. They're surfaced in Outlook 2010 and OWA as Retention policies and Archive policies.

To deploy retention tags, you add them to a retention policy and apply the policy to mailbox users.

In Exchange 2010 SP1, we added support for the Notes folder. In Exchange 2010 RTM, items in the Notes folder aren't processed. After you upgrade to SP1, if the user's retention policy doesn't have an RPT for the Notes folder, the DPT from the user's policy will apply to items in that folder. In existing deployments, your users may not be used to their notes being moved or deleted. To prevent the DPT from being applied to a default folder, you can create a disabled RPT for that folder (or disable any existing RPT for that folder). The Managed Folder Assistant, a mailbox assistant that processes mailbox items and applies retention policies, does not apply the retention action of a disabled tag. Since the item/folder still has a tag, it's not considered untagged and the DPT isn't applied to it.

Figure 1: Create a disabled Retention Policy Tag for the Notes default folder to prevent the Default Policy Tag from being applied to items in that folder

Note: You can create a disabled RPT for any supported default folder.
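For reference, here is a minimal Exchange Management Shell sketch of creating such a disabled RPT for the Notes folder. The tag name, age limit and retention action below are placeholders – the action is not applied while the tag is disabled, which is the whole point:

# Minimal sketch: create a disabled retention policy tag for the Notes
# default folder so the delete DPT no longer applies to it.
# Name, age limit and action are placeholders; the action is ignored
# while RetentionEnabled is $false.
New-RetentionPolicyTag "Notes - Disabled RPT" `
    -Type Notes `
    -RetentionEnabled $false `
    -RetentionAction DeleteAndAllowRecovery `
    -AgeLimitForRetention 365 `
    -Comment "Disabled RPT so the delete DPT does not apply to the Notes folder"

# Remember to add the new tag to the user's retention policy
# (for example, via Set-RetentionPolicy -RetentionPolicyTagLinks).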
This rules out using the disabled RPT approach to prevent items from being moved.

How do you prevent items in a default folder from being archived?

There's no admin-controlled way to prevent items in default folders from being archived [2], short of removing the archive DPT from a retention policy. However, removing the archive DPT would result in messages not moving to the archive automatically unless the user applies a personal tag to messages or folders.

The workaround is to have users apply the Personal never move to archive personal tag (displayed as Never under Archive Policy in Outlook/OWA) to a default folder. The tag is included in the Default Archive and Retention Policy created by Exchange Setup. You can also add this tag to any Retention Policies you create.

Figure 2: Users can apply the Never archive policy to a default folder to prevent items in that folder from being archived

[1] Support for Calendar and Tasks retention tags was added in Exchange 2010 SP2 RU4.
[2] You can apply a disabled move tag to a folder in a user's mailbox using EWS code/script. For details, see Using Exchange Web Services to Apply a Personal Tag to a Custom Folder.

Applying a disabled archive policy to the Notes default folder

You can't use Outlook 2010 or Outlook 2013 to apply an archive policy to the Notes default folder or to individual notes items. If your users want to prevent Notes items from being moved, they must apply a disabled move tag to the Notes folder using OWA.

Figure 3: Apply the Personal never move to archive policy to the Notes folder in Outlook Web App in Exchange 2013. The Exchange 2010 Outlook Web App UI differs slightly - it lists archive and retention policies separately. See a screenshot here.

Bharat Suneja

Updates
1/23/2013: In Exchange 2010 SP2 RU4, we added Calendar and Tasks retention tag support. You can prevent these from being moved or deleted by creating registry values. See Calendar and Tasks Retention Tag Support in Exchange 2010 SP2 RU4.
6/18/2013: Added screenshot - Applying disabled move tag to Notes folder in OWA and link to Using Exchange Web Services to Apply a Personal Tag to a Custom Folder.
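Since the Default Archive and Retention Policy created by Setup already contains the Personal never move to archive tag, the shell is mainly useful for checking what a policy exposes and for including the tag in policies you create yourself. A rough sketch; "Corp Retention Policy", "Corp 1 year delete" and the mailbox alias are placeholder names:

# List the tags linked to the default policy created by Exchange Setup
(Get-RetentionPolicy "Default Archive and Retention Policy").RetentionPolicyTagLinks

# Include the never-archive tag when creating your own policy. Note that
# -RetentionPolicyTagLinks takes the complete list of tags for the policy.
New-RetentionPolicy "Corp Retention Policy" -RetentionPolicyTagLinks "Corp 1 year delete","Personal never move to archive"

# Assign the policy to a mailbox
Set-Mailbox jane.doe -RetentionPolicy "Corp Retention Policy"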
Store Driver Fault Isolation Improvements in Exchange 2010 SP1

Background

The Exchange Store Driver is a core transport component which lives both on the Mailbox server role (as the mail submission service) and on the Hub Transport server role. It is responsible for:

- Retrieving messages from the mailbox server that have been submitted by end users and submitting them to the Hub Transport role for categorization and routing.
- Delivering messages to the appropriate mailbox server based on the location of the recipient's mailbox.
- Acting as an extensibility platform for both mail submission and delivery.

The Store Driver currently hosts a number of agents that extend the functionality of Exchange. Examples include the Inbox Rules, Conversations and meeting forward notification agents.

Exchange 2010 is currently being utilized in Live@EDU, as well as the upcoming Office 365. As you can probably imagine, the Exchange servers that run in those datacenters are loaded and pushed harder than almost any other Exchange server imaginable. Prior to SP1, several problems were encountered with mail delivery to the Exchange mailbox store. In particular, there was a need to make sure that a handful of recipients did not starve the rest of the mail delivery system. While many of you may not have noticed this problem, Microsoft has seen many of these types of cases over the years, often isolated to a single event like an inadvertent public folder replication storm. This was despite the message throttling that was already available. Transport roles have also had functionality to avoid resource starvation, known as Back Pressure, but this was not designed to protect the system from messages that were already in the local delivery queue.

Changes in SP1

In order to further protect both the Mailbox servers and Hub Transport servers from resource starvation, new thread limits were introduced in SP1:

Key: <add key="RecipientThreadLimit" value="1"/>
Description: The limit beyond which no more threads can be allocated to a recipient for delivery. Note: If this is increased, you should increase MaxMailboxDeliveryPerMdbConnections as well, so that slow or hung deliveries to a single recipient will not block delivery for the entire MDB.
Scenario: Ensures that a flood of messages to a single mailbox, or a performance problem associated with a single mailbox, has minimal impact on delivery to the rest of the mailboxes in the database.
Error in Connectivity Log: Throttled delivery for recipient <recipient> due to concurrency limit <limit>

Key: <add key="MaxMailboxDeliveryPerMdbConnections" value="2"/>
Description: The maximum number of concurrent connections to a single "healthy" mailbox database. Database health is determined by the Health Monitor API and recorded in the connectivity logs as a value of -1 or between 0 and 100, with 100 being healthy.
Scenario: Ensures that connections that hang to a single problematic database have minimal impact on delivery of other queued messages.
Error in Connectivity Log: Throttled Delivery due to server limit for <server FQDN> with threshold

Note: These keys are not present in the EdgeTransport.exe.config file by default.

Is it possible to have too much protection?

Unfortunately, there are two scenarios after applying SP1 where we are seeing customers with messages backing up in the queue. The temporary error message is:

432 4.3.2 STOREDRV.Deliver; recipient thread limit exceeded

As you can probably guess, the two scenarios are:

- Journaling
- Public Folders

In both cases, the deliveries are occurring to a single recipient (or a very small number of recipients). This is likely to occur during heavy mail flow.
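If you suspect you're hitting these limits, you can also spot the condition from the Exchange Management Shell before digging into the logs. A quick sketch; the server name, queue identity and message-count threshold are placeholders:

# Find mailbox delivery queues that are backing up on a Hub Transport server
Get-Queue -Server HUB01 | Where-Object {$_.MessageCount -gt 100} |
    Format-Table Identity,DeliveryType,Status,MessageCount,LastError -AutoSize

# Look at the messages in a suspect queue; the LastError on the queue (and
# typically on its messages) will show the 432 4.3.2 response quoted above
Get-Message -Queue "HUB01\12" | Format-List FromAddress,Status,Size,LastError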
The screen shot below was taken from a lab server while reproducing the issue:

Figure 1: Messages backed up in the mailbox delivery queue due to recipient thread limits being exceeded

You can see a history of 4.3.2 events in the connectivity logs on your Hub Transport servers (in \Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs\Connectivity\CONNECTLOGxxxxxxxx-x.LOG), like:

#Software: Microsoft Exchange Server
#Version: 14.0.0.0
#Log-type: Transport Connectivity Log
#Date: 2011-01-12T00:00:00.775Z
#Fields: date-time,session,source,Destination,direction,description
2011-01-12T00:00:00.775Z,08CD7F1200CBDBD0,MapiSubmission,5f24e416-c380-41b5-bfe0-37b6f1091f49,>,"Failed; HResult: 1140850693; DiagnosticInfo: Stage:LoadItem, SmtpResponse:432-4.3.2 STOREDRV; mailbox server is too busy"
2011-01-12T00:00:00.775Z,08CD7F1200CBDBD0,MapiSubmission,5f24e416-c380-41b5-bfe0-37b6f1091f49,-,RegularSubmissions: 0 ShadowSubmissions: 0 Bytes: 0 Recipients: 0 Failures: 1 ReachedLimit: False Idle: False
2011-01-12T00:00:05.792Z,08CD7F1200CBDBD1,MapiSubmission,5f24e416-c380-41b5-bfe0-37b6f1091f49,+,Win2k8R2Ex14.dom2k8r2ex14.lab

In some cases, simply leaving the servers alone should cause the queues to slowly drain. In other cases, it may be necessary to take further action because the problem has persisted for a while, or because it isn't part of a one-time event like a public folder replication storm.

Alright, I understand. Now how do I fix my situation?

Like every other throttling and performance related feature that has ever been in Exchange, the solution isn't exactly straightforward. For starters, we've seen some level of success simply incrementing both values up by one, as follows:

<add key="RecipientThreadLimit" value="2" />
<add key="MaxMailboxDeliveryPerMdbConnections" value="3" />

In fact, there has been enough success with this that we're considering changing the defaults in a future rollup or service pack. Of course, when we do, the new defaults will only apply to those who haven't already modified the values. Although it has not yet been released, KB 2491972 will discuss the change when the time comes.

So if +1 is better, then why not +2 or more? Well, the problem is that all Exchange servers will ultimately be bound by some hardware resource. You don't want to introduce thrashing or resource starvation due to a public folder or journaling server event that now impacts all users as well. In addition, there are other limits that control the maximum number of threads that can service these types of deliveries. In short, we are not recommending going above 4 for MaxMailboxDeliveryPerMdbConnections, and RecipientThreadLimit should always be at least one fewer.

In an extreme case, if you are in a very busy environment with dedicated journaling mailbox servers, it may be worth considering having dedicated Hub Transport servers to go along with them. Of course, they can be virtualized or multi-role boxes, but in order to be isolated, the whole bunch will need to be in a dedicated site. Then you would be able to increase these limits without risking delivery to other mailboxes. Of course, this is an extreme example. For the rest of us, it may be as simple as trying the +1 approach while carefully monitoring the Hub and Mailbox servers.

Scott Landry, Steve Schiemann, Frank Brown and Jason Nelson
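When you do edit the values, keep in mind that they belong in the <appSettings> section of EdgeTransport.exe.config on each Hub Transport server, and that transport only reads the file at service startup. A minimal sketch, assuming the default install path:

# Open the transport configuration file (default install location) and add or
# change the keys under <appSettings>
notepad "C:\Program Files\Microsoft\Exchange Server\V14\Bin\EdgeTransport.exe.config"

# Restart the transport service so the new values take effect; this briefly
# pauses mail flow on the server, so plan the change accordingly
Restart-Service MSExchangeTransport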