Configure certificate-based authentication for Exchange ActiveSync
Update: Exchange Server 2013 Cumulative Update 5 and later supports certificate-based authentication with ActiveSync.

Note: For official documentation on this subject, please go to this page on TechNet.

In previous posts, we have discussed certificate based authentication (CBA) for Outlook Web App, and Greg Taylor has covered publishing Outlook Web App and Exchange ActiveSync (EAS) with certificate based authentication using Forefront TMG in this whitepaper. Certificate based authentication for OWA only can also be accomplished using Forefront Unified Access Gateway. In this post, we will discuss how to configure CBA for EAS for Exchange 2010 in deployments without TMG or UAG.

To recap some of the common questions administrators and IT decision-makers have regarding CBA:

What is certificate based authentication? CBA uses a user certificate to authenticate the user/client (in this case, to access EAS). The certificate is used in place of the user entering credentials into their device.

What certificate based authentication is not: By itself, CBA is not two-factor authentication. Two-factor authentication is authentication based on something you have plus something you know. CBA is only based on something you have. However, when combined with an Exchange ActiveSync policy that requires a device PIN, it could be considered two-factor authentication.

Why would I want certificate based authentication? By deploying certificate based authentication, administrators gain more control over who can use EAS. If users are required to obtain a certificate for EAS access, and the administrator controls certificate issuance, access control is assured. Another advantage: because we're not using the password for authentication, password changes don't impact device access. There will be no interruption in service for EAS users when they change their password.

Things to remember: There will be added administration overhead. You will either need to stand up your own internal Public Key Infrastructure (PKI) using Active Directory Certificate Services (AD CS, formerly Windows Server Certificate Services) or a 3rd-party PKI solution, or you will have to purchase certificates for your EAS users from a public certification authority (CA). This will not be a one-time added overhead. Certificates expire, and when a user's certificate expires, they will need a new one, requiring either time invested in getting the user a new certificate, or budget invested in purchasing one.

Prerequisites: You need access to a CA for client certificates. This can be a public CA solution, individual certificates from a vendor, or an Active Directory Certificate Services solution. Regardless, the following requirements must be met:

- The user certificate must be issued for client authentication. The default User template from an AD CS server will work in this scenario.
- The User Principal Name (UPN) for each user account must match the Subject Name field in the user's certificate.
- All servers must trust the entire CA trust chain. This chain includes the root CA certificate and any intermediate CA certificates. These certificates should be installed on all servers that may require them, including (but not limited to) ISA/TMG/UAG server(s) and the Client Access Server (CAS). The root CA certificate must be in the Trusted Root Certification Authorities store, and any intermediate CA certificates in the Intermediate Certification Authorities store, on all of these systems.
- The root CA certificate and any intermediate CA certificates must also be installed on the EAS device.
- The user's certificate must be associated with the user's account in Active Directory.

Client Access Server Configuration

To configure the Client Access server to enforce CBA (a scripted sketch of the certificate and IIS pieces appears at the end of this section):

1. Verify that Certificate Mapping Authentication is installed on the server.
   - Right-click Computer in the Start menu and choose Manage.
   - Expand Roles and click Web Server (IIS).
   - Scroll down to the Role Services section. Under the Security section you should see Client Certificate Mapping Authentication installed.
   - If you don't see Client Certificate Mapping Authentication installed, click Add Role Services > (scroll) Security, select Client Certificate Mapping Authentication, and then click Install. Reboot your server.

2. Enable Active Directory Client Certificate Authentication for the server root in IIS.
   - Start IIS Manager.
   - Click the server name.
   - Double-click Authentication in the middle pane.
   - Right-click Active Directory Client Certificate Authentication and select Enable.
   - When this is complete, restart the IIS Admin service from the Services console (or use Restart-Service IISAdmin from the Shell). Important: iisreset does not pick up the changes properly. You must restart this service.

3. Require client certificates in the Exchange Management Console.
   - In the Exchange Management Console, expand Server Configuration and then select the Client Access server you want to configure.
   - On the Exchange ActiveSync tab, right-click the Microsoft-Server-ActiveSync directory and choose Properties.
   - On the Authentication tab: clear the Basic authentication (password is sent in clear text) checkbox and select Require client certificates.

4. Modify the ActiveSync configuration to allow Client Certificate Mapping Authentication.
   - In IIS Manager, open the Configuration Editor under the Microsoft-Server-ActiveSync directory.
   - Browse to the clientCertificateMappingAuthentication option and set the value to True.

Once this is configured, all that's left to do is client configuration.

Client Configuration

You'll need to get the user's certificate to the device. This will be accomplished in different ways for different devices. Some possibilities include:

- Make the Trusted Root Certification Authority (root CA) certificate available on a web site or other means of distribution.
- Make the user's certificate, including the private key, available on a web site or other means of distribution. Caution: Use appropriate security measures to ensure that only the user who owns the certificate is able to access it from the device.
- If the device supports external storage, you can place the certificate on a memory card or other external storage device and install it from the card.
- Some devices have a certificate manager available for the host operating system. You can plug the device into your computer, run the certificate manager, and install the certificate.

Once the certificate is on the device, the user can configure the Exchange ActiveSync client (usually a mail app) on the device. When configuring EAS for the first time, users will be required to enter their credentials. When the device communicates with the Client Access Server for the first time, users will be prompted to select their certificate. After this is configured, if users check the account properties, they'll see a message similar to the following: "Microsoft Exchange uses certificates to authenticate users when they log on. (A user name and password is not required.)"
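The steps above can all be done in the GUI as described. If you prefer to script the certificate and IIS pieces, here is a minimal sketch. It assumes the WebAdministration module is available on the Client Access server; the certificate file paths, CA file names, and web site name are placeholders, not values from this article.

```powershell
# Sketch: trust the CA chain and enable client certificate mapping on the
# EAS virtual directory. Run in an elevated PowerShell session on the CAS.

# Install the root and intermediate CA certificates into the machine stores
# (file paths and names are placeholders for your own CA chain).
certutil -addstore Root "C:\Certs\ContosoRootCA.cer"
certutil -addstore CA   "C:\Certs\ContosoIssuingCA.cer"

# Enable Client Certificate Mapping Authentication on the
# Microsoft-Server-ActiveSync virtual directory (equivalent to step 4 above).
Import-Module WebAdministration
Set-WebConfigurationProperty -PSPath "IIS:\" `
    -Location "Default Web Site/Microsoft-Server-ActiveSync" `
    -Filter "system.webServer/security/authentication/clientCertificateMappingAuthentication" `
    -Name enabled -Value $true

# Restart the IIS Admin service so the change is picked up (iisreset is not enough).
Restart-Service IISAdmin
```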
Exchange Server 2003 Mailbox Coexistence

This is an added step that requires some simple changes, and must be performed whether TMG is used to access Exchange 2010 or not. Use the following steps to enable this for access to Exchange 2003 mailboxes.

Apply the hotfix from KB 937031 (or one that has a later version of EXADMIN.DLL) to the Exchange 2003 servers where the mailboxes are homed: "Event ID 1036 is logged on an Exchange 2007 server that is running the CAS role when mobile devices connect to the Exchange 2007 server to access mailboxes on an Exchange 2003 back-end server." The article instructs you to run a command to configure IIS to support both Kerberos and NTLM. You must run the command from the command prompt using CSCRIPT, as shown below:

1. Click Start > Run, type cmd, and press Enter.
2. Type cd \Windows\System32\AdminScripts and press Enter.
3. Type the following command and press Enter: cscript adsutil.vbs set w3svc/WebSite/root/NTAuthenticationProviders "Negotiate,NTLM"

On the Exchange 2003 mailbox server, launch Exchange System Manager and follow these steps:

1. Expand the Administrative Group that contains the Exchange 2003 server.
2. Expand Servers > <server name> > Protocols > HTTP > Exchange Virtual Server.
3. Right-click the Microsoft-Server-ActiveSync virtual directory and select Properties.
4. On the Access tab, click Authentication and ensure that only the Integrated Windows Authentication checkbox is checked.

Use the following steps to configure the Exchange 2010 to Exchange 2003 communication for Kerberos Constrained Delegation (KCD):

1. Open Active Directory Users and Computers on the CAS.
2. Right-click the CAS computer account > Properties.
3. On the Delegation tab, select Trust this computer for delegation to specified services only.
4. Select the Use any authentication protocol option and then click Add.
5. In the Add Services window, click Users or Computers, enter the name of the Exchange 2003 server that the CAS is delegating to, and click OK.
6. A list of Service Principal Names (SPNs) will be displayed for your server. Select the appropriate HTTP SPN for the Exchange 2003 server. You'll need to add the correct SPN, HTTP/serverFQDN. Note: You may need to add the SPN as described in the Setspn Overview.

Thanks to: DJ Ball for his previous work in documenting certificate based authentication for Outlook Web App (see How to Configure Certificate Based Authentication for OWA - Part I and How to Configure Certificate Based Authentication for OWA - Part II); Mattias Lundgren, for starting the documentation process on certificate based authentication for EAS; DJ Ball and Will Duff for reviewing this document; Henning Peterson and Craig Robicheaux for reviewing this document; Greg Taylor for technical review.

Jeff Miller

Exchange On-Premises Best Practices for Migrations from 2010 to 2016
We have had some requests for guidance on moving from on-premises Exchange 2010 to 2016. If you have a hybrid configuration, mailboxes, or public folders on Exchange 2010, you should prepare to install Exchange 2016 before October 13, 2020.
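Not part of the original guidance, but if you want a quick inventory of which servers in the organization are still running Exchange 2010 before planning the move, something like this from the Exchange Management Shell works:

```powershell
# List all Exchange servers with their roles and build numbers so you can see
# which ones are still on Exchange 2010 (version 14.x) and need to be migrated.
Get-ExchangeServer |
    Sort-Object AdminDisplayVersion |
    Format-Table Name, ServerRole, Edition, AdminDisplayVersion -AutoSize
```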
New Exchange Online migration options

We are in the process of rolling out a new migration experience that will greatly simplify your journey to Office 365 and Exchange Online. This new experience will help any customer running at least one Exchange 2010, 2013 and/or 2016 server on-premises to migrate to the cloud seamlessly. When you initiate the migration, we evaluate what you have already configured in Exchange Online and we walk you through the Hybrid Configuration Wizard to evaluate the on-premises environment. Once all the information on your current state is collected, we ask a couple of questions about your desired state (things like how fast you want to move to Exchange Online and whether you require advanced features). The hybrid wizard then walks you through the configuration needed to migrate your users to Exchange Online. Based on your current configuration and the options selected while running the hybrid wizard, you will be taken down one of the following paths:

Full Hybrid: This is a common configuration for customers that are larger in size and will take some time to migrate, or customers that will not be able to move all their mailboxes to Exchange Online in the short to medium term. This is the most complex option to configure, but it will give you enhanced features like cross-premises free/busy and enhanced mail flow options. For more on Full Hybrid you can go here.

Minimal Hybrid: This is a recently introduced option that was added to the Hybrid Configuration Wizard in June. It allows you to configure your environment to support hybrid migrations and recipient administration without the additional overhead of configuring free/busy and other enhanced features of full hybrid. Often this is used by customers that want to move all their mailboxes to Exchange Online over the course of a couple of months or less, but want to keep directory synchronization in place. For more information on Minimal Hybrid please go here.

Express Migration: The newest option we have added is the Express Migration. This is the path in the Hybrid Configuration Wizard that will benefit smaller customers or a customer that truly wants to move to Exchange Online over the course of a couple of weeks or less. If you need to keep directory synchronization in place, then this is not the option for you. This option will configure your users and walk you through the new migration experience to get the mailboxes to Exchange Online with minimal disruption for your users.

More About Express Migration

In the past, an administrator of a small to medium business had to choose either the complex configuration route of hybrid or the complex user experience of a cutover or staged migration. Now you can get a greatly simplified Express Migration experience. With the Express Migration option, you will get a one-time directory synchronization of your users along with the Minimal Hybrid configuration to allow you to perform the migrations. After that initial synchronization of user accounts, directory synchronization will be automatically disabled in both Office 365 and on-premises. This will give small and medium customers that would have previously performed a cutover migration the ability to get the benefits of the hybrid migration without the overhead.
The following are the benefits of performing an Express Migration over a more traditional cutover migration:

- Usernames and passwords will sync from on-premises.
- Users will not have to recreate Outlook profiles.
- Mail flow will continue to work between users before, during, and after the migration.
- There is essentially no downtime for users during the migration.

How to initiate the migration

All the migration approaches discussed in this article (Full, Minimal, and Express) can now be initiated from one interface. We will guide you to the proper option as we go. One of the key components in this new experience is the migration pane. This is a new pane we have created in the Office 365 Admin Portal that will match the modern look and feel of the rest of the portal. However, it is not just the look and feel that is revamped; we also have a lot of intelligence built in. For instance, when you enter the migration pane we will detect whether you have executed the Hybrid Configuration Wizard, whether you have already synchronized your users, and whether there is a migration endpoint already created. Depending on what is already in place, we can take you toward the proper migration approach.

To get to the new migration experience, expand the "Users" section in the portal (http://portal.office.com) and then select the "Data Migration" option. Once there, select the Exchange email source to initiate the experience. This is a page where we list various sources from which you can migrate; in this case, you would select the Exchange option. Once the source is selected we perform a quick check of the tenant to see if you have executed the Hybrid Configuration Wizard. If you have not yet executed it, we know you have not prepared your on-premises environment for migration, so we will take you through downloading and running the hybrid wizard. In the hybrid wizard:

1. You will see a welcome screen; just select Next on that.
2. You will then see the server detection screen; again just select Next, since the correct server should be detected. By default, we will try to connect to the Exchange server running the latest version.
3. Provide your credentials for both Exchange on-premises and Exchange Online.
4. Select the appropriate migration option. For this example, we are demonstrating Express Migration so we will select the Minimal Hybrid option.
5. Select Update, which will perform all the configuration needed to allow you to start moving mailboxes at a later step.

Next up: Provisioning users

If you have not synchronized your users already, at this point you will see an option to perform a one-time sync of your users or to install AAD Connect separately. As mentioned, if you plan on moving all of your mailboxes to Exchange Online and do not need to keep directory synchronization around, select the one-time sync option. Selecting this option is what we consider an "Express Migration". If you need directory synchronization for any reason, then you need to install and set up AAD Connect separately. Note: if you did select the one-time sync and later decided that you do need directory synchronization, you can set up AAD Connect to perform directory synchronization. If you did not select Minimal Hybrid as an option in the wizard, then you will not be given the one-time directory synchronization option. The reason you would not get it if you opted for Full Hybrid is that that deployment scenario requires the advanced coexistence features and directory synchronization.
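As a side note (not part of the wizard itself), you can verify the tenant's directory synchronization state from the MSOnline (Azure AD v1) PowerShell module. This is only an illustrative check; the Express path disables directory synchronization for you after the one-time sync completes.

```powershell
# Check whether directory synchronization is currently enabled for the tenant.
# Requires the MSOnline module and a tenant administrator account.
Connect-MsolService
(Get-MsolCompanyInformation).DirectorySynchronizationEnabled

# Disabling dirsync manually would look like the line below; with the Express
# (one-time sync) path the wizard handles this for you, so treat it as reference only.
# Set-MsolDirSyncEnabled -EnableDirSync $false
```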
Once the option for "Synchronize my users and passwords one time" is selected, the hybrid wizard displays a progress bar. The wizard is waiting for AAD Connect to be configured and for the initial set of users to synchronize. To complete the hybrid configuration setup, you will need to configure AAD Connect. In almost all cases the default options in AAD Connect are the best options. Once completed, you will see the ending page for the hybrid application that will allow you to give feedback. Once closed, you will be taken to the user list page to start moving mailboxes.

On the user list page you will have the option to select the users you want to migrate. Under the hood this uses MRS to migrate the users, just like a hybrid migration. You can use this new pane to migrate mailboxes regardless of the hybrid deployment option chosen. The experience is as simple as selecting the users and clicking Start migration. At that time, we will perform a lookup to see if the prerequisites are in place. If anything is missing, we will walk you through taking care of the missing prerequisite.

Limitations

There are some limitations with the new migration pane that need to be called out. We are working to overcome these limitations in the future:

- Only one migration endpoint is supported: If there is more than one endpoint, we will choose the one created by the Hybrid wizard or by the new migration experience.
- Only one batch can be started at a time: We do not yet support multiple batches with this migration pane. This means that you need to wait for the previous migration to finish before you can start another migration. We know multiple batch support is important for medium and larger customers, and we have this as a top priority.
- We do require you to license users separately: All users being migrated through this experience must be licensed before the migration begins (except for the shared mailbox object type). This is not an automatic process, so you need to license users before migrating.

Conclusion

These updates will make the Exchange Online onboarding experience more seamless for many Exchange customers. We have created, and are continuing to improve, the migration pane to meet the needs of our customers. If you have any feedback while running through the experience, please use the feedback buttons that are available throughout the experience.

Office 365 On-boarding Team

Exchange Server 2013 Service Pack 1 Coming in Early 2014
Today on the Office blog we announced that Service Pack 1 for the 2013 set of products, including Office, SharePoint and Exchange, will be released early next year. We know our Exchange customers have been looking for confirmation of the release but also want an early look at what's coming with Exchange Server 2013 Service Pack 1 (SP1). So let's have a first look at a few things you can expect to see in SP1. But wait… we haven't released CU3 – well, news about CU3 is imminent; stay tuned for more information about CU3 coming very soon. In this post we are highlighting a few of the notable improvements to be included in SP1. This isn't an all-inclusive list, so stay tuned for additional details as we approach release.

Windows Server 2012 R2 Support

First, answering one of the most common questions since the release of Windows Server 2012 R2: Exchange 2013 SP1 will add Windows Server 2012 R2 as a supported operating system for Exchange Server 2013. Let your planning begin.

S/MIME support for OWA

Support for S/MIME in OWA will be brought back in SP1. With SP1, customers will have S/MIME support across Outlook, Exchange ActiveSync clients, and OWA.

Edge Transport Server Role

The Edge Transport server role for Exchange Server 2013 will be available with SP1.

Fixes and Improvements

Of course, SP1 will include fixes and improvements in areas you've helped us identify. SP1 is the first service pack issued in the new Exchange Server cumulative update release model – thus SP1 is essentially CU4. The installation of SP1 will follow the same process as the prior Exchange 2013 CU releases. SP1 will include all fixes included in previously released cumulative updates for Exchange 2013. SP1 will require customers to update their Active Directory schema – customers should assume this requirement for all Exchange Server 2013 updates. Plan for this required update so you can quickly take advantage of SP1. Active Directory schema updates for Exchange are additive and always backwards compatible with previous releases and versions.

On behalf of the Exchange Product Group, thanks again for your continued support. As always, let us know what you think!

Brian Shiers
Exchange Technical Product Manager

Life in a Post TMG World – Is It As Scary As You Think?
Let's start this post about Exchange with a common question: Now that Microsoft has stopped selling TMG, should I rip it out and find something else to publish Exchange with?

I have occasionally tried to answer this question with an analogy. Let's try it. My car (let's call it Threat Management Gateway, or TMG for short) isn't actively developed or sold any more (like TMG). However, it (TMG) works fine right now, it does what I need (publishes Exchange securely), and I can get parts for it and have it serviced as needed (extended support for TMG ends in 2020), and so I'm keeping it. When it eventually either doesn't meet my requirements (I want to publish something it can't do) or runs out of life (2020, but it could be later if I am OK accepting the risk of no support), then I'll replace it.

Now, it might seem odd to offer up a car analogy to explain why Microsoft no longer selling TMG is not a reason for Exchange customers to panic, but I hope you'll agree it works, and it leads you to conclude that when something stops being sold, like your car, it doesn't immediately mean you replace it; instead you think about the situation and decide what to do next. You might well decide to go ahead and replace TMG simply based on our decision to stop selling or updating it. That's fine, but just make sure you are thinking the decision through. Of course, you might also decide not to buy another car. Your needs have changed. Think about that.

Here are some interesting Exchange-related facts to help further cement the idea I'm eventually going to get to:

1. We do not require traffic to be authenticated prior to hitting services in front of Exchange Online.
2. We do not do any form of pre-authentication of services in front of our corporate, on-premises messaging deployments either.
3. We have spent an awfully large amount of time as a company working on securing our code, writing secure code, testing our code for security, and understanding the threats that exist to our code. This is why we feel confident enough to do #1 and #2.
4. We have come to learn that adding layers of security often adds little additional security, but certainly lots of complexity.
5. We have invested in getting our policies right and monitoring our systems.

This basically says we didn't buy another car when ours didn't meet our needs any more. We don't use TMG to protect ourselves any more. Why did we decide that? To explain that, you have to cast your mind back to the days of Exchange and Windows 2000. The first thing to admit is that our code was less 'optimal' (that's a polite way of putting it), and there were security issues caused by anonymous access. So, how did we (Exchange) tell you to guard against them? By using something called ISA (Internet Security and Acceleration – which is an odd name for what it was, a firewall). ISA, amongst other things, did pre-authentication of connections. It forced users to authenticate to it, so it could then allow only authenticated users access to Exchange. It essentially stopped anonymous users getting to Windows and Exchange. Which was good for Windows and Exchange, because there were all kinds of things that they could do if they got there anonymously. However, once authenticated users got access, they too could still do those bad things if they chose to. And so of course could anyone not coming through ISA, such as internal users. So why would you use ISA? It was so that you would know who these external users were, wouldn't you? But do you really think that's true?
Do you think most customers a) noticed something bad was going on and b) trawled logs to find out who did it? No, they didn't. So it was a bit like an insurance policy. You bought it, you knew you had it, you didn't really check to see if it covered what you were doing until you needed it, and by then it was too late: you found out your policy didn't cover that scenario and you were in the deep doo doo. Insurance alone is not enough. If you put any security device in front of anything, it doesn't mean you can or should just walk away and call it secure.

So at around the same time as we were telling customers to use ISA, back in the 2000 days, the whole millennium bug thing was over, and the proliferation of the PC and the Internet was continuing to expand. This is a very nice write up on the Microsoft view of the world. Those industry changes ultimately resulted in something we called Trustworthy Computing, which was all about changing the way we develop software – "The data our software and services store on behalf of our customers should be protected from harm and used or modified only in appropriate ways. Security models should be easy for developers to understand and build into their applications." There was also the Secure Windows Initiative. And the Security Development Lifecycle. And many other three letter acronyms I'm sure, because whatever it was you did, it needed a good TLA.

We made a lot of progress over the ten years since then. We delivered on the goal that the security of the application can be better managed inside the OS and the application rather than at the network layer. But of course most people still seem to think of security as being mainly at the network layer, so think for a moment about what your hardware/software/appliance based firewall does today. It allows connections from a source, on some configurable protocol/port, to a configured destination protocol/port. If you have a load balancer, and you configure it to allow inbound connections to an IP on its external interface, to TCP 443 specifically, telling it to ignore everything else, and it takes those packets and forwards them to your Exchange servers, is that not the same thing as a firewall? Your load balancer is a packet filtering firewall. Don't tell your load balancing vendor that, they might want to charge you extra for it, but it is. And when you couple that packet level filtering firewall/load balancer with software behind it that has been hardened for 10 years against attacks, you have a pretty darn secure setup.

And that is the point. If you hang one leg of your load balancer on the Internet, and one leg on your LAN, and you operate a secure and well managed Windows/Exchange Server, you have a more secure environment than you think. Adding pre-authentication and layers of networking complexity in front of that buys you very little extra, if anything.

So let's apply this directly to Exchange, and try to offer you some advice from all of this. What should YOU do? The first thing to realize is that you now have a CHOICE. And the real goal of this post is to help you make an INFORMED choice. If you understand the risks, and know what you can and cannot do to mitigate them, you can make better decisions. Do I think everyone should throw out that TMG box they have today and go firewall commando? No, not at all. I think they should evaluate what it does for them and whether they need it going forward.
If they do that, and decide they still want pre-auth, then find something that can do it when the time to replace TMG comes. You could consider it a sliding scale of choice, something like this perhaps (the original post included a diagram illustrating the spectrum). There are some options and choices:

Just use a load balancer – as discussed previously, a load balancer only allowing in specified traffic is a packet filtering firewall. You can't just put it there and leave it though; you need to make sure you keep it up to date, keep your servers up to date, and possibly employ some form of IDS solution to tell you if there's a problem. This is what Office 365 does.

TMG/UAG – at the other end of the scale are the old school 'application level' firewall products. Microsoft has stopped selling TMG, but as I said earlier, that doesn't mean you can't use it if you already have it, and it doesn't stop you using it if you buy an appliance with it embedded.

In the middle of these two extremes (though ARR is further to the left of the spectrum, as shown in the diagram) are some other options. Some load balancing vendors offer pre-authentication modules, if you absolutely must have pre-auth (but again, really… you should question the reason); some use LDAP, some require domain joining the appliance and using Kerberos Constrained Delegation. Microsoft has two options here too.

The first (and favored by pirates the world over) is Application Request Routing, or ARR! for short. ARR! (the ! is my own addition; marketing didn't add that to the acronym, but if marketing were run by pirates, they would have) "is a proxy based routing module that forwards HTTP requests to application servers based on HTTP headers and server variables, and load balance algorithms" – read about it here, and in the series of blog posts we'll be posting here in the not too distant future. It is a reverse proxy. It does not do pre-authentication, but it does let you put a non-domain joined machine in front of Exchange to terminate the SSL; if your 1990's style security policy absolutely requires it, ARR is an option.

The second is WAP. Another TLA. Recently announced at TechEd 2013 in New Orleans is the upcoming Windows Server 2012 R2 feature – Web Application Proxy. It is a Windows Server 2012 R2 feature focused on browser and device based access, with strong ADFS support, and WAP is the direction the Windows team are investing in these days. It can currently offer pre-authentication for OWA access, but not for Outlook Anywhere or ActiveSync. See a video of the TechEd session here (the US session) and here (the Europe session).

Of course all this does raise some tough questions. So let's try to answer a few of those.
Q: My security guy says he can’t keep up with the patches and so he wants to make the server ‘secure’ and then leave it alone. Is that a good idea? A: No. It’s not (I hope) what he does with his routers and hardware based firewalls is it? Software is a point in time piece of code. Security software guards against exploits and attacks it knows of today. What about tomorrow? None of us are saying Windows, or any other vendor’s solution is secure forever, which is why a well-managed and secure network keeps machines monitored and patched. If he does not patch other devices in the chain, overall security is compromised. Patches are the reality of life today, and they are the way we keep up with the bad guys. Q: My security guy says his hardware based firewall appliance is much more secure than any Windows box. A: Sure. Right up to the point at which that device has a vulnerability exposed. Any security device is only as secure as the code that was written to counter the threats known at that time. After that, then it’s all the same, they can all be exploited. Q: My security guy says I can’t have traffic going all the way through his 2 layers of DMZ and multitude of devices, because it is policy. It is more secure if it gets terminated and inspected at every level. A: Policy. I love it when I hear that. Who made the policy? And when? Was it a few years back? Have the business requirements changed since then? Have the risks they saw back then changed any? Sure, they have, but rarely does the policy get updated. It’s very hard to change the entire architecture for Exchange, but I think it’s fair to question the policy. If they must have multiple layers, for whatever perceived benefit that gives (ask them what it really does, and how they know when a layer has been breached), there are ways to do that, but one could argue that more layers doesn’t necessarily make it better, it just makes it harder. Harder to monitor, and to manage. Q: My security guy says if I don’t allow access from outside except through a VPN, we are more secure. A: But every client who connects via a VPN adds one more gateway/endpoint to the network don’t they? And they have access to everything on the network rather than just to a single port/protocol. How is that necessarily more secure? Plus, how many users like VPN’s? Does making it harder to connect and get email, so people can do their job, make them more productive? No, it usually means they might do less work as they cannot bothered to input a little code, just so they can check email. Q: My security guy says if we allow users to authenticate from the Internet to Exchange then we will be exposed to an account lockout Denial of Service (DoS). A: Yes, he’s right. Well, he’s right only because account lockout policies are being used, something we’ve been advising against for years, as they invite account lockout DoS’s. These days, users typically have their SMTP address set to equal their User Principal Name (UPN) so they can log on with (what they think is) their email address. If you know someone’s email address, you know their account logon name. Is that a problem? Well, only if you use account lockout policies rather than using strong password/phrases and monitoring. That’s what we have been telling people for years. But many security people feel that account lockouts are their first line of defense against dictionary attacks trying to steal passwords. 
In fact, you could also argue that a bad guy trying out passwords and getting locked out now knows the account he's trying is valid…

Note the common theme in these questions is obviously "the security guy said…". And it's not that I have it in for security guys generally speaking, but they are the people who ask these questions, and in my experience some of them think their job is to secure access by preventing access. If you can't get to it, it must be safe, right? Wrong. Their job is to secure the business requirements. Or put another way, to allow their business to do their work, securely. After all, most businesses are not in the business of security. They make pencils. Or cupcakes. Or do something else. And the job of the security folks working at those companies is to help them make pencils, or cupcakes, securely, not to stop them from doing those things.

So there you go, you have choices. What should you choose? I'm fine with you choosing any of them, but only if you choose the one that meets your needs, based on your comfort with risk, based on your operational level of skill, and based on your budget.

Greg Taylor
Principal Program Manager Lead
Exchange Customer Adoption Team

Robert's Rules of Exchange: Multi-Role Servers
Overview

Robert's Rules of Exchange is a series of blog posts in which we take a fictitious company, describe their existing Exchange implementation, and then walk you through the design, installation, and configuration of their Exchange 2010 environment. See Robert's Rules of Exchange: Table of Blogs for all posts in this series.

A great big hello to all my faithful readers out there! I have to apologize for not posting in a while. Since the beginning of the holidays, I have been exceedingly busy, but I really want to get back to the Robert's Rules posts, and I hope to be more involved in this going forward. Also, please keep the great comments and questions coming! I read and try to answer every single one of them. If you are going to TechEd North America 2011 in Atlanta, I'll be there presenting, doing some stuff with the MCM Exchange team and generally making a nuisance of myself, so come look me up and introduce yourself!

In this blog post, I want to talk a little about multi-role servers. This is something that the Exchange team and Microsoft Services present as the first notional solution ("notional solution" meaning the first idea of a solution, the first "rough draft" of what we propose deploying) to almost every customer we work with, and something that causes a lot of confusion since it is certainly a different set of guidance than our previous guidance. So, I want to talk about what we mean by "multi-role servers", why we think they are a great solution, sizing multi-role deployments, when they might not fit your deployment, and what the real impact is in moving away from multi-role in your solution.

What do we mean by "Multi-Role Servers"?

When we talk about multi-role servers in Exchange 2010, we are talking about a single server with all three of the core Exchange 2010 roles installed – Client Access, Hub Transport and Mailbox. While having any given two of these roles installed is technically a multi-role deployment, and even in Exchange Server 2007 we saw a lot of customers collocating the Client Access and Hub Transport roles, when we talk about multi-role, we are really talking about collocation of all three core roles on the same server. In Exchange 2007, we did not support the Client Access or Hub Transport roles on clustered servers, so there was no way to have a high availability deployment (meaning Cluster Continuous Replication clusters) with multi-role servers. In Exchange 2010, we introduced Database Availability Groups (DAGs), which don't have that limitation, and subsequently we have changed the first line recommendation to utilize multi-role servers where possible. The rest of this post discusses why we believe that in almost every single scenario, this is the best solution for our customers.

Why "Multi-Role" for Exchange 2010?

One of the things I really tried to hammer home in the Storage Planning post was the idea of simplicity. Every time I sit down with a customer to discuss how they should deploy Exchange 2010, I start with the most simple solution that I can possibly think of. Remember that we have already argued that simplicity typically brings many "Good Things™" into a solution. These "Good Things™" include (but are not limited to) a lower capital expenditure (CapEx), a lower operational expenditure (OpEx), and a higher chance of success in both deployment and meeting operational requirements such as high availability and site resilience. Complexity, on the other hand, introduces risk. Complexity when it is not needed is a "Bad Thing™".
Of course, when a complexity is brought on because of a requirement, it is not a "Bad Thing™". It is only when we don't really need to introduce that complexity that I have a problem with it. Based on the last blog post (the Storage post), we know that Robert's Rules is going with the simple storage infrastructure of direct attached storage with no RAID – what Microsoft calls JBOD. If we combine that with multi-role servers, where every server we deploy in the environment is exactly the same, we significantly reduce the complexity of the system. Every server has the same amount of RAM, the same number of disks, the same network card, the same video card, the same drivers, the same firmware/BIOS – everything is the same. You have fewer servers in your test lab to track, you have fewer drivers and firmware versions to test, and you have an easier time deciding what version of what software or firmware to deploy to what servers. On top of that, every server has exactly the same settings, including the same OS settings and the same Exchange settings, as well as any other agents or virus scanners or whatever on those servers. Everything is exactly the same at the server level. This significantly reduces your OpEx because you have a single platform to both deploy and support – simplicity of management means that your people need to do less work to support those servers, and that means it costs you less money!

Separate Role Servers – An Example

Now, let's think about the number of servers we need in an environment. I'm going to play around with the Exchange 2010 Mailbox Role Requirements Calculator a bit here, so I've downloaded the latest version (14.4 as of this writing). I will start with a solution with separate roles across the board – Mailbox, Client Access and Hub Transport all separated. This is totally supported, and is what many of my customers believe is the Microsoft recommended approach. After we size that and figure out how many servers we're talking about, we will look at the multi-role equivalent.

Looking on a popular server vendor web site at their small and medium business server page, I found a server that happens to have an Intel X5677 processor, so I'll use that as my base system – an 8-core server. Using the Exchange Processor Query Tool, I find that servers using this processor at 8 cores per server have an average SPECint2006 Rate Value of 297, so I'll use that in the calculator as my processor numbers. Note that by default, the servers in the Role Calculator are not marked as multi-role. Opening the Role Requirements Calculator fresh from download with no changes, I'll put those values in as my server configuration – 8 cores and a SPECint2006 rate value of 297. Making only that change, we can then look at the Role Requirements page. We have 6 servers in a DAG, 4 copies of the data, and 24,000 users; the server processors will be 36% utilized, and the servers will require 48 GB of RAM. Not bad, all in all. EXCEPT… that is really quite underutilized as far as processor is concerned. Open the calculator yourself, and hover over the "Mailbox Role CPU Utilization" field under the "Server Configuration" section of the "Role Requirements" tab. There is a comment to help you understand the utilization numbers. That comment says that for Mailbox role machines in a DAG with only the Mailbox role configured, we should not go over 80% utilization. But we're at 36% utilization. That's a lot of wasted processor!
We just spent the money on an 8-core system, and we aren't using it. So, I'm going to pull 4 cores out of each server. According to the Processor Query Tool, a 4-core system with that processor will have a SPECint2006 rate value of 153. Let's see what that does by putting it into the Mailbox Role Calculator. That moves us to 69% processor utilization, which is much better. I would feel much better recommending that to one of my customers. This change didn't affect our memory requirements at all.

The next thing we'll look at is the number of cores we need for the other roles. At the top of the "Role Requirements" tab, we can see that this solution will require a minimum of 2 cores of Hub Transport and 8 cores of Client Access. So, as good engineers we propose our customers have one 4-core server for Hub Transport and two 4-core servers for Client Access, right? Absolutely not! We designed this solution for 2 server failures (6 servers in the DAG with 4 copies can sustain 3 server failures, which is a bit excessive, but sustaining 2 server failures is quite common, so for our example we'll stick with that). So, for CAS and HT both, we need 2 additional servers for server failure scenarios. If I lose 2 CAS servers, I still need 8 cores online on my remaining CAS servers to support a fully utilized environment – that means I need a minimum of 4 CAS servers with 4 cores each. If I lose 2 HT servers, I need 1 remaining server to handle my message traffic load (really one half of a server – 2 cores – but you can't do that), so I need a minimum of 3 HT servers.

How many servers is this all together? We have 6 Mailbox servers, 4 Client Access servers, and 3 Hub Transport servers. That would be 13 servers in total. Not too bad for 24,000 users, right? What are the memory requirements of these three server types? CAS guidance is 2 GB per core, and HT guidance is 1 GB per core. So we have 6 servers with 48 GB (Mailbox), 4 servers with 8 GB (CAS) and 3 servers with 4 GB (HT). Our relatively simple environment here has 3 different server types, 3 different memory configurations, 3 different OS configurations, 3 different Exchange configurations, and 3 different patching/maintenance plans. Simple? I think not.

Multi-Role Servers – An Example

Now, using the same server, the same processor, the same everything as above, we'll simply change the calculator to multi-role. On the "Input" tab, I change the multi-role switch ("Server Multi-Role Configuration (MBX+CAS+HT)") to "Yes". Now, over to the "Role Requirements" tab, and … WHOA!!! Big red blotch on my spreadsheet! What does that mean? Can't be good. Once again, if we look at the comment that the sadistic individual who wrote the calculator has left for us, we can see that a multi-role server in a mailbox resiliency solution should not have 40% or higher utilization for the Mailbox role. This is because we have the Client Access and Hub Transport roles on the server as well, and they have processor requirements. What we basically do here is allocate half of our processor utilization to the CAS and HT roles in this situation. So, let's go back to the 8 cores per server using a SPECint2006 rate value of 297. That change gets us back into the right utilization range. Looking at the "Role Requirements" tab again, we now have a "Mailbox Role CPU Utilization" of 36%. Since the maximum we want is 39%, that is a decent utilization of our hardware.
The other change we see is that we were bumped from 48 GB of RAM per server to 64 GB, which is another cost impact to the price of the servers, but the bump from 48 GB to 64 GB is not nearly as expensive as it was a few years ago, and I see a lot of customers purchasing 64 GB if not more RAM in almost all of their servers (I actually see a lot of 128 GB machines out there). Now, the great thing about this is the fact that we are down to 6 servers total. Let's think about the things that go into the total cost of ownership of servers:

- Cost of hardware: This will be based on the servers themselves and the vendor we select, but the cost of one additional 4-core processor per server and 16 GB of additional RAM per server vs. the cost of 7 additional servers shouldn't be hard to figure out.
- Cost of floor space or rack space: Every time you add another server into the racks, you have to pay for that in some way. Quite a few of my customers are quite space constrained in their datacenters, and adding new servers is difficult or impossible. Server consolidation is a huge push for almost all of my customers, and consolidating 20 or 30 servers down to 6 has "more win" for the project than consolidating the same number of servers down to 13. Also, by having both processor sockets in our servers populated, we have a better "core density" for the amount of rack space we take. Removing the processor from the socket on this one server doesn't change the amount of physical space that the server takes!
- Cost of HVAC: Obviously more processor and more RAM means more heat generated by each of the 6 Mailbox servers we have, but having 7 additional servers would generate even more heat than the additional processors and RAM would. Easy to see the cost savings here.
- Cost of maintenance: Once again, this is an easy win. 6 servers will be easier and cheaper to manage than 13 servers would, especially when you consider that those 6 servers are identical rather than having 3 separate server builds and configurations as in our separate role design.

Something to keep in mind here is the fact that in almost every scenario, the OpEx over the life of a solution (typically 3-5 years) is significantly more impactful than the CapEx. I think that the bottom line here is that there are a lot of reasons to start with the multi-role server as your "first cut" architecture, not the least of which is the simplicity of the design compared to the complexity introduced by having each role separated.

When is Multi-Role Not Right for You?

There are very few cases where multi-role servers are not the appropriate choice, and quite often in those cases, manipulating the number of servers or the number of DAGs slightly will change things so that multi-role is possible.

Can You Use Multi-Role in a Virtualized Environment?

Let's think about the case with virtualization. Our guidance around virtualization is that a good "rule of thumb" overhead factor is 10% for the user load on a server if you are virtualizing. In other words, a given guest machine will be able to do about 10% less work than a physical machine with the exact same physical processor. Or, another way to look at this: each user will cause about 10% more work in the guest than they would in a physical implementation. So, I can use a 1.1 "Megacycle Multiplication Factor" in my user tier definition on the "Input" tab, and that just puts me at 39% processor utilization for this hardware.
Of course, we haven't taken into account the fact that we haven't allocated processors for the host, and the fact that we have to pay extra licensing fees to VMware if we want to run 8-core guest OSes. If we go back to our 4-core example, set our "Megacycle Multiplication Factor" to 1.1, and say that we have 2 DAGs rather than 1, our processor utilization for these 4-core multi-role servers goes to 38%, making this a reasonable virtualized multi-role solution. Other customers might decide to split the roles out, possibly with, say, 6 Mailbox role servers virtualized, and CAS and HT collocated on 6 more virtualized servers. Either of these solutions is certainly supported, but we now would have twice as many servers to monitor and manage as we would with our physical multi-role servers. And as we saw above, more servers will typically mean more cost in your patching OpEx. What we're really trying to do here is force a solution (virtualization) when the requirements don't drive us in that direction in many cases. Don't get me wrong – I fully support and recommend a virtualized Exchange environment when the customer's requirements drive us to leverage virtualization as a tool to give them the best Exchange solution for the money they spend, but when customers want to virtualize Exchange just because they want every workload virtualized, that is trying to shove a round peg into a square hole. Note that the next installment of Robert's Rules will be around virtualized Exchange, when it is right, and when it is wrong.

Can You Use Multi-Role on Blade Servers?

I see this as exactly the same question as the virtualization question above. Certain sizing situations might take the proposed solution out of the realm of what blade servers can provide. For instance, I have seen cases where the hardware requirements of the multi-role servers are quite a bit more than a blade server can provide (think 16, 24 or 32-core machines with 128 GB of RAM or similar). Generally, this means that you have a very high user load on those servers, and you could reduce the core count or memory requirements by reducing the number of users per server. As we showed above, you can do this by adding more servers or more DAGs.

Multi-Role and Software Load Balancing

We all know that the recommendation for Exchange 2010 is to use hardware load balancers, but the fact is that using Windows Network Load Balancing (WNLB, a software load balancer that comes with Windows Server 2008 and Windows Server 2008 R2 as well as older versions) is supported, and some customers will use it. Hardware load balancers cost money. Sometimes there is no money for the purchase of hardware load balancers for smaller implementations – although there are some very cost effective hardware load balancers from Microsoft partners out there today that could get you into a highly available hardware load balanced solution for approximately US$3,000.00, so I would argue that there are very few customers that couldn't afford that. The single real limitation of multi-role is the fact that WNLB is not supported on the same server running Windows Failover Clustering. Although Exchange 2010 administrators never have to deal with cluster.exe or any other cluster admin tools, the fact is that DAGs utilize Windows Failover Clustering services to provide a framework for the DAG, utilizing features such as the quorum model and a few other things.
This means that if you have a DAG in a multi-role architecture, and you need load balancing (as all highly available solutions will), you will be forced to purchase a hardware load balancer. In some organizations where the network team has standardized on a given load-balancing appliance vendor and their branch offices need Exchange deployed in high availability at the branch office locations, it is possible that they will not be allowed to purchase another vendor's small-load, small-cost load-balancer hardware. In cases like this where WNLB is required, a multi-role implementation will not be possible.

Sizing Multi-Role Deployments

As we have discussed already, sizing of multi-role Exchange Server 2010 deployments is not much different than the sizing you do today. The Mailbox Role Calculator is already designed to support that configuration. Just choose "Yes" for the second setting in the top left of the "Input" tab (at least that is where it is in version 14.4 of the calculator), and make sure that your processor utilization isn't over 39%. Note that the 39% is really based on the fact that Microsoft recommendations are built around a maximum processor utilization of 80%. In reality, that is an arbitrary number that Microsoft chose because it is quite safe – if we make recommendations around that 80% number and for some reason you have 10% more utilization than you expected, you are still safe because in the worst case you will then have 90% utilization on those servers. Some of our customers choose 90% as their threshold, and this is a perfectly acceptable number when you know what you are doing and what the risks are around your estimation of how your users will utilize the Exchange servers and the processors on those servers. The spreadsheet will show red flags, but those are just to make sure you see that there is something outside of our 80% total number.

Is there an Impact of not Separating the Roles?

There are a few things that are different in the technical details of how Exchange works and how you will manage Exchange when you consider having multi-role or separated role implementations. For instance, when the Mailbox role hosting a mailbox database and the Hub Transport role coexist on a server within a DAG, the two roles are aware of this. In this scenario, when a user sends a message, if possible, the Mailbox Submission service on the Mailbox role on this server will select a Hub Transport role on another server within the Active Directory site to submit the message for transport (it will fall back to the Hub Transport role collocated on that server if there is not another Hub Transport role in the AD site). This allows Exchange as a system to remove the single point of failure where a message goes from the mailbox database on a server to transport on the same server and then the server fails, thus not allowing shadow redundancy to provide transport resilience. The reverse is also true – if a message is delivered from outside the site to the Hub Transport that is collocated with the Mailbox role holding the destination mailbox, the message will be redirected through another Hub Transport role if possible to provide for transport redundancy. For more information on this, please see Hub Transport and Mailbox Server Roles Coexistence When Using DAGs in the documentation.

Another thing to note is that when you design your DAG implementation, you should always define where the file share will exist for your File Share Witness (FSW) cluster resource.
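If you want to make that choice explicit rather than letting Exchange pick a location (the next paragraph describes what happens when you leave it out), you can name the witness server and directory when the DAG is created. A minimal sketch follows; the server name and path are placeholders for whichever non-DAG file server you designate.

```powershell
# Create the DAG with an explicitly chosen witness server and directory rather
# than letting Exchange default to a Hub Transport server in the site.
# FS01.contoso.com and the path are placeholders for a server that is not a DAG member.
New-DatabaseAvailabilityGroup -Name DAG1 `
    -WitnessServer FS01.contoso.com `
    -WitnessDirectory "C:\DAG1\FSW"

# If the witness is not an Exchange server, the Exchange Trusted Subsystem group
# needs to be a member of the local Administrators group on that server.
```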
When you create the DAG with the New-DatabaseAvailabilityGroup cmdlet, you can choose to not specify the WitnessDirectory and WitnessServer parameters. In this case, Exchange will choose a server and directory for your FSW, typically on a Hub Transport server. Well, in the case where every Hub Transport server in the Active Directory site is also a member of that DAG, this introduces a problem! Where do you store the File Share Witness? My solution to that is to have another machine (it could be virtual, but if so, it must be on separate physical hardware from your DAG servers) that can host a file share. This could be a machine designated as a file share host, or a printer host, or similar. I wouldn’t recommend a Global Catalog or Domain Controller server because as an administrator, I don’t want file shares on my Domain Controllers for security reasons and I don’t want to grant my Exchange Trusted Subsystem security group domain administrator privileges! There are also some management scenarios that you might need to account for. For instance, when you patch a multi-role server, you might have to change your patching plans compared to what you are doing with your Exchange 2003 or 2007 implementation today. For more information on this, please see my Patching the Multi-Role Server DAG post. You will configure Exchange the same way whether you have the roles separated or you have a multi-role deployment. The majority of the difference comes down to what we have already discussed: sizing of the servers, simplicity in the design of your implementation, simplicity of managing the servers, and in most cases cost savings because of fewer servers and simpler management. Choosing not to deploy multi-role Exchange 2010 architectures introduces complexity to your system that is most likely not required, and that introduces costs and raises risk to your Exchange implementation (remember that every complexity raises risk, no matter how small the complexity or how small the risk).

Conclusion

The conclusion here is the same thing you will hear me saying over and over again. You should always start your Exchange Server 2010 design efforts around the simplest solution possible – multi-role servers with JBOD storage (no RAID on direct attached storage). Only move away from the simplest solution if there is a real reason to do so. As I said, I always start design discussions with multi-role, and that is the recommended solution from the Exchange team. If that is the recommendation, then why is Robert’s Rules not using multi-role?!?! When I first designed this blog series, the idea was that I wanted to make sure I show how to do load balancing. At the time, we didn’t have easily available virtualized versions of hardware load balancers, at least not on the Hyper-V platform. Since starting the series, Kemp has made a Hyper-V version of their load balancer available, and I am going to show that in the blog. Ross Smith IV has been telling me over and over that there is little value in showing Windows Network Load Balancing since we strongly recommend a hardware load balancer in large enterprise deployments. SO… I’m going to redesign my Exchange 2010 implementation at Robert’s Rules to utilize the multi-role environment. Before writing the next post (on virtualization of Exchange, as I said above), I will revise the scenario article with a new image and utilization of a 4-node DAG – 2 nodes in the HSV datacenter, 2 nodes in the LFH datacenter. And once again, thanks to all of you for reading these posts!
Keep up the great questions and comments!! Robert Gillies

Exchange 2007 Autodiscover and certificates
Update 10/4/2007: Since this post has been published, we've updated the Exchange 2007 Autodiscover Service whitepaper to include this information. Please consult the whitepaper for the most up-to-date information. In Exchange 2007, we introduce the idea of the Autodiscover service. This service allows your Outlook 2007 clients to retrieve the URLs they need to gain access to the new web services offered by Exchange 2007. These web services (OAB, Unified Messaging, OOF, and Availability) provide a good portion of the new functionality available to Outlook 2007. For more details about the Outlook 2007 features that light up based on the Exchange server version, please see Outlook 2007 feature matrix based on Exchange Server version. For domain-joined clients, Outlook is able to find the Autodiscover service using a service connection point (SCP). The SCP is an Active Directory entry specific to each Client Access server. When Outlook 2007 is able to securely connect to the domain and read this entry from Active Directory, it can connect directly to this URL. Once connected to the Autodiscover end-point, the Autodiscover service provides the client with the URLs of the other Exchange web services. For non-domain-joined clients or clients that are not able to directly access the domain, Outlook is hard-coded to find the Autodiscover end-point by looking up either https://company.com/Autodiscover/Autodiscover.xml or https://Autodiscover.company.com/Autodiscover/Autodiscover.xml (where company.com is the portion of the user's SMTP e-mail address following the @ sign). This means that to service clients in this scenario we must provide connectivity to one of these URLs. On the surface this should not be hard; but this connection is made over SSL and requires a valid certificate.

SSL and Certificates

The communication to the Autodiscover end-point and subsequent communications to the services all occur over SSL. This requires that we have valid certificates (signed by a trusted CA, with a Subject Name matching the URL the user is connecting to, and not expired) for the Autodiscover connection point and the services URLs. By default the services URLs are all variations of https://servername. When you install a Client Access server, Exchange Setup creates a self-signed certificate that meets validity tests for domain-joined clients. When a client connects to a server over SSL and the server presents a self-signed certificate, the client will be prompted to verify that the certificate was issued by a trusted authority. The client must explicitly trust the issuing authority. Long-term use of this self-signed certificate is not recommended by Microsoft. Instead, it should be replaced as soon as possible with a certificate from a trusted commercial Internet CA or one issued by a trusted internal PKI infrastructure. The problem is that we must be able to resolve Autodiscover.company.com or company.com with a trusted certificate in addition to other externally published Exchange services like OWA. Most of our smaller customers currently only have an Exchange-related certificate for use with OWA. This certificate usually has a Subject Name like mail.contoso.com. This continues to work for OWA in Exchange 2007, but now we also have to handle an SSL connection to Autodiscover.contoso.com. (Microsoft recommends using Autodiscover.company.com for the Autodiscover service.)
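If you are not sure which names your current certificate actually covers, a quick look from the Exchange Management Shell can help. This is a minimal sketch; the properties listed are the commonly used ones and the output will vary by environment:

# List the certificates Exchange knows about and the names each one covers
Get-ExchangeCertificate | Format-List Thumbprint, Subject, CertificateDomains, Services, NotAfter

If the name clients will use for Autodiscover does not appear in CertificateDomains on the certificate bound to IIS, those clients will see certificate warnings.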
Handling both names requires us to either resolve mail.contoso.com and Autodiscover.contoso.com to separate IP addresses, or to provide both names in the same certificate.

Multiple names in one certificate

Microsoft recommends that you provide both names in the same certificate. This is done through something called a Unified Communications certificate, also known as a Subject Alternative Name certificate. There are a number of vendors that can provide you with this type of certificate (we covered experience with one of them here). The advantage of a Unified Communications certificate is that it makes configuration of Autodiscover much easier; the downside for our customers is that currently, this type of certificate can cost up to 10 times more than any existing single-name certificates that they may already own. Configuration with this type of certificate is very easy. Here is a general outline of the process:

- Get a Unified Communications certificate for your environment with Autodiscover.company.com and any other names you need (e.g. mail.company.com, www.company.com, etc.)
- Apply the certificate to the CAS server
- Configure your internal SCP to point to Autodiscover.company.com
- Configure your Internal and External Service URLs to point to mail.company.com
- Make sure that your configured URLs resolve via DNS to the expected IP address of the CAS server

Alternative methods

If you're not able to get a Unified Communications certificate, there are two other methods you can use to get the same level of functionality. Both of these methods are supported by Microsoft but are harder to implement and manage over the longer term and thus could have a higher total cost of ownership depending on your environment. Both also require that you have two external IP addresses available.

Method 1: Multiple Certificates

To work around the need for a Unified Communications certificate, you can get two certificates for your environment – your existing mail.company.com certificate and a new autodiscover.company.com certificate. As long as we can serve up the correct certificate at the correct time, we're able to connect with no issues. Doing this simply requires that we set up two virtual servers on the CAS server. One services Autodiscover.company.com on one IP address and the other services the remaining web services on mail.company.com using a different IP address. Here's an outline of this setup process:

- Get a certificate for mail.company.com and one for autodiscover.company.com
- Create a new virtual server in IIS on the Client Access server
- Create a new Autodiscover virtual directory in the new virtual server and remove the old one
- Assign separate IP addresses and certificates to each virtual server
- Configure your internal SCP to point to Autodiscover.company.com
- Configure your Internal and External Service URLs to point to mail.company.com
- Make sure that your configured URLs will resolve internally and externally via DNS to the expected IP address for each of the services

In this configuration, internal domain member clients find the SCP to make the connection to Autodiscover. External clients find Autodiscover.company.com using DNS to make the connection to Autodiscover. In both cases, the clients are referred to mail.company.com for the actual Exchange services.

Method 2: HTTP Referral

This option allows us to redirect the client to another URL for Autodiscover. The major downside of this configuration is that users are prompted in Outlook to confirm this redirection.
There is a check box on the prompt to not get prompted again for this URL to limit the impact for users. To implement this configuration we still have to use two IP addresses and two virtual servers, but we only need one certificate. Here's an outline of this setup process:

- Create a virtual server on the Client Access server with a new IP address
- Create a stub Autodiscover virtual directory and Autodiscover.xml file through IIS
- Redirect the Autodiscover.xml file to mail.company.com
- Configure your internal SCP to point to mail.company.com
- Configure your Internal and External Service URLs to point to mail.company.com
- Make sure that your configured URLs will resolve via DNS to the expected IP address of the Client Access server

In this configuration, domain member clients get the SCP and connect to the Autodiscover service using that URL. External clients connect to Autodiscover.company.com over HTTP and get a referral to mail.company.com. External Outlook clients are prompted the first time they make this connection to verify they are being redirected to a trusted URL; once that is accepted, Outlook connects to mail.company.com for Autodiscover.

Conclusion

There are multiple ways to set up Exchange 2007 to support Outlook 2007 clients and the new services URLs. You have to decide what method is best for your environment, technical skill level, and cost.

Implementation: Unified Communications certificate
Pros: Easy to implement; supports all client connections; future proof; Microsoft recommended best practice
Cons: Higher cost of the Unified Communications certificate

Implementation: Multiple certificates and websites
Pros: Lower cost; both sites are secured with SSL
Cons: Requires an additional public IP address; somewhat complex to set up and maintain

Implementation: Single certificate with redirect
Pros: Can be done with only your existing certificate; no additional cost
Cons: Requires an additional public IP address; non-domain-joined users receive a prompt when they first connect; somewhat complex to set up

Additional resources / related reading:

- White Paper: Exchange 2007 Autodiscover Service
- Unified Communications Certificate Partners for Exchange Server and for Communications Server
- Set-ClientAccessServer cmdlet help
- How to Configure SSL Certificates to Use Multiple Client Access Server Host Names
- Deployment Considerations for the Autodiscover Service: Using Multiple Sites for Internet Access to the Autodiscover Service
- Deployment Considerations for the Autodiscover Service: Hosted Environments and the Autodiscover Service
- How to Configure Exchange Services for the Autodiscover Service

There's an excellent script at http://www.exchangeninjas.com/set-allvdirs that can assist you in setting up all of the URLs and the SCP without having to type out all of the commands. This script is provided as is and use of the script is solely at your own risk. Matthew Byrd

Modern public folder deployment best practices
Introduction

Since the release of Microsoft Exchange Server 2013, we have heard questions regarding the sizing and deployment of modern public folders. It is important to plan public folder migrations so that the client experience remains good. In this blog post, we will discuss some best practices and recommendations regarding modern public folder deployment as well as various related concepts. We will assume that you are already familiar with basic modern public folder concepts, so we will not go there (but might link to relevant articles). There is a lot here as we go through several examples. Use it as a reference!

How clients connect to the public folder hierarchy mailbox

When the user launches the Outlook desktop client on Windows or Mac, the client contacts the Autodiscover service to determine connection settings for the user’s primary mailbox and their archive mailbox if they have one. During the initial response, the Autodiscover service may indicate there are public folders available in the environment by including an XML element named <PublicFolderInformation>. This element will contain the SMTP address of a public folder mailbox within the environment. An additional Autodiscover request will be made to request connection settings for the SMTP address of the public folder mailbox. To provide a value for this element during the initial request/response interaction, the Autodiscover service calls a function named GetPublicFolderRecipient. This function gathers information about the public folder mailboxes in the environment that are available for serving hierarchy connections. In most cases the GetPublicFolderRecipient function will (randomly) pick a public folder mailbox from the available list to be handed over to the Autodiscover service, which in turn gets returned to the client. Another possibility is that the user’s mailbox has a static DefaultPublicFolderMailbox assigned. When a mailbox has a default public folder mailbox assigned, the client will always attempt to connect to this public folder mailbox for hierarchy connections. If that public folder mailbox is not available for some reason, the client will fail to reach the public folder infrastructure and will not fall back to a random public folder mailbox. Something to keep in mind! Note: In the case of Outlook on the web (we’ll call it OWA for short) clients accessing public folders, public folder mailbox access does not rely on the Autodiscover service even though the same selection process logic is used. In other words, OWA uses functionality similar to what the Get-Mailbox cmdlet uses to get the list of available public folder mailboxes the user can utilize. Even if Autodiscover is offline for some reason in the environment, you may see users successfully accessing public folders via OWA, but not Outlook clients.

When are new mailboxes eligible for this process?

By default, all public folder mailboxes deployed in the environment can serve hierarchy connections to the clients. However, immediately after creation a new public folder mailbox will not be used by Autodiscover. This is because the newly created public folder mailbox has not yet completed an initial full hierarchy sync from the primary hierarchy public folder mailbox. This logic is automatically calculated and is reflected in the parameter IsHierarchyReady. By default, the value will be set to $False.
If this value remains $False, the GetPublicFolderRecipient function will not return that public folder mailbox to the Autodiscover service, as it is assumed to contain an incomplete copy of the hierarchy (a user connecting to it would not have a complete view of the public folder infrastructure). Once the value of the newly created public folder mailbox’s IsHierarchyReady has changed to $True, the Autodiscover service will be able to hand it out to clients. Under normal conditions this initial full hierarchy sync should start automatically within 24 hours of the public folder mailbox being created. You may also invoke the hierarchy sync manually if you so choose by using the below command:

Update-PublicFolderMailbox -Identity "Public folder mailbox name" -SuppressStatus -Fullsync -InvokeSynchronizer

The time it takes for the full hierarchy replication to complete depends on several factors such as (but not limited to) network speed, number of public folders, geographical location of PF mailboxes, and connection load on the primary hierarchy mailbox. The initial hierarchy sync happens using a pull operation model. The secondary public folder mailboxes will poll the primary hierarchy public folder mailbox to fetch the hierarchy information. Once the initial FullSync is complete, the value of IsHierarchyReady will change to True automatically and the public folder mailbox will be available to serve the hierarchy information to requesting clients. To confirm, the following command can be run:

Get-Mailbox -PublicFolder | fl name,ishierarchyready

Note: While the IsHierarchyReady value can be manually forced to True using PowerShell, this is not supported. Doing so can cause the public folder mailboxes to serve an incomplete hierarchy to clients. The recommendation is to wait for the initial sync to complete or to manually invoke the hierarchy sync to get the hierarchy replicated to the new public folder mailboxes. Once the initial hierarchy sync is complete for the public folder mailbox, subsequent hierarchy syncs happen using the push model, where the primary hierarchy mailbox will push the changes to the secondary hierarchy public folder mailboxes every 10 minutes. Another setting an administrator has at their disposal is the attribute IsExcludedFromServingHierarchy. This attribute can be set to $False (default) or $True using the Set-Mailbox cmdlet; when set to $True, it will prevent the mailbox from being used by Autodiscover or OWA to serve hierarchy connections to clients. Even after IsHierarchyReady is automatically set to $True, the public folder mailbox will be excluded from Autodiscover and OWA hierarchy usage if IsExcludedFromServingHierarchy is set to $True. This option is useful when you want a public folder mailbox utilized only for the content of public folders, and it can be set immediately after the public folder mailbox is created. Note: If DefaultPublicFolderMailbox is populated on a user mailbox, it will override the $True value of IsExcludedFromServingHierarchy (on the mailbox they are connecting to) and will allow that user to connect to the public folder mailbox specified in DefaultPublicFolderMailbox for hierarchy. We will discuss this later in the post.

Scenario 1: What happens when the default public folder mailbox value has not been set on the user mailbox?

Let's say we have 50 public folder mailboxes in their default state, all of which have sync’d hierarchy, and there are 20,000 users who try to access public folders.
The Autodiscover service will provide a random public folder mailbox to each user to service hierarchy requests. The specific public folder mailbox being returned to the client can change randomly, or if Outlook is closed and reopened, based on what the GetPublicFolderRecipient function returns to Autodiscover, which in turn returns the data to the client. On the client side, the public folder mailboxes being accessed will appear under the connection status in Outlook. The first public folder GUID value listed there is a random primary public folder hierarchy mailbox. The remaining public folder GUIDs are populated and returned when the user tries to access individual public folders which reside within those public folder mailboxes.

Scenario 2: Restricting clients to contact a specific public folder mailbox for hierarchy

It is possible to override the behavior of random selection of the default public folder hierarchy mailbox. First, you need to confirm that the public folder mailbox has IsHierarchyReady populated with a value of $True, which confirms that it has completed its initial full hierarchy sync with the primary public folder mailbox. Run the command:

Get-Mailbox -PublicFolder "Public Folder mailbox name" | FL Name,ExchangeGuid,*Hierarchy*

Once the above is confirmed, the next step is to assign the desired public folder mailbox as the DefaultPublicFolderMailbox on the user mailbox. In our example this would be accomplished by running the below command:

Set-Mailbox "user mailbox identity" -DefaultPublicFolderMailbox "PF-MBX-002" -Verbose

Now, when the user opens Outlook, Autodiscover provides the DefaultPublicFolderMailbox’s SMTP address in the <PublicFolderInformation> XML element and the client then performs a second Autodiscover request to learn how to connect to this public folder mailbox. When we check the Outlook connection status, it will list the assigned public folder mailbox.

Scenario 3: Setting the default public folder mailboxes close to the user's location for better client experience

Our customers deal with communication links that may be highly latent and expected to be inoperable for periods of time. For demonstration purposes let’s consider our favorite company called Contoso, which has the following configuration. The three company sites are listed below:

- Hyderabad: India with 23,000 mailboxes. Local servers have 10Gbps LAN connectivity and there are three public folder mailboxes in this site.
- Adelaide: Australia with 5,000 mailboxes. Local servers have 10Gbps LAN connectivity and there is one public folder mailbox in this site.
- A cruise ship with 500 mailboxes for employees on board. The local server has 1Gbps LAN connectivity and there is one public folder mailbox per ship.

To maintain communications with their ships at sea, Contoso has established a 384Kbps satellite link to each ship and has also installed a dish at their Hyderabad site. All network routes to/from ships are routed via Hyderabad’s own satellite link. Contoso has also purchased 1Gbps of bandwidth on a submarine cable link to Australia. All WAN routes to/from the Australia site traverse this link.

Challenges with the above deployment

If we were to simply deploy modern public folders with all the default values, we could see some interesting things happen. For example, all users could be offered any of the five public folder mailboxes for hierarchy regardless of their location.
The worst-case scenario here would be a ship-based user trying to access PFMBX-AUS-001 for hierarchy information or an Adelaide-based user trying to access PFMBX-SHIP-001 for hierarchy information. In either of those scenarios we see client traffic traversing round-trip not only along the longest network path, but also the one with the least bandwidth and most latency. In this extreme example, you would more than likely have clients calling the help desk reporting the Outlook RPC popup after they attempted to expand the public folder tree. Considering the above problems, we recommend administrators with similarly decentralized and complex network topologies consider configuring user mailboxes to access public folder mailboxes located within local or geographically closer sites as their default public folder mailbox.

What would work better?

In an environment like the one in the example, it may make sense to set IsExcludedFromServingHierarchy to $True for the public folder mailboxes on all cruise ships and in Australia. This will remove them from being returned as valid public folder hierarchy endpoints for Autodiscover and OWA, leaving only the three well-connected Hyderabad-based public folder mailboxes set up for automatic discovery. Additionally, the DefaultPublicFolderMailbox attribute at the mailbox level should be utilized for employees based on the cruise ship or the Australian continent to ensure they always attempt to connect to a public folder mailbox that makes sense based on their geolocation. One caveat here is that if a user from the Australia office were to visit the cruise ship (for work purposes sadly and not fun in the sun!), their client would continue to connect back to Australia for their PF hierarchy connection. In addition, the Hyderabad office with 23,000 mailboxes would need to monitor user concurrency to determine whether they need additional public folder mailboxes over time to stay within supported user concurrency limits.

Things to remember and plan during the deployment:

- Understand the company topology completely before making any decision to deploy public folders for offices located in different geographical locations. Correct deployment of public folders using the recommended approach will make life easier for administrators and end users.
- Make use of the attribute IsExcludedFromServingHierarchy by setting it to $True when it makes sense to exclude public folder mailboxes from being discovered by Autodiscover for providing hierarchy information to clients and avoid any unwanted connections.
- The DefaultPublicFolderMailbox attribute at the mailbox level should be considered when you need to ensure the users in less-connected sites connect to public folder mailboxes close to their geographical location for hierarchy information. Misconfiguration can lead to serious issues such as latency in accessing public folder information and a poor end user experience.
- No more than 2,000 active connections to the same public folder mailbox are currently supported at any point in time. This will require advance planning to ensure that the public folders being heavily used by clients are distributed across public folder mailboxes which are, in turn, close to the users' geographical site for better access and experience if necessary.
- You can add one or more additional Hierarchy Only Secondary Public Folder Mailboxes (HOSPFM) and Content Only Secondary Public Folder Mailboxes (COSPFM) depending upon the geographical location and identification of commonly used public folder mailboxes by the end users, for a better end user experience. Yeah, we like our acronyms. Yup, we just made that up.

What are those (HOSPFM) and (COSPFM), and why do we require them?

- Hierarchy Only Secondary Public Folder Mailbox (HOSPFM): does not contain public folder content and only serves hierarchy. IsExcludedFromServingHierarchy is set to False.
- Content Only Secondary Public Folder Mailbox (COSPFM): has public folder content in it but is excluded from serving hierarchy. IsExcludedFromServingHierarchy is set to True.

There are two types of connections made to a public folder mailbox when it is accessed:

- Hierarchy connection
- Content connection

Public folder mailbox connections (both hierarchy and content) should not exceed 2000 to remain within the support boundaries. Given this requirement, you should plan to have enough public folder mailboxes serving hierarchy and/or content so that you maintain fewer than 2000 active and simultaneous client connections per secondary public folder mailbox. The primary public folder mailbox must be excluded from providing hierarchy to clients (IsExcludedFromServingHierarchy parameter set to True). This allows the primary public folder mailbox to spend its time maintaining the hierarchy and dealing with hierarchy replication tasks. Overloading this public folder mailbox with client connections can in turn lead to performance and reliability issues with your PF hierarchy.

How to move the public folder data close to the user’s geographical location

Consider another company also called Contoso, which has many offices around the world and modern public folders deployed in the environment. Sarah is a user whose mailbox is in a datacenter in India, and she frequently works with public folders. There is another large group of users who also frequently work with the same set of public folders, but they are in a different geographical location, in Australia. The public folders being accessed are in the India site, close to Sarah's geographical location, so she has a better experience when accessing public folders. In contrast, when Australia users try to connect to public folder mailboxes, the local hierarchy public folder mailbox in their datacenter will provide the content location for the required public folders. Users will initiate a connection to the actual public folder located in India holding the content for the public folder. Since the actual folder content is in a different geographical location, the connection may not be as performant for the Australian group of users, resulting in user frustration. This deployment is not recommended. The focus should be on identifying the public folders most frequently used by a common set of users, and moving the public folders with that content closer to those users’ location. In this scenario, the content should be moved closer to the larger group of users in Australia. Note: Moving public folders only moves the physical contents of the public folder; it doesn't change the logical hierarchy or layout of folders in the folder tree.
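Before moving content, it can help to confirm which public folder mailbox currently holds it. A minimal sketch, using a hypothetical folder path (ContentMailboxName is the property that reflects where the folder's content lives):

# Check which public folder mailbox stores the content for a given folder
Get-PublicFolder "\Sales\Australia Reports" | Format-List Name, ContentMailboxName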
To move the public folder content, run the command:

New-PublicFolderMoveRequest -Folders "\path of the public folder to be moved" -TargetMailbox "target public folder mailbox name"

Note: To verify that the PublicFolderMoveRequest is complete, the command Get-PublicFolderMoveRequest can be run. Like mailbox move requests, a completed public folder move request must be removed before any other public folder can be moved to another public folder mailbox. To do this, run the command Remove-PublicFolderMoveRequest. If any other public folder move request is initiated without removing the old request, it will fail with an error. To remove the existing PublicFolderMoveRequest, run the command:

Get-PublicFolderMoveRequest | Remove-PublicFolderMoveRequest

Note: If a parent public folder and its subfolders need to be moved to another public folder mailbox, this can be done using the Move-PublicFolderBranch.ps1 script, located in the \scripts folder. For more details, see: Move a public folder to a different public folder mailbox. Once the public folder content has been moved to a different public folder mailbox, users in the Australia site accessing the public folder will be updated by the local public folder mailbox hierarchy connection with the folder’s new content location and will connect to the local public folder mailbox. To continue with our example, Sarah will continue to connect to her local public folder mailbox for hierarchy (which has been set by the administrator), but will then get her content from the Australia datacenter. Even though the experience may not be as great for that one user, Sarah can add frequently used public folders to Favorites using the Outlook client or OWA to help with latency issues. Looking at the above example, it becomes very important to determine network latency and bandwidth before you start deployment of public folder mailboxes in a geographically dispersed environment, to avoid latency issues when they are accessed by end users. In such situations, the recommendation is to use tools like Netmon, which can help in determining the connections being made to public folder mailboxes. There is a great tool written by Mark Russinovich called Psping, which can be helpful in determining round-trip latency. Based on the results, customers can decide whether the current network is suitable for their environment or if there are changes that need to be made.

Summary: deployment considerations

Considerations when deploying public folder mailboxes in the organization, to ensure they are protected and readily available to clients:

- Public folder mailboxes, both hierarchy and content, should be protected by placing them in databases protected by multiple copies in a Database Availability Group. By doing this, mailboxes will remain protected in case of any outage and be available to end users.
- There should be no public folders hosted within the primary public folder mailbox. This way we dedicate the primary public folder mailbox to specifically focus on its job of replicating hierarchy changes to other public folder mailboxes.
- You should exclude the primary public folder mailbox from serving hierarchy to clients. This is done by setting IsExcludedFromServingHierarchy to $True.
- The recommendation is to activate database copies hosting public folder mailboxes on mailbox servers which are geographically located close to the client location.
- The general recommendation is to have one public folder hierarchy mailbox for every 2000 users accessing public folders.
- Additional hierarchy-only public folder mailboxes can always be created to divide the connection load among users to ensure that the 2000-connection limit is not reached.
- Plan and create more secondary hierarchy public folder mailboxes and content mailboxes to ensure there are fewer than 2000 active and concurrent connections to public folders, and that they are close to the users' geographical location so there are no latency issues and users have a good experience.
- Since the release of Exchange Server 2016 CU3, you can make use of up to 1,000 public folder mailboxes. Of the 1,000 public folder mailboxes, 100 can be used for hierarchy (or 99 once you exclude the primary PF) and the remaining 900 can be used for content storage.

Post summary

In the above post, we have provided information on how public folder mailboxes are accessed, and the importance of correctly deploying them in a geographically dispersed environment. In upcoming posts, we will discuss topics related to public folder logging analysis, management, and quota-related information. We would like to say thanks to the Public Folders Feature Crew team for their valuable input while this blog post was being written. Special thanks to Ross Smith IV, Nasir Ali, and Scott Oseychik for reviewing this content and validating the guidance mentioned in the blog post, and to Charlotte Raymundo and Nino Bilic for helping us get this blog post ready! Siddhesh Dalvi and Brian Day