Large Mailbox Migration to Exchange Online
Migrating large mailboxes is challenging for enterprise Exchange teams, especially when mailboxes exceed 100 GB or contain extensive recoverable items. Using Exchange Messaging Records Management (MRM) to reduce mailbox size before migration can speed up moves to Exchange Online.

Why Use MRM Before a Large Mailbox Migration?

Many organizations place mailboxes on litigation hold or in-place hold, causing the recoverable items in these mailboxes to grow significantly, often exceeding the 100 GB quota in Exchange Online. Quota adjustments can be requested, allowing up to about 240 GB for the combined size of the primary mailbox and recoverable items. Still, it is common for recoverable items alone to surpass this limit.

MRM lets you move content from the primary mailbox to an archive mailbox, reducing the primary mailbox's overall size. The archive mailbox may be hosted on-premises or in Exchange Online. Setting up the archive in Exchange Online is usually simpler and reduces the need for additional mailbox migrations. Occasionally, this process pushes the archive mailbox's own recoverable items past the 240 GB cap as well; even so, creating the archive in Exchange Online remains the most efficient option, because auto-expanding archiving (covered below) can absorb that growth.

Prerequisites

- Archive mailbox created in Exchange Online
- The archive mailbox must have the correct routing domain configured as the ArchiveDomain value
- OAuth enabled in Exchange
- Auto-expanding archiving (AutoExpandingArchiveEnabled) enabled for either the mailbox or the entire organization

MRM Configuration

The required retention policy tag depends on where the data is located within the mailbox. Our primary focus is on recoverable items for mailboxes on hold, so we create a tag that moves recoverable items older than a chosen number of days to the archive:

New-RetentionPolicyTag -Name RecoverableItems_31_MoveToArchive -MessageClass * -RetentionAction MoveToArchive -AgeLimitForRetention 31.0:0:0 -Type RecoverableItems -RetentionEnabled:$True -Comment "Archive all items from the Recoverable Items over 31 days"

This tag must be added to a retention policy, and the retention policy must be assigned to the user being migrated. Once this is done, start the Managed Folder Assistant (MFA) to move items into the remote archive:

Start-ManagedFolderAssistant user@contoso.com

Note: A new retention policy may need to be created specifically for these larger mailboxes.

Speed up expanded archives

One issue with migrating large mailboxes is the delay caused by auto-expanding archives. Thankfully, this delay depends on Exchange processes that we can observe and trigger manually when needed.

The first thing to do is keep an eye on your archive mailbox size. Once it hits 90 GB, auto-expansion should kick in. To track this, check the mailbox statistics for the archive mailbox:

Get-MailboxStatistics <guid of MainArchive shard of MailUser> | fl *itemCount,*ItemSize

AssociatedItemCount  : 6
DeletedItemCount     : 290041
ItemCount            : 2
TotalDeletedItemSize : 100 GB (107,374,646,793 bytes)
TotalItemSize        : 557.2 MB (584,222,341 bytes)

The results indicate that TotalDeletedItemSize has reached 100 GB, which is the established quota limit. At this threshold, the auxiliary archive should be created the next time the managed folder assistant (MFA) runs against the mailbox.
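To make that check repeatable, here is a minimal sketch that reads the statistics and flags when the Recoverable Items size crosses a threshold. The GUID placeholder and the 90 GB trigger point come from the walkthrough above; the size parsing assumes the property renders as text, as in the sample output.

# Minimal sketch: flag when the archive's Recoverable Items size crosses the expansion threshold.
# <guid of MainArchive shard of MailUser> is the same placeholder used above; 90 GB is illustrative.
$archiveGuid    = "<guid of MainArchive shard of MailUser>"
$thresholdBytes = 90GB

$stats = Get-MailboxStatistics $archiveGuid
# TotalDeletedItemSize renders as text like "100 GB (107,374,646,793 bytes)" in remote sessions,
# so pull the raw byte count out of the parentheses before comparing.
$bytes = [int64](($stats.TotalDeletedItemSize.ToString() -replace '.*\(([\d,]+) bytes\).*', '$1') -replace ',', '')

if ($bytes -ge $thresholdBytes) {
    Write-Host ("Recoverable Items at {0:N1} GB - run Start-ManagedFolderAssistant to trigger expansion." -f ($bytes / 1GB))
}
else {
    Write-Host ("Recoverable Items at {0:N1} GB - below the expansion threshold." -f ($bytes / 1GB))
}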
Manually start the MFA to expedite this process:

Start-ManagedFolderAssistant <guid of MainArchive shard of MailUser>

Confirm that MFA has completed by checking the ELCLastSuccessTimestamp:

(Export-MailboxDiagnosticLogs -Identity <guid of MainArchive shard of MailUser> -ExtendedProperties).mailboxlog | Select-Xml -XPath "//MailboxTable/*" | select -ExpandProperty Node | ? {$_.name -like "ELC*"}

Once the auxiliary archive becomes available, Exchange begins copying data into the new mailbox; trigger MFA again to start that copy. Then verify whether any folders have been ghosted using the following steps:

$folders = Get-MailboxFolderStatistics -FolderScope RecoverableItems <guid of MainArchive shard of MailUser>
$folders | ?{-Not $_.ContentFolder -and $_.VisibleItemsInFolder} | Sort-Object LastMovedTimeStamp | ft FolderSize,LastMoved*,Content*

FolderSize  LastMovedTimeStamp      ContentFolder  ContentMailboxGuid
17.79 GB    11/28/2024 10:25:07 PM  False          <GUID of aux archive>
12.95 GB    11/28/2024 10:25:07 PM  False          <GUID of aux archive>
1.371 MB    11/28/2024 10:25:07 PM  False          <GUID of aux archive>
11.14 GB    11/28/2024 10:25:07 PM  False          <GUID of aux archive>

These folders have been copied to an auxiliary archive but have not yet expired on the MainArchive, leaving about 43 GB of storage pending release. MFA will free this space after its next run, once five days have passed since "11/28/2024 10:25:07 PM". Our monitoring speeds up the process, since MFA may otherwise take several days to finish. Five days after the LastMovedTimeStamp, manually start the MFA with the following command:

Start-ManagedFolderAssistant <guid of MainArchive shard of MailUser>

You will notice these folders shrinking and the primary archive gaining free space. If there are no ghosted folders and the mailbox is full or exceeds 90 GB of recoverable items, start MFA to trigger expansion. It may help to run MFA more than once and confirm that it completed successfully.

Conclusion

Using Messaging Records Management (MRM) ahead of a large mailbox migration reduces pressure on the primary mailbox and its recoverable items by moving older content into the archive, improving the likelihood of staying within Exchange Online limits and accelerating move performance. With the right prerequisites in place, you can actively monitor archive growth and expansion. When the archive approaches capacity, or when ghosted folders are older than five days, targeted monitoring and triggering MFA against a mailbox can accelerate expansion and free space sooner, keeping migrations on track.

- Use MRM to move Recoverable Items older than your chosen threshold into the archive before starting migrations.
- Track archive statistics (especially TotalDeletedItemSize) to anticipate auto-expansion and identify bottlenecks.
- Monitor ghosted folders and run MFA after the relevant LastMovedTimeStamp interval to accelerate cleanup.
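To tie the last two takeaways together, here is a minimal sketch that looks for ghosted Recoverable Items folders older than five days and kicks off MFA when any are found. The GUID is the same placeholder used above, and the five-day wait comes from the walkthrough.

# Minimal sketch: release space held by ghosted Recoverable Items folders once they are old enough.
$mainArchive = "<guid of MainArchive shard of MailUser>"

$ghosted = Get-MailboxFolderStatistics $mainArchive -FolderScope RecoverableItems |
    Where-Object { -not $_.ContentFolder -and $_.VisibleItemsInFolder }

# Only folders whose LastMovedTimeStamp is more than five days old are eligible to be expired.
$stale = $ghosted | Where-Object { [datetime]($_.LastMovedTimeStamp) -lt (Get-Date).AddDays(-5) }

if ($stale) {
    $stale | Format-Table FolderSize, LastMovedTimeStamp, ContentMailboxGuid
    Start-ManagedFolderAssistant $mainArchive
}
else {
    Write-Host "No ghosted folders older than five days yet; check again later."
}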
Optimizing Exchange Online PowerShell

The Exchange Online PowerShell module is a powerful tool. As environments scale and tasks grow in complexity, performance and reliability become critical. This post takes a holistic approach to optimizing Exchange Online management and automation in four parts:

- Windows PowerShell performance tips
- Best practices that apply to all M365 PowerShell modules
- Best practices specific to the Exchange Online PowerShell module
- The future of automation

=================

General Windows PowerShell Performance Tips

It may seem obvious, but it is often overlooked: to get peak performance from any PowerShell module, you first need to optimize Windows PowerShell itself.

- Keep PowerShell updated: Always use the latest supported version of PowerShell for security, compatibility, and performance improvements. Windows PowerShell 5.1 is preinstalled on the currently supported versions of Windows; security updates and other patches are included in Windows Updates. For PowerShell 7, follow the steps here.
- Disable telemetry if not needed by setting the POWERSHELL_TELEMETRY_OPTOUT environment variable:

$env:POWERSHELL_TELEMETRY_OPTOUT = "true"

=================

Best Practices for all M365 PowerShell Modules

These best practices are vital for, but not specific to, Exchange Online PowerShell. In other words, although I've used Exchange Online cmdlets in the examples provided, all tips in this section apply to other M365-specific modules like SharePoint, Teams, or Security and Compliance PowerShell.

- Use the latest module version to benefit from performance improvements and bug fixes. For admins, establish a regular update cadence for all M365 PowerShell modules; testing new releases on local machines or management servers is ideal, as it offers flexibility and low risk if problems occur. Leverage auto-updates for automation tools, if available. For example, the Managed Dependencies feature for Azure Functions apps.
- Use service principal or app-only (sometimes called app-based) authentication for automation to avoid interactive logins and improve script reliability. See: App-only authentication in Exchange Online PowerShell and Security & Compliance PowerShell. The exact name, requirements, and configuration for app-only authentication can differ across other services, or even in our documentation, but the use case and benefits are universal across M365 services.
- Script smarter, not harder…

Parallel processing: Leverage ForEach-Object -Parallel (in PowerShell 7+) or background jobs to perform bulk operations faster.

Use -ResultSize to return only the necessary data. This is especially beneficial when querying many objects.

Get-EXOMailbox -ResultSize 100

This example retrieves only the first 100 mailboxes (rather than the default of 1,000), reducing the resources and time needed to execute.

Prioritize service-side filtering when available. Not all filters are created equal: understanding how, or more importantly where, filtering is done when using different methods can have a substantial impact on performance. Experienced PowerShell users know about pipelining with Where-Object to filter data; this is one example of client-side filtering. Most cmdlets in the various M365 PowerShell modules support the -Filter parameter, which leverages service-side (a.k.a. server-side) filtering.

Get-EXOMailbox -Filter "Department -eq 'Sales'"

This example limits results to mailboxes for the sales department and leverages service-side filtering to ensure only the data we want is returned to the client.
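If you want to see the gap for yourself, here is a minimal comparison sketch. The DisplayName pattern is illustrative, and on a large tenant the unfiltered call can take a while.

# Client-side filtering: every mailbox is returned to the client, then filtered locally.
Measure-Command {
    Get-EXOMailbox -ResultSize Unlimited | Where-Object { $_.DisplayName -like 'A*' }
}

# Service-side filtering: Exchange Online evaluates the filter and returns only the matches.
Measure-Command {
    Get-EXOMailbox -ResultSize Unlimited -Filter "DisplayName -like 'A*'"
}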
Service-side filtering is much more efficient for several reasons. A deep technical explanation is outside the scope of this post, so you can take my word for it or seek out more information for yourself; there are plenty of great, easy-to-find articles across the web on this topic.

Following the above recommendations helps ensure that we, the users (and our tools), have a solid foundation for optimal performance. Next, let's look at ways to get the best performance out of the Exchange Online module itself.

=================

Exchange Online PowerShell (EXO)

The Exchange Online PowerShell module (EXO V3+) introduced significant performance improvements, especially around how cmdlet help files are handled.

Use the Exchange Online V3 module: The latest module supports REST-based cmdlets, offering better performance and reliability. How much better and more reliable? I thought you'd never ask… From REST API connections in the EXO V3 module, the following table compares the REST API cmdlets to the deprecated remote PowerShell cmdlets and the exclusive Get-EXO* cmdlets in the EXO V3 module:

|               | Remote PowerShell cmdlets (deprecated)         | Get-EXO* cmdlets                                    | REST API cmdlets                                |
|---------------|------------------------------------------------|-----------------------------------------------------|-------------------------------------------------|
| Security      | Least secure                                   | Highly secure                                       | Highly secure                                   |
| Performance   | Low performance                                | High performance                                    | Medium performance                              |
| Reliability   | Least reliable                                 | Highly reliable                                     | Highly reliable                                 |
| Functionality | All parameters and output properties available | Limited parameters and output properties available  | All parameters and output properties available  |

Follow the guidelines from this doc. Don't skip this!! Microsoft Tech Community: Reducing Memory Consumption in EXO V3

=================

The Future! Microsoft Graph PowerShell SDK

The Microsoft Graph PowerShell SDK is the future of Microsoft 365 automation. It's modular, cross-platform, and supports modern authentication. Graph can feel overwhelming to those who are comfortable with the current PowerShell modules. If you haven't started using Graph because you aren't sure where to start, I recommend you install the Microsoft Graph PowerShell SDK and check out our aptly named "Getting started" documentation (don't look at me like that); a minimal getting-started sketch also follows the final thoughts below. Better yet, if you're a Support for Mission Critical customer, ask your Customer Success Account Manager or Customer Solution Lead about the Microsoft-led training options and learn from an expert!

If you're already using the Microsoft Graph PowerShell SDK, great! The tips outlined throughout this post can provide the same benefits with Graph.

=================

✅ Final Thoughts

Optimizing PowerShell performance isn't just about speed; it's about reliability, scalability, and resource efficiency. Whether you're using PowerShell for daily management or building and maintaining automation tools for your organization, following these guidelines should have immediate and lasting benefits.
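As promised above, here is a minimal getting-started sketch for the Graph PowerShell SDK. The scope and cmdlets shown are just one common starting point, not a prescribed workflow.

# Install the SDK for the current user and sign in interactively with a delegated scope.
Install-Module Microsoft.Graph -Scope CurrentUser
Connect-MgGraph -Scopes "User.Read.All"

# A first query: list a handful of users.
Get-MgUser -Top 5 | Select-Object DisplayName, UserPrincipalName

Disconnect-MgGraph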
SharePoint NoAccess Sites: Search Indexing and Copilot Misconceptions Guide

What is NoAccess Mode in SharePoint?

NoAccess mode is a site-level setting in SharePoint Online that restricts user access to the site without permanently deleting it. Think of it as putting the site behind a locked door: the content still exists, but no one can open it.

Why Do Organizations Use It?

- Temporary lockdown: When a site is under review, being decommissioned, or needs to be secured quickly.
- Compliance and security: Helps prevent accidental data exposure during audits or ownership changes.
- Preserve data: Unlike deleting a site, NoAccess keeps the content intact for future reference or migration.

How Does It Affect Search and Copilot?

- Search indexing: By default, NoAccess mode does not remove the site from the search index. This means files may still appear in search results unless additional controls (like Restricted Content Discovery or NoCrawl) are applied.
- Copilot behavior: Copilot uses the same index as Microsoft Search. If a site remains indexed, Copilot can surface summaries or references to its content even if users can't open the files. This is why governance settings like Restricted Access Control or disabling indexing are critical when using Copilot.

Why does this happen?

- NoAccess blocks site access, not indexing. The site remains in the search index unless indexing is explicitly disabled or Restricted Content Discovery (RCD) is enabled.
- Security trimming still applies. Users will only see items they have direct permissions to (e.g., via shared links). They cannot open anything they don't have access to.
- Copilot respects permissions. It uses the same security model as Microsoft Search and Graph, so it never bypasses access controls.
- Low priority. Marking a site as NoAccess is a bulk operation that goes into a low-priority queue, specifically to avoid system bottlenecks and ensure real-time content changes are prioritized over less critical updates. As a result, it can take much longer than expected for those sites to stop appearing in search results.

What are the options to fully hide content?

- Turn off "Allow this site to appear in search results": This setting removes the site from indexing. Note: change the search setting BEFORE setting NoAccess on a site.
- Enable Restricted Content Discovery (RCD): This hides the site from search and Copilot while keeping it accessible to those with permissions. There is a PowerShell cmdlet available:

Set-SPOSite -Identity <site-url> -RestrictContentOrgWideSearch $true

Please note that for larger sites, both the RCD and no-crawl processes may require a minimum of a week to reflect updates. According to the RCD documentation, sites with more than 500,000 pages could experience update times exceeding one week.

What are the options to get Site Crawl information?

When setting up the site for NoCrawl, you can run a REST query to see whether items from that site still return in search. Sign in to the tenant first, then use a simple REST call like:

https://contoso.sharepoint.com/_api/search/query?querytext='path:"<siteurl>"'&sourceid='8413cd39-2156-4e00-b54d-11efd9abdb89'&trimduplicates=false

An XML response is returned; look for <d:TotalRows m:type="Edm.Int32">1</d:TotalRows>. You will see the count go down over time, and once it reaches 0, all items have been removed from the index.

You can also use PnP PowerShell to check the site settings; see Enable/Disable Search Crawling on Sites and Libraries | PnP Samples for an example (remember that PnP is open source and is not supported by Microsoft):

Get-PnPSite | Select NoCrawl
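Putting the order-of-operations guidance above into one place, here is a minimal sketch using the SharePoint Online Management Shell. The site URL is a placeholder, and it assumes you are already connected with Connect-SPOService; the NoAccess lock is applied with Set-SPOSite -LockState, which is one common way to set it.

# Hide the site from org-wide search and Copilot first, then lock it.
$siteUrl = "https://contoso.sharepoint.com/sites/LegacyProject"

# Restricted Content Discovery: remove the site's content from tenant-wide search and Copilot results.
Set-SPOSite -Identity $siteUrl -RestrictContentOrgWideSearch $true

# Apply the lock only after the search-related change has been made.
Set-SPOSite -Identity $siteUrl -LockState NoAccess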
Key Takeaways

- Setting a SharePoint site to NoAccess does not automatically remove it from search or Copilot.
- Copilot and Search always enforce permissions; users never see or access unauthorized content.
- For complete removal, disable site indexing or enable RCD.
- Monitor index status to confirm content is truly hidden.

Understanding and managing these settings ensures secure, seamless experiences with Copilot and Microsoft Search.

Helpful Resources

- Lock and unlock sites - SharePoint in Microsoft 365 | Microsoft Learn
- Enable/Disable Search Crawling on Sites and Libraries | PnP Samples
- Restrict discovery of SharePoint sites and content - SharePoint in Microsoft 365 | Microsoft Learn

Contributors: Tania Menice

Avoiding Access Errors with SharePoint App-Only Access
To avoid persistent access errors such as "403 Forbidden" when calling the SharePoint Online REST API with app-only permissions, authenticate with a self-signed X.509 certificate rather than a client secret. SharePoint requires certificate-based authentication for app-only access to ensure stronger security and validation. The solution is to generate a certificate, add it to the Azure AD app registration, and use it to obtain access tokens, as demonstrated in PnP PowerShell examples and Microsoft documentation.
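As an illustration, here is a minimal sketch of that flow using PnP PowerShell on Windows. The app ID, tenant, certificate subject, and site URL are placeholders, and it assumes an existing Azure AD app registration with the appropriate SharePoint application permissions.

# 1. Create a self-signed certificate and export the public key for upload to the app registration.
$cert = New-SelfSignedCertificate -Subject "CN=SPO-AppOnly" -CertStoreLocation "Cert:\CurrentUser\My" -NotAfter (Get-Date).AddYears(1)
Export-Certificate -Cert $cert -FilePath .\SPO-AppOnly.cer

# 2. After uploading SPO-AppOnly.cer to the app registration, connect app-only with the certificate.
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/TeamSite" `
    -ClientId "00000000-0000-0000-0000-000000000000" `
    -Tenant "contoso.onmicrosoft.com" `
    -Thumbprint $cert.Thumbprint

# 3. App-only calls now succeed where a client secret would have returned 403 Forbidden.
Get-PnPWeb | Select-Object Title, Url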