OneDrive Client, Files On-Demand and Syncing large libraries
I thought I'd post some observations about the OneDrive sync client that aren't documented anywhere, but that we needed to figure out when planning a massive move from on-premises file servers to SharePoint.

Limits: Microsoft documents that you shouldn't sync more than 300,000 files across all libraries the client is connected to, but there is no documentation on Files On-Demand limits. We have observed the following: the OneDrive client will fail when the .dat file that stores object metadata (%localappdata%\Microsoft\OneDrive\settings\Business1) reaches exactly 2 GB in size. While Microsoft says you shouldn't sync more than 300,000 files, you can connect with Files On-Demand to libraries that contain more than this. The trick is that in this case the total number of files and folders matters; let's call them collectively "objects". (Interestingly, when you first connect to a library and the client says "Processing changes" and gives you a count, "changes" is the total number of objects in the library that it's bringing down as Files On-Demand placeholders and storing in the .dat file.) My suspicion is that since the OneDrive client is still 32-bit, it's subject to certain 32-bit process restrictions, but I don't know for sure. What matters here is that up until build 19.033.0218.0009 (19.033.0218.0006 for Insiders), the client would fill the .dat file and hit the 2 GB limit at roughly 700,000-800,000 objects. From build 19.033.0218.0009 on, the client appears to have been optimized to store less metadata per object, "raising" the effective Files On-Demand ceiling: in general, each object now seems to take up just over 1 KB in the .dat file, putting the limit somewhere just under 2 million objects. Keep in mind this is not per library; it's across all connected libraries, including OneDrive for Business (personal storage), SharePoint document libraries, etc.
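For planning purposes, those figures are enough for some back-of-envelope math. Here's a minimal sketch in Python; the per-object size and the ~50 objects/second processing rate (discussed under Performance below) are our observations on recent builds, not documented limits, so measure your own environment before relying on them:

```python
# Back-of-envelope planner for Files On-Demand rollouts, based on the
# observations above: ~1 KB of metadata per object in the client's .dat
# file (which fails at a 2 GB ceiling) and ~50 objects/second processing.
BYTES_PER_OBJECT = 1100          # observed: "just over 1KB" per object
DAT_FILE_CEILING = 2 * 1024**3   # client fails when the .dat file hits 2 GB
OBJECTS_PER_SECOND = 50          # observed initial-processing rate

def plan(total_objects):
    dat_bytes = total_objects * BYTES_PER_OBJECT
    hours = total_objects / OBJECTS_PER_SECOND / 3600
    print(f"{total_objects:>10,} objects -> "
          f"{dat_bytes / 1024**3:4.2f} GB .dat "
          f"({dat_bytes / DAT_FILE_CEILING:6.1%} of ceiling), "
          f"~{hours:.1f} h initial processing")

for n in (300_000, 800_000, 1_400_000, 1_900_000):
    plan(n)
```

Run as-is, this reproduces the numbers above: 1.4 million objects works out to roughly 8 hours of initial processing, and the 2 GB ceiling lands just under 2 million objects.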
Performance: The client has improved significantly with each new build, but there are some things to be aware of before you start connecting clients to large libraries.

It. takes. forever. The more objects in a library, the longer it takes for the client to build its local cache of Files On-Demand placeholders for every item in the library. In general the client can process about 50 objects per second, so connecting to a library (or multiple libraries) totaling 1.4 million objects will take around 8 hours before the client is "caught up". While the content is being built out locally, Windows processes also consume a large amount of system resources. Specifically, explorer.exe and the Search Indexer consume a lot of CPU and disk as they process the data the client is building out. The more resources you have, the better this experience will be. On a moderately powered brand-new Latitude with an i5, 8 GB of memory and an SSD OS drive, the machine's CPU was heavily taxed (over 80%) for more than 8 hours while connecting to libraries with around 1.5 million objects. On a much more powerful PC with an i7 and 16 GB of memory, the strain was closer to 30% CPU, which wouldn't cripple an end user while they wait for the client and Windows to finish processing data. But most organizations don't deploy $2,000 computers to everyone, so be mindful when planning your team-site automount policies.

Restarts can be painful. When the OS boots back up, OneDrive has to figure out what changed in the libraries in the cloud and compare that to its local cache. I've seen this process take anywhere from 15 minutes to over an hour after a restart, depending on how many objects are in the cache. Also, with a large number of objects in the local cache, you can expect OneDrive to routinely use about a third of an i5's CPU just keeping itself up to date. This doesn't appear to interfere with the overall performance of the client, but it's an expensive process. Hopefully this will continue to improve over time, especially as more organizations like mine move massive amounts of data into SharePoint and retire on-premises file servers.

If I had to make a design suggestion or two:
- If SharePoint could pre-build a generic metadata file that a client could download on first connection, it would significantly reduce the time it takes to set up a client initially.
- Roll the Activity Log into an API that would allow the client to poll for changes since the last restart. This could also significantly improve the performance of migration products, since they wouldn't have to scan every object in a library when performing delta syncs, and it would reduce the load on Microsoft's API endpoints when organizations perform mass migrations. (A rough sketch of what such polling could look like follows at the end of this post.)
- To the best of my knowledge, Windows doesn't have a mechanism to track changes on disk, i.e. "what changed recursively in this directory tree in the last x timeframe". If that were possible, Windows and SharePoint could eliminate most of the overhead the OneDrive client has to shoulder on its own to keep itself up to date.

Speaking with OneDrive engineers at Ignite last year, support for larger libraries is high on their radar, and it's apparent in this latest production release that they are keeping their word on prioritizing iterative improvements for large libraries. If you haven't yet started mass data migrations into SharePoint, I can't stress enough the importance of deeply analyzing your data, understanding what people need access to, and structuring your libraries and permissions accordingly. We used Power BI to analyze our file server content, and it was an invaluable tool in our planning. Happy to chat with anyone struggling with similar issues and share what we did to resolve them. Happy SharePointing!

P.S. Shoutout to the OneDrive product team: you guys are doing great, and I love what you've done with the OneDrive client. But for IT pros struggling with competing product limits and business requirements, documenting behind-the-scenes technical data and sharing more of the roadmap would be incredibly valuable in helping our companies adopt, or plan to adopt, OneDrive and SharePoint.
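For what it's worth on the second suggestion above: Microsoft Graph does expose a delta query on drives that works along these lines. A minimal polling sketch in Python (the drive ID and access-token acquisition are assumed to be handled elsewhere):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def poll_changes(drive_id, token, delta_link=None):
    """Return (changed_items, new_delta_link).

    Pass delta_link=None on the first call to enumerate the whole drive
    and establish a baseline; store the returned deltaLink and pass it
    back on later calls to receive only what changed since then.
    """
    url = delta_link or f"{GRAPH}/drives/{drive_id}/root/delta"
    headers = {"Authorization": f"Bearer {token}"}
    items = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        items.extend(page.get("value", []))
        if "@odata.deltaLink" in page:
            delta_link = page["@odata.deltaLink"]
        # Follow server-side paging until Graph hands back the deltaLink.
        url = page.get("@odata.nextLink")
    return items, delta_link
```

A migration tool doing delta passes could persist the deltaLink between runs instead of rescanning every object in the library.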
Weird serious problem with shared folders in personal OneDrive after Windows 10 update

Hi there,

Last week my PC got Windows 10 updates from Microsoft, and after the reboot my local OneDrive client started doing weird things: deleting folders and renaming them with -COMPUTERNAME appended. My setup is as follows: I'm logged on to my own OneDrive, and in it I have shared folders with data that other family members in a family subscription shared with me. There is a lot of data in them; some have 660 GB of pics and videos. I added those shared folders to my OneDrive so I can access them in my own Explorer without logging on to another account. This setup has been working fine for years, up until this update...

I noticed after the reboot that my OneDrive Personal client was deleting a lot of folders from the local OneDrive folder on my hard drive, but only the folders holding the shared offline data; they were in the Recycle Bin. I tried to reconnect OneDrive, but nothing changed: the online view of my OneDrive was still intact, including the folders shared from other OneDrives, but locally they were gone. So at that point the file structure I saw online differed from the one in my own Explorer. I then tried to restore them from the Recycle Bin, only to find that OneDrive would rename them to ORIGINALNAME-WIN10, where WIN10 is my computer name. It then started uploading all this data AGAIN to my online storage: the shared ("Gedeeld") folders were copied into my own drive with a "private" label behind them. That is absolutely not what should happen; my own drive would have been full within a day if I hadn't broken off this operation. A screenshot taken while it was still "syncing" showed exactly this happening.

I tried reinstalling and reconnecting the OneDrive client, and also removing all the extra copies of the folders online AND offline, hoping it would resync the whole thing from the original online shared folders. The shared folders just don't turn up in my local view anymore. Instead, the OneDrive client started renaming even more shared folders that had seemed untouched before, and uploading their contents to my own OneDrive...

So to make it absolutely clear what happened: the shared folders that existed only in the cloud no longer turn up locally, and the shared folders that existed on both sides were renamed locally and copied back to the cloud as new private copy folders (as the screenshot showed). Did Microsoft change something in the latest updates that forces shared data to actually take up space in your own OneDrive, where this was not the case before? Or is this just something that went corrupt on my own PC, and can I fix it somehow? I can't imagine them changing something with such an impact without any warning, since it causes a lot of trouble for people using shared folders, and also a LOT of network traffic if everyone with shared folders runs into these issues...

I hope some real expert on OneDrive can tell me what is going on, and especially how to fix it. To be honest, this seems like a PRETTY SERIOUS issue if others have it too... ;)

Marcel
File Explorer slow in OneDrive folders

Hi all,

I'm using the latest version of OneDrive together with an up-to-date Windows 11 and FSLogix (latest version) with Office profile disks. When I use File Explorer to open OneDrive folders from the middle pane, it takes 10-20 seconds until the next window opens. When navigating in the same File Explorer window via the tree view on the left, everything is fast. When opening a file from within Word, Excel, etc. via the file dialog, everything is fast. So the issue only occurs when clicking in the main pane of a File Explorer window. I've already done all the updates, sfc, DISM, reinstalled OneDrive, ... The issue affects all users.
Azure SQL Database or SQL Managed Instance database used data space is much larger than expected

In this article we consider the scenario where the used size of an Azure SQL Database or SQL Managed Instance database is much larger than expected compared with the actual number of records in the tables, and how to resolve it.
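The article itself isn't reproduced here, but a common first diagnostic step is comparing per-table row counts against used and reserved space. A minimal sketch using pyodbc (the connection string is a placeholder; sys.dm_db_partition_stats reports space in 8 KB pages):

```python
import pyodbc  # assumes the "ODBC Driver 18 for SQL Server" is installed

QUERY = """
SELECT o.name                                  AS table_name,
       SUM(ps.row_count)                       AS row_count,
       SUM(ps.used_page_count)     * 8 / 1024  AS used_mb,
       SUM(ps.reserved_page_count) * 8 / 1024  AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.objects AS o ON o.object_id = ps.object_id
WHERE o.is_ms_shipped = 0
GROUP BY o.name
ORDER BY reserved_mb DESC;
"""

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<yourserver>.database.windows.net,1433;"
    "Database=<yourdb>;Authentication=ActiveDirectoryInteractive;"
)
for name, rows, used_mb, reserved_mb in conn.execute(QUERY):
    # Tables whose reserved space dwarfs what the row count suggests are
    # the usual suspects (e.g. heaps after large deletes, unreleased
    # space, oversized indexes).
    print(f"{name:30} {rows:>12,} rows  {used_mb:>8} MB used  {reserved_mb:>8} MB reserved")
```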
Include files in OneDrive sync without copying them

Hi,

This may be something already discussed, and it seems to come up in the UserVoice forums. I'd like to know if there is a way to include existing folders or files, similar to folder redirection, for files and folders scattered around the computer, so they can be backed up without copying them into the OneDrive sync folder. This is pretty fundamental: when they are copied, they are duplicated, and document versioning issues come to the fore again. This must be the most basic of features, yet it doesn't appear to be offered by the OneDrive client. I, and most of my customers, need this functionality, and I don't see any way to enable it or to apply a workaround. If this has already been discussed, or if I can indeed achieve this, please let me know. Appreciate the help!
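For anyone experimenting while waiting on a proper feature: the workaround that usually gets suggested for this is an NTFS directory junction, which exposes a folder from elsewhere on the disk inside the sync root without copying it. OneDrive's handling of junctions isn't officially supported and has varied across client versions, so treat the following as a sketch to test with throwaway data, not a confirmed answer (paths are hypothetical):

```python
import subprocess
from pathlib import Path

onedrive_root = Path.home() / "OneDrive"     # adjust to your actual sync root
source = Path(r"C:\Projects\ScatteredDocs")  # hypothetical folder to include
link = onedrive_root / source.name

# mklink is a cmd.exe builtin; /J creates a directory junction, which
# (unlike a symbolic link) does not require administrator rights.
subprocess.run(["cmd", "/c", "mklink", "/J", str(link), str(source)], check=True)
```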
My compliments to the OneDrive for Business Next Generation Sync Client (ODFB NGSC) development team

I'm not an early adopter, and I held off implementing the NGSC for ODFB for a long time. I finally decided to try v2016 (build 17.3.6517.0809) two weeks ago, and the deployment and daily operation have been smooth as silk. This isn't the preview version that syncs team sites; again, I'm not an early adopter.

There's an excellent dynamic-response diagnostic that pops up messages when the sync client chokes on files that have unacceptable characters in their names, or on over-long folder-nest/file names. Since my tenant name is long, I was particularly concerned about the latter. The diagnostic messages pop up and tell you exactly where the problem is, and you can click a link that takes you to the offending file. It even tells you exactly how many characters to remove from the folder-nest/file name to resolve a length error. While these pop-ups wait for you to correct the error, the sync continues with other folders. After you correct an error, the diagnostic detects the correction and automatically dismisses itself or shortens the pop-up list by removing the corrected file (that's why I refer to it as "dynamic response"). If it finds another error, it pops up another notification or expands the current list.

Selective folder sync is fantastic, and solves many of the problems that previously existed with large cloud libraries syncing to thin clients in the field. Even if your mobile device isn't thin, why carry around folders you don't need and subject them to exposure if you lose your device? NGSC eliminated my concern that "Open with Explorer" is going away in the modern UI, because syncing is so fast and smooth: anything I want to do with Windows Explorer, I can do with my local ODFB folder. I'm speculating, but it seems like the client and its server counterpart use some type of extremely efficient compression algorithm, along with a quick wake-up call when it's time to take some sync action. My initial sync of 32,000 files took about 75% less time than I expected given my available bandwidth, and day-to-day sync operations are much snappier than Groove. Using selective sync, as long as I remember to put a checkmark next to the folders I'll need tomorrow and uncheck the ones I won't, my thin mobile clients are ready to go much faster than I anticipated. In my opinion, NGSC is ready for prime-time use with ODFB, and the MSFT developers deserve kudos.
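One thing the diagnostic can't do is warn you before content reaches the client. A pre-flight scan of the source data catches the same problems ahead of the first sync; here's a minimal sketch (the 400-character path ceiling and the rejected-character set are my reading of the current SharePoint Online documentation, and the URL prefix is hypothetical, so verify all three against your own tenant and client version):

```python
import os

INVALID_CHARS = set('"*:<>?/\\|')  # characters OneDrive/SharePoint reject in names
MAX_PATH_LEN = 400                 # documented full-path limit; older clients enforced less
# Length the tenant/site/library prefix adds to every path (hypothetical URL):
PREFIX_LEN = len("https://contoso.sharepoint.com/sites/Team/Shared Documents/")

def preflight(root):
    """Print every file whose name or projected cloud path would fail."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            bad = INVALID_CHARS & set(name)
            if bad:
                print(f"invalid chars {sorted(bad)}: {full}")
            projected = PREFIX_LEN + len(os.path.relpath(full, root))
            if projected > MAX_PATH_LEN:
                # Mirrors the client's own hint: how many characters to trim.
                print(f"{projected - MAX_PATH_LEN} chars too long: {full}")

preflight(r"C:\DataToMigrate")  # hypothetical migration source
```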
Worry about Nano Server future...

I'm really worried about the future of Nano Server. I read https://blogs.technet.microsoft.com/hybridcloud/2017/06/15/delivering-continuous-innovation-with-windows-server/ on TechNet, and I have many doubts about it. I think "downgrading" Nano Server from a standalone server is a really bad idea. There are many scenarios where it is perfect: DNS server, IIS server, storage server, Hyper-V host, container host, Azure server, Host Guardian Service, and many, many others. Limiting Nano Server to a container role is really sad. And headless is great! Less attack surface, more security! Small footprint and fewer services running! Nano Server is the future of the cloud server! Please reconsider that decision: whether we want to use Server Core, the full desktop experience or Nano Server, let us decide, please. I love Nano Server, and I spent many hours testing it and writing manuals like https://gallery.technet.microsoft.com/Manual-implementacin-paso-af8b12ba. I hope the Microsoft Server team considers this post.