Dec 27 2023 11:41 PM
I am finding regular discrepancies between the counts for retention labels that appear in Microsoft Purview's Content Explorer and the results of a Content Search for the same labels, and I wonder if others are seeing this too. I'm seeing it in my Dev tenant, and several organisations I'm working with have reported the same.
For example, Content Explorer here tells me there are 138 items, split between Exchange and SharePoint. 'Drilling down' into Exchange, I see that one mailbox allegedly has 9 labelled items and another has only 1. Where does it get 94 from?
Compare that with a Content Search, which returns 78 results, all in SharePoint, the only place the label was actually applied (confirmed from the exported results).
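For anyone who wants to reproduce the search side of the comparison, this is roughly how such a search can be scripted from Security & Compliance PowerShell. It's only a sketch: the search name and "Project X Label" are placeholders for your own values, and ComplianceTag is the searchable property for retention labels.

# Connect to Security & Compliance PowerShell (ExchangeOnlineManagement module)
Connect-IPPSSession

# Search all Exchange and SharePoint locations for items carrying the label.
# "RetentionLabelCount" and the label name are placeholders.
New-ComplianceSearch -Name "RetentionLabelCount" `
    -ExchangeLocation All `
    -SharePointLocation All `
    -ContentMatchQuery 'ComplianceTag:"Project X Label"'

Start-ComplianceSearch -Identity "RetentionLabelCount"

# Once the search completes, Items is the hit count to compare
# against what Content Explorer reports.
Get-ComplianceSearch -Identity "RetentionLabelCount" | Format-List Name, Status, Items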
Why the difference? Or is the AI/ML-driven Content Explorer making some assumptions that are completely wrong?
Jan 03 2024 08:42 AM
Hello Andrew,
I would also love to know the answer to that, not just for retention labels but for anything that Content Explorer counts.
I see 3 different sets of numbers and no idea which to trust:
-> the high-level number on the left
-> drilling down on the right, the number gets reduced
-> the new PowerShell command for Content Explorer shows yet another set of numbers (see the sketch below).
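For anyone who hasn't tried it, the post above doesn't name the cmdlet, but I believe it is Export-ContentExplorerData in Security & Compliance PowerShell. A sketch of the kind of call I mean, with the label name as a placeholder:

# Connect to Security & Compliance PowerShell first
Connect-IPPSSession

# TagType can be Retention, Sensitivity, SensitiveInformationType or
# TrainableClassifier; "Project X Label" is a placeholder for your own label.
$page = Export-ContentExplorerData -TagType Retention -TagName "Project X Label" -PageSize 100

# The first record in the output reports the total match count and paging
# details (at least in my tenant); that total is the third set of numbers.
$page[0]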
Microsoft does put some disclaimers on the counts when you drill down into the locations, but I cannot find these explained anywhere in the documentation...
Jan 03 2024 12:02 PM
Thanks Teo
For the other two classifier types, SITs and trainable classifiers, I can sort of understand that the numbers may not be accurate, but for both retention and sensitivity labels I don't really understand why it doesn't show the actual number of items each label is assigned to.
Jan 04 2024 01:25 AM
Hi Andrew,
Unfortunately, I don't have an answer for you, but I am experiencing the same thing, and coincidentally have an open support ticket with Microsoft to see if they can provide any insight.
If I get anything useful back I'll share it here.
Jan 09 2024 05:44 PM