User Profile
LainRobertson
Silver Contributor
Joined Apr 05, 2019
Recent Discussions
OAB download fails after hybrid mailbox move.
Hi folks,

I'm posting this query here as I doubt anyone in the Outlook forums would have the necessary Exchange hybrid knowledge.

I run a classic hybrid Exchange environment where Exchange Server 2019 CU15 is the on-premise platform. Authentication is provided by on-premise AD FS, with the accounts being synchronised from on-premise via AAD Connect.

I've just moved my on-premise mailbox to Exchange Online via New-MoveRequest and, for the most part, everything is fine. One thing that possibly isn't fine - going off the Bits-Client event log - is the regular offline address book downloads, where I'm seeing regular failures in the event log and through double-checking with bitsadmin.exe. The initial address book synchronisation worked, as the view in Outlook is fully populated; however, I expect that future changes likely won't come through.

bitsadmin output

Event log output (there are numerous events to choose from - this is the one I'm most curious about):

    The BITS service provided job credentials in response to the UNIDENTIFIED authentication challenge from the outlook.office365.com server for the Microsoft Outlook Offline Address Book <guid> transfer job that is associated with the following URL: /OAB/<guid>/oab.xml. The credentials for the <sid> user were rejected.

When the mailbox was on-premise, the OAB came from the Exchange Server - no surprise there - whereas post-migration, the bitsadmin output shows it now comes from outlook.office365.com. Perhaps that's also to be expected - I don't know, but it makes sense given the move.

What alerted me to there potentially being an issue is that the systray icon frequently gets stuck on the "synchronising" icon, and running a manual full OAB sync from within Outlook fails to complete. After an extended "hang" period, the sync window eventually times out with the error shown above (the protracted UI behaviour would appear to be due to the large number of retries).
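Since the post mentions double-checking with bitsadmin.exe, the same job state can also be inspected from PowerShell using the built-in BitsTransfer module. This is only a sketch: the display-name filter is an assumption about how the OAB transfer jobs are named, so adjust it to match what bitsadmin shows in your environment.

```powershell
# Requires the built-in BitsTransfer module on Windows;
# -AllUsers needs an elevated session.
Import-Module BitsTransfer;

# List BITS jobs whose display name suggests an OAB download (the filter is
# an assumption - broaden or drop it if nothing matches), showing state and
# any recorded error.
Get-BitsTransfer -AllUsers |
    Where-Object { $_.DisplayName -like "*Offline Address Book*" } |
    Format-List -Property DisplayName, JobState, TransferType, ErrorDescription;
```

A failing job should show a JobState of Error or TransientError alongside the error description, which can then be compared against the Bits-Client event log entries.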
Dropping the BITS job URL into Edge simply returns an HTTP 503, which doesn't necessarily strike me as a problem - after all, I'm unable to provide a bearer token using this method. I haven't yet tried via PowerShell as it only occurred to me now, but perhaps I'll do so after posting this.

Searching on this error and scenario has turned up nothing useful. I have also checked and compared event log entries from an Azure AD-native account, where it's a mixed bag of successful OAB BITS downloads and unsuccessful ones that feature the same symptoms as above, which offers up the possibility this might be a transient service-side error (though I'm not leaning heavily towards this).

Has anyone else encountered this issue and resolved it? Is it even an issue to begin with, or is this expected behaviour? I'm unsure what to make of the symptoms.

Cheers, Lain

Re: Restore Azure Cosmos DB
Hi aurora95,

It depends on which continuous backup tier was chosen when the account was purchased/created. I suspect if you cannot see it, then it was purchased under the 7-day offer, in which case no, you cannot recover it.

Here's the relevant documentation which you can use to double-check, since we can't answer this for you:

Continuous Backup with Point-in-Time Restore - Azure Cosmos DB | Microsoft Learn
Restore an Account That Uses Continuous Backup Mode - Azure Cosmos DB | Microsoft Learn

Cheers, Lain

Re: stop expanding parentheses in a variable
Hi Jhult1170,

I do not have a command called "box", so I can't test your exact use case; however, it's worth noting that your manual example line of:

    $BoxLOWGRPRaw="$( Box groups:create --json "Test-Group(Name)" )"

is not what your first three lines would be producing. Rather, based on your initial three lines, the proper manual test case would look like:

    $BoxLOWGRPRaw = "$( Box groups:create --json Test-Group(Name)_Owner )"

This is because you have used the $BoxClientOwnerName variable in your third line rather than the $ClientName variable.

Cheers, Lain

Re: extract a string from this @{Name=WEBHOST001-OI3w}
Hi rmerritt,

Looking at your second post, it's just a string variable. It's purely coincidental that the value of the string happens to look similar to a valid PowerShell [hashtable] declaration. The reason it's not a valid PowerShell [hashtable] declaration is the assigned value of "WEBHOST001-OI3w" is not surrounded by quotes (single or double will suffice for this example).

If it were surrounded by quotes, you could call Invoke-Expression to treat the string as a parsable command, as shown below in the first two examples (the first example shows the conversion to a [hashtable], while the second goes one step further to list the value of the "Name" key):

The third example from above (noting your value underlined in red, which has no quotes) is how you can extract the string using string manipulation. But this is specific to the format of the value held in your $vcentername variable (which per your second post shows it to be a [string]) and won't work for different formats.

If you apply the third example to your variable, you get this as the code line you want to use to extract just the host name:

    $vcentername.Split("=")[1].Replace("}", "");

Cheers, Lain

Re: Unable to manage DFS namespace from DFS MMC
Hi DBY2025,

Your screenshot is odd since the value of "\\4rail.mid\4rail" is showing up in the description bar. It ought to be the same value as the namespace. As to why you're only encountering this now after years of it being fine, my guess would be that you recently decommissioned something relating to 4rail.mid.

What do you get if you run the following two commands from PowerShell?

    # Get the DFS root.
    Get-DfsnRoot -Path "\\man.mid\gem" | Format-List -Property NamespacePath, Path;

    # Get each DFS target.
    Get-DfsnRoot -Path "\\man.mid\gem" | Get-DfsnRootTarget | Format-List -Property NamespacePath, Path, TargetPath;

Example output

Repeating what I said at the start, the value for NamespacePath and Path ought to be the same, but I anticipate they're not. There's no supported way to change these values either, meaning if they don't match, you would have to recreate the namespace.

Cheers, Lain

Re: Add members to a dynamic sec-grp excluding users with a specific "serviceplanid" assigned license
Hi Az_Iz,

The problem with your rule is it's going to almost always include all people, since their account will have more than a single enabled service. When it comes to matching a single entry in a list of entries for the purpose of exclusion, you need to perform a negative match rather than a positive match. You can do this two ways:

1. Use the "-any" operator in conjunction with the not() function;
2. Use the "-all" operator rather than the "-any" operator.

Using -any plus not()

    New-MgBetaGroup -DisplayName "Forum Test" -MailEnabled:$false -SecurityEnabled:$true -MailNickname "ForumTest" -GroupTypes "DynamicMembership" -MembershipRule "(user.accountEnabled -eq true) -and (user.mail -ne null) -and not((user.assignedPlans -any ((assignedPlan.servicePlanId -eq `"43de0ff5-c92c-492b-9116-175376d08c38`") -and (assignedPlan.capabilityStatus -eq `"Enabled`"))))" -MembershipRuleProcessingState "On";

Example output

Using -all instead of -any

    New-MgBetaGroup -DisplayName "Forum Test 2" -MailEnabled:$false -SecurityEnabled:$true -MailNickname "ForumTest2" -GroupTypes "DynamicMembership" -MembershipRule "(user.accountEnabled -eq true) -and (user.mail -ne null) -and (user.assignedPlans -all ((assignedPlan.servicePlanId -ne `"43de0ff5-c92c-492b-9116-175376d08c38`") -or (assignedPlan.capabilityStatus -ne `"Enabled`")))" -MembershipRuleProcessingState "On";

Example output

Cheers, Lain

Re: Hyper-v Replica Traffic segregation
Hi StefanoC66,

I'd expect you're running into Kerberos authentication issues, since there are no service principal name registrations (contained in the servicePrincipalName attribute) matching your CNAME on the Active Directory computer objects belonging to the hypervisor hosts.

What I wouldn't expect it to relate to is the self-signed certificate, though if you're getting an untrusted root error, you can easily get around that by putting the partner's certificate (without the private key - so just the CER file type) in the other partner's trusted root authority store.

Taking one of my Hyper-V hosts as an example, you can see that by default the host registers a number of services under the NetBIOS and DNS names of the actual host:

However, if I knock up a CNAME - doesn't matter where (such as in a DNS zone or in the hosts file as you've done) - then I'd have to go and add an entry for each relevant service to the computer object's servicePrincipalName attribute. I haven't bothered using CNAME records for Hyper-V hosts before so I can't authoritatively say which ones are required for your exact scenario, but if I had to guess, I'd work through the services in this order:

1. Hyper-V Replica Service;
2. Microsoft Virtual System Migration Service;
3. WSMAN;
4. HOST;
5. Microsoft Virtual Console Service (I can't see this being needed but I've listed it for completeness).

Do not add entries for the RestrictedKrbHost or TERMSRV services.

As an example, if I wanted to register a CNAME of hv1.repl.company.com for the Hyper-V Replica Service then my list would grow from:

To:

Naturally, you'd need to do this for each required service. Once that's done, you'd likely need to restart the Hyper-V host for the changes to take effect.

In any case, this is what's required to get Kerberos working with the CNAME. If Kerberos isn't what's holding you back then you may still run into this as an issue later, but for now, this would be my first item to check based on your description.
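As a sketch of what that registration might look like from an elevated prompt - the computer account name (HV1) and the CNAME (hv1.repl.company.com) are illustrative, and the exact service classes should be confirmed against your own environment:

```powershell
# Illustrative only: register the Hyper-V Replica Service SPNs for the CNAME
# against the hypervisor's computer account. -S checks for duplicates across
# the forest before adding.
setspn -S "Hyper-V Replica Service/hv1.repl.company.com" HV1
setspn -S "Hyper-V Replica Service/hv1" HV1

# Review the resulting registrations on the computer account.
setspn -L HV1
```

Repeat the -S additions for each service class you end up needing from the ordered list above, then restart the host.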
Cheers, Lain

Re: New-MgBookingBusinessService | Turn Customer Information Questions Off
Hi AP_TC_ECASD,

I don't use these APIs, but looking at the documentation, Graph shows that bookingBusiness and bookingCustomQuestion are two separate endpoints. The bookingBusiness endpoint documentation even lists customQuestions as a relationship (read-only, at that) rather than a property.

What this collectively points to is that you cannot create the bookingBusiness entity and the bookingCustomQuestion with a single call to New-MgBookingBusinessService. Rather, you will need to:

1. Call New-MgBookingBusiness first;
2. Call New-MgBookingBusinessCustomQuestion afterwards, using the "id" obtained from the above commandlet. You'll need to call this once per question as it doesn't take multiple questions as input.

References:

New-MgBookingBusiness (Microsoft.Graph.Bookings) | Microsoft Learn
New-MgBookingBusinessCustomQuestion (Microsoft.Graph.Bookings) | Microsoft Learn
bookingBusiness resource type - Microsoft Graph v1.0 | Microsoft Learn
bookingCustomQuestion resource type - Microsoft Graph v1.0 | Microsoft Learn
Create bookingCustomQuestion - Microsoft Graph v1.0 | Microsoft Learn

Cheers, Lain

Re: Creating Claims Mapping Policy in Entra ID
Hi MustangProgrammer,

The format of your claim is incorrect, which based on version 2.30.0 of the commandlet does indeed show up in the error:

The specific issue is you haven't provided the key-value pair correctly, where it is supposed to be in the format of "ID":"attributeName". Here's the correct format:

    $params = @{
        definition = @(
            '{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"false","ClaimsSchema":[{"Source":"user","ID":"onpremisessamaccountname","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"}]}}'
        )
        displayName = "ClaimTest"
    }

Which is then accepted by Graph, as demonstrated below:

Cheers, Lain

Re: Meeting rooms - end of synchronization with AD
Hi Gepard,

Conceptually, there are three options:

1. End synchronisation from AD to Entra ID (formerly Azure AD);
2. De-scope or delete the AD-side object, then recover it from the Entra ID recycle bin;
3. Create a new Entra ID-native replacement mailbox and migrate all content to it.

1. End synchronisation from AD to Entra ID

Do not do this unless your organisation is ready to cease all synchronisation from AD to Entra ID. My assumption from how you've written the question is that you are not yet ready for this, and it's not a process you simply play with through a dry run. I'm only mentioning it for completeness, where you can read more about the process itself here:

Turn off directory synchronization for Microsoft 365 - Microsoft 365 Enterprise | Microsoft Learn

Technically, you can disable AD synchronisation, mess around with AD objects to achieve your objective and then re-enable AD synchronisation again; however, I wouldn't recommend this for most scenarios or where you're not familiar with how the underlying synchronisation engine works.

2. De-scope or delete the AD-side object, then recover it from the Entra ID recycle bin

This is the option I'd recommend for your specific scenario. De-scoping refers to the process of moving the AD object to an organisational unit that is not selected for synchronisation to Entra ID (which is configured in Entra ID Connect or Cloud Sync). Deleting the AD object needs no explanation as it's exactly what it sounds like. De-scoping is the more convenient option of the two since it makes rolling back the change a little easier, but ultimately they result in the same impact on the Entra ID account, which is that it is soft-deleted.
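For reference, once the account lands in the Entra ID recycle bin as soft-deleted, the recovery can be sketched with the Microsoft Graph PowerShell SDK. This is a sketch only - the object ID is illustrative and the commandlet names and required scopes should be verified against the current Microsoft.Graph.Identity.DirectoryManagement module documentation:

```powershell
# Assumes the Microsoft.Graph.Identity.DirectoryManagement module is installed.
Connect-MgGraph -Scopes "User.ReadWrite.All";

# List soft-deleted users currently sitting in the recycle bin.
Get-MgDirectoryDeletedItemAsUser |
    Format-Table -Property Id, DisplayName, UserPrincipalName;

# Restore a specific soft-deleted account (illustrative object ID).
Restore-MgDirectoryDeletedItem -DirectoryObjectId "00000000-0000-0000-0000-000000000000";
```

Wrapping those two calls in a loop is what makes the "fully scripted" bulk conversion mentioned below practical.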
Once Entra ID Connect (or Cloud Sync, if you're using that) soft-deletes the account(s), you can then restore them using the Microsoft-documented process, where the end result is that the restored account is converted to being a cloud-native account and no longer has a relationship to the de-scoped/deleted AD account:

Restore or permanently remove recently deleted user - Microsoft Entra | Microsoft Learn

The generic downside to this process is that it triggers licencing plan deprovisioning (when it's soft-deleted) and then provisioning (when it's restored). For important user/service/application accounts, this may require extra care in planning the execution and timing, but it's unlikely a room (or any other kind of resource mailbox) will share any of these sensitivities, making it a safe, convenient option to leverage.

It's worth noting this process can be fully scripted should you have many mailboxes to "convert". But initially - as the proof of concept - you'd validate the process manually using some low-risk resource mailboxes.

3. Create a new Entra ID-native replacement mailbox and migrate all content to it

I wouldn't recommend this option as it's actually the most complex and most likely to result in missed steps. Conceptually, it's easy enough:

1. Create a new Entra ID-native account and enable it in Exchange as a room resource mailbox;
2. Migrate all content and settings from the original room mailbox to this new mailbox;
3. De-scope or delete the original mailbox from AD.

The second point is the one where gaps will potentially creep in, and why I wouldn't normally recommend it over the de-scope/delete option above (option 2).

Cheers, Lain

Re: Why does this return a .csv with the length of the group names?
Hi kcelmer,

There are two things going on here:

1. An automatic type conversion;
2. Using Export-Csv properly.

The first point isn't something you'd be aware of unless you've been curious enough to dig into how PowerShell works, so you can't beat yourself up over this one. The short explanation is that PowerShell will attempt to make your life easier by attempting to convert different types in the background. You can see this in action in the following examples, where PowerShell attempts to cast the right-side value to the type of the left:

PowerShell works well with the HashTable data type, but AdditionalProperties is a dictionary type - which looks similar to the eye but can result in pipeline conversion issues under the hood, which is the reason you're seeing lengths rather than strings.

Secondly, with your example commands, you're attempting to send the plain string value to be parsed by Export-Csv, which is expecting to receive header information. It does this by looking for attribute names, which will be missing if you are simply passing in flat string values.

We can solve both issues by creating a new PSCustomObject that is in turn sent to Export-Csv, which is an object type the command is quite accustomed to dealing with:

    (Get-MgBetaUserMemberOf -UserId "lain.robertson [at] robertsonpayne.com").AdditionalProperties.ForEach({
        [PSCustomObject] @{
            DisplayName = $_["displayName"];
        }
    }) | Export-Csv -NoTypeInformation -Path "D:\Data\Temp\Forum\forum.csv";

This yields the expected CSV content (including a proper column header):

You'll get used to PowerShell's automatic type conversion peculiarities the more you tinker with it, but they are very subtle and confusing until you do.

Cheers, Lain

Re: API-driven provisioning field mapping changes resynchronize all users and groups
Hi Brian_TheMessiah,

The scope of impact is any joined user, which on the Active Directory side can be located anywhere - in or outside of the default creation organisational unit. The default organisational unit is where creations are effected, but if they're then moved elsewhere in the directory outside of that default organisational unit, the synchronisation process still tracks them based on whichever attribute(s) was nominated as the "match objects using this attribute = yes" definition, as shown below:

Tutorial - Customize Microsoft Entra attribute mappings in Application Provisioning - Microsoft Entra ID | Microsoft Learn

As an aside, this holds true for both users and groups. That's the scope question answered.

Moving onto LJohn's second question of what is "changed" in Active Directory, the answer is all attribute mappings where "apply this mapping = always". Conversely, any attribute mapping where "apply this mapping = only during creation" will not be updated. Generally speaking, nothing should change other than the attribute whose mapping you've updated.

Just to be clear (I'm probably being overly cautious in making this point), if you update an attribute mapping then that is applied to all joined accounts retrospectively (assuming the provisioning rule has the "update" target objects action setting checked). It isn't the case that the updated rule mapping is only applied to new account creations. This is where the "apply this mapping" setting acts as an important determinant.

Cheers, Lain

Re: Problem restoring deleted user with mggraph
Hi Adamneedshelpwithpowershell,

It's not your fault. The example within the Microsoft documentation is incorrect: where the example lists the "-BodyParameter" parameter, it should have instead used the "-Headers" parameter. Quality assurance for the loss.

To provide some tangible evidence, the following screenshot shows the list of parameters for the 2.30.0 version of Restore-MgDirectoryDeletedItem, where you can see there is no such parameter as "BodyParameter", but there is one named "Headers".

And here is an end-to-end example demonstrating the autoReconcileProxyConflict header from the documentation. The commands - in order - show:

1. A deleted user;
2. Restoring the user whilst using the autoReconcileProxyConflict header;
3. That the user can now be seen as a normal user once more;
4. Deleting the user, sending it to the recycle bin (a soft delete);
5. Deleting the user from the recycle bin (a hard delete).

Footnote: My examples use the beta endpoint commands, but there's no difference with the v1.0 commands I referenced in my original reply.

Cheers, Lain

Re: Problem restoring deleted user with mggraph
Hi Adamneedshelpwithpowershell,

Use the Restore-MgDirectoryDeletedItem commandlet (from the Microsoft.Graph.Identity.DirectoryManagement module) in conjunction with the autoReconcileProxyConflict header:

Restore-MgDirectoryDeletedItem (Microsoft.Graph.Identity.DirectoryManagement) | Microsoft Learn
Restore deleted directory object item - Microsoft Graph v1.0 | Microsoft Learn

Cheers, Lain

Re: Microsoft Entra Connect connecting always to old DC
Hi jpart_777,

You're right to not set a preferred domain controller. That should never be used unless it cannot be avoided (which is typically only when someone has botched their Active Directory site topology design and implementation - which is sadly rather common). You ought to be fine with following your hunch and not specifying a preferred domain controller list.

I've run a quick test on AAD Connect v2.4.131.0 and it cut over fine when I blocked access from the AAD Connect host to the domain controller it typically connects to. The test was basic but effective and entailed:

1. Configuring the Windows Firewall on the AAD Connect host to block LDAP (and GC) traffic to the usual domain controller;
2. Running a delta import (DI) on the Active Directory connector;
3. Observing the result of the DI run.

I actually expected that the DI may not work and an FI (full import) might have been required, but I was pleasantly surprised to see that the DI run succeeded.

Prior to the blocking of the "usual" domain controller
The firewall change to block access to rpdc01.robertsonpayne.com
After the firewall change, showing the automatic switch to another domain controller

Cheers, Lain

Re: Windows Authentication for Entra ID for SQL MI
Hi Zahid_Yaqub,

Q: We have to synchronize service accounts and users to Entra ID that are used by applications?
A: Yes, you do need to synchronise the on-premise account from Active Directory to Entra ID.

Q: Does the client (running application to SQL management studio) require access to Entra ID...?
A: I'm unclear on what you mean by "client". Do you mean the user launching SSMS? If so, then: if the user wishes to log onto SQL MI using SSO based on Windows Authentication (as shown below) - or Entra ID Integrated - then yes, their account needs to be synchronised to Entra ID. This remains true for all on-premise accounts looking to access SQL MI - service accounts, application accounts, etc.

If you are looking to migrate databases from on-premise SQL Server to Azure SQL MI, you will need to plan for recreating/altering the existing on-premise identities to their Entra ID synchronised representations. The reason for this is that it's not actually your Active Directory account logging onto SQL MI. Here's a loose description of what happens:

1. You are logged onto your domain- or hybrid-joined computer with your Active Directory account;
2. You launch SSMS, choose Windows Authentication and connect to the Azure SQL MI;
3. Under the hood, Windows requests a Kerberos ticket from Entra ID, where that ticket is actually aligned to your Entra ID account (which is why your account has to be synchronised to Entra ID);
4. That ticket is presented to Azure SQL MI.

Again, as I mentioned, the process is the same for any Active Directory account accessing SQL MI. This is why:

1. The account must be synchronised from Active Directory to Entra ID; and
2. The synchronised Entra ID account must have access to the SQL MI instance (as a login, user or most likely both - depending on whether or not the database is contained).

Cheers, Lain

Re: Entra ID User Properties - Dynamic Groups
Hi adm_lawrimore,

Anecdotally, nothing's likely to change any time soon, as the dynamic rule builder engine hasn't changed in many years in the context of leveraging more existing attributes or adding new ones for customer data. Additionally, nothing's been announced that specifically relates to the rule engine either - at least not where I'd expect it to be, which is here:

What's new in Microsoft Graph - Microsoft Graph | Microsoft Learn

You could look at using directory extensions, but that's not really improving your position if you're already getting by using the extensionAttribute1-15 set. It's worth noting that extensionAttribute1-15 are also natively part of Azure AD, meaning you don't lose them if you cut over to being Azure-native.

Cheers, Lain

Re: Copy text always as text from Onenote
Hi TF25,

PowerShell can't help you here. Applications have their own direct clipboard implementation which outside processes can't generally interfere with.

Microsoft's own advice for application developers is that if something is copied to the clipboard, it should be done using as many different formats as possible (and practical). In OneNote's case, it's using at least two methods of storage:

1. Rich text/HTML;
2. Image.

When you subsequently choose to paste into another application, that application doesn't speak to OneNote. All it sees is content on the clipboard, and if there are multiple presentations available, it's free to choose - using any prioritisation it pleases - which format(s) it's going to pull down. This is why you see many varied paste options in applications like Excel, Word, etc.

When it comes to pasting in a browser like Edge, there's an additional level of control, which is on the visual control itself. For example, a textbox control will only be interested in plain text content on the clipboard; an image control may only be interested in an image; while other controls will let you drop both, at which point the control decides which it prefers the most - which is why you may get an image by default but then text if you "control" the choice through using Ctrl+Shift+V.

Anyhow, that background information is outside the scope of PowerShell, which unfortunately is of no assistance in this application-to-clipboard relationship. The most you can hope for is that the applications offer some control over their own copying and pasting behaviours, where for OneNote, it's limited to what you can find listed in the keyboard shortcut reference:

Keyboard shortcuts in OneNote - Microsoft Support

Cheers, Lain
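That multi-format behaviour can be seen directly, for what it's worth. A small sketch using the .NET Windows Forms clipboard API (this needs to run on an STA thread, which is the default in a normal Windows PowerShell console):

```powershell
# Load the Windows Forms assembly so the Clipboard class is available.
Add-Type -AssemblyName System.Windows.Forms;

# List every format the current clipboard contents are offered in. After
# copying from OneNote, you'd typically expect to see entries such as
# "HTML Format", "Rich Text Format" and "Bitmap" side by side.
[System.Windows.Forms.Clipboard]::GetDataObject().GetFormats();
```

This only inspects what's on the clipboard; it doesn't change which format a given application will choose when you paste.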