Recent Discussions
API Query Results Different from Azure Portal
Hello Team, I'm running a query through an API I've connected to Excel. For a specific user, on a specific day, the results for a Conditional Access policy that blocks legacy authentication are higher than the results I'm getting from the Azure portal. Which results should I trust?
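One way to cross-check the portal count is to pull the raw sign-in events from the Microsoft Graph `auditLogs/signIns` endpoint with an explicit filter for the user and the UTC day, then count the rows yourself. The sketch below only builds the request URL; the UPN and date are placeholders, and the response paging/counting is left out.

```python
# Sketch: build a Microsoft Graph $filter for one user's sign-ins over one UTC
# day, so the raw event count can be compared against the portal view.
# The UPN and date below are placeholders; auth and paging are omitted.

def signins_filter(upn: str, day: str) -> str:
    """Return an OData filter for a single UPN over one UTC day (day = 'YYYY-MM-DD')."""
    start = f"{day}T00:00:00Z"
    end = f"{day}T23:59:59Z"
    return (
        f"userPrincipalName eq '{upn}' "
        f"and createdDateTime ge {start} and createdDateTime le {end}"
    )

url = (
    "https://graph.microsoft.com/v1.0/auditLogs/signIns?$filter="
    + signins_filter("user@contoso.com", "2026-02-20")
)
```

Note that the portal view applies its own filters (for example, interactive vs. non-interactive sign-ins), which is a common reason API counts come out higher.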
Copy data to Oracle destination
We are trying to copy data to an Oracle DWH, and we are facing issues when trying different settings for the "Write Batch Size" parameter. The copy activity works when we set "Write Batch Size" to 1, but of course performance is bad: it writes about 10,000 rows in 5 minutes. To speed up the copy we tried the default value of 10,000, but in this case the copy fails with the following error:
Failure happened on 'Sink' side. ErrorCode=UserErrorOdbcOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=ERROR [HY000] [Microsoft][ODBC Oracle Wire Protocol driver][Oracle]ORA-00604: error occurred at recursive SQL level 1 ORA-01031: insufficient privileges Error in parameter 1.,Source=Microsoft.DataTransfer.ClientLibrary.Odbc.OdbcConnector,''Type=Microsoft.DataTransfer.ClientLibrary.Odbc.Exceptions.OdbcException,Message=ERROR [HY000] [Microsoft][ODBC Oracle Wire Protocol driver][Oracle]ORA-00604: error occurred at recursive SQL level 1 ORA-01031: insufficient privileges Error in parameter 1.,Source=msora28.dll,'
So far we have INSERT privileges on the Oracle schema (in fact writing works with the parameter set to 1 and using direct SQL), but it looks like something different is used with the default Write Batch Size. We don't want to focus on the error message; it was obviously raised on the Oracle side. But we need more information to understand what's causing the issue. It looks like ADF uses two different ways to copy data depending on the value of the parameter. Any help would be greatly appreciated. Thanks in advance, Alessandro
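Conceptually, "Write Batch Size" controls how many rows are sent per round trip: with a value of 1 each row is a separate INSERT, while larger values let the ODBC driver use array/bulk binding, which can exercise a different Oracle code path (and potentially different privileges). The sketch below only illustrates the chunking behavior; the names are illustrative, not ADF internals.

```python
# Sketch: what "Write Batch Size" conceptually controls - rows per round trip.
# Batch size 1 means one INSERT per row; larger batches use array binding
# (e.g. cursor.executemany in pyodbc), which may take a different Oracle code
# path. All names here are illustrative, not ADF internals.

from typing import Iterator

def batches(rows: list, batch_size: int) -> Iterator[list]:
    """Yield rows in chunks of batch_size, mirroring ADF's write batching."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

rows = [("a", 1), ("b", 2), ("c", 3), ("d", 4), ("e", 5)]
# batch size 1 -> 5 round trips; batch size 2 -> 3 round trips
assert len(list(batches(rows, 1))) == 5
assert len(list(batches(rows, 2))) == 3
```

Since ORA-01031 only appears with batching, checking what the bulk path does server-side (for example via an Oracle trace of the failing session) would show which privileged recursive SQL is being triggered.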
How do I send Azure APIM product subscription approval to different email addresses
I am trying to work out, if we have an Azure APIM instance shared between different teams, how I can send approval emails to different email addresses for different APIs/products. I need to send the approval emails for each product to the respective team's approver. How can this be achieved? By default the APIM instance sends the approval to the APIM administrator's email address.
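Since APIM's built-in notification goes to a single administrator address, one common workaround is to forward the subscription-request event to your own handler (for example via a Logic App or Azure Function) and route it per product. A minimal sketch of the routing piece, with a hypothetical product-to-approver mapping and no actual mail sending:

```python
# Sketch: route APIM subscription-approval notifications per product.
# APIM notifies one administrator address, so a webhook/Logic App/Function can
# look up the product and forward the approval mail to the owning team.
# The mapping and addresses below are hypothetical placeholders.

PRODUCT_APPROVERS = {
    "payments-api": "payments-team@contoso.com",
    "hr-api": "hr-team@contoso.com",
}

def approver_for(product_id: str, default: str = "apim-admin@contoso.com") -> str:
    """Return the team approver for a product, falling back to the APIM admin."""
    return PRODUCT_APPROVERS.get(product_id, default)
```

The handler would then call your mail service (or the APIM REST API to approve/reject) with the resolved address.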
Traffic processing BGP Azure VPN gateway A/A
Hello, can someone explain how Azure processes traffic with a VPN gateway implemented in active-active mode? Azure Firewall Premium is also configured. BGP is set up without preferences. A user-defined route points the next hop to Azure Firewall. Is it possible in this scenario for asymmetric routing to occur, with traffic dropped by Azure Firewall? My understanding is that we need to configure a user-defined route on the GatewaySubnet to inspect traffic to the peered subnets; otherwise the firewall doesn't see traffic passing through the VPN gateway. Traffic going through the IPsec tunnels can take different paths, and the firewall doesn't interfere unless everything is routed to it by a user-defined route.
Azure application insights workspace based migration related questions
Hello, we have successfully migrated our classic Application Insights instances to workspace-based. After migration, I see that new data is getting stored in the linked Log Analytics workspace (which is expected), but new data is also still being stored in the classic Application Insights resource. Per the documentation, after migration old data should remain in the classic Application Insights resource and new data should be stored in the linked Log Analytics workspace: https://learn.microsoft.com/en-us/azure/azure-monitor/app/convert-classic-resource
Questions:
- Why is new data still being stored in the old classic App Insights resource after migration? This is not mentioned in https://learn.microsoft.com/en-us/azure/azure-monitor/app/convert-classic-resource. Let us assume it is stored to support backward compatibility; for how many days is this supported after migration?
- We have existing Power BI reports which pull data from classic Application Insights. After migration, if I want some data from the old App Insights and some from the new one, I would have to write two separate queries and combine the results. Is my understanding correct?
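For the second question, yes: a report spanning the migration date would query both sources and merge the rows. A minimal sketch of the merge step, assuming each query returns rows as dictionaries with (illustrative) `timestamp` and `name` fields:

```python
# Sketch: combine rows from the classic Application Insights query API with
# rows from the Log Analytics workspace query API for a report spanning the
# migration date. Field names are illustrative, not fixed schema.

def combine_results(classic_rows: list, workspace_rows: list) -> list:
    """Merge the two result sets, drop duplicates, and sort by timestamp."""
    seen = set()
    merged = []
    for row in classic_rows + workspace_rows:
        key = (row["timestamp"], row["name"])
        if key not in seen:
            seen.add(key)
            merged.append(row)
    return sorted(merged, key=lambda r: r["timestamp"])

classic = [{"timestamp": "2026-01-01T00:00:00Z", "name": "pageView"}]
workspace = [{"timestamp": "2026-02-01T00:00:00Z", "name": "pageView"}]
```

In Power BI the same effect is usually achieved by appending the two query tables, so the deduplication guard above matters if data is double-written during the overlap window.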
Azure Synapse Analytics Pricing Meters - VCore
Hi, I am currently trying to understand the billing meters for Azure Synapse Analytics, and here are my questions:
- Is there any API or website where we can deep-dive into the information for each meter?
- Can anyone explain to me what the vCore meter is for Azure Synapse Analytics?
Regards,
Secondary mailboxes and FSLogix Roam Identity
Hi, got a bit of a puzzler. We have a client who uses Outlook to access the main mailbox on the tenant, but they also have a secondary mailbox added to Outlook from a different tenant, so when they log in it authenticates to both; all good. The reason they have it set up this way is to do with signatures. With the existing FSLogix this works fine. We then upgraded them to the latest version, which changes the authentication method and puts the token in Entra ID; the secondary mailbox now asks for the password every time, as it's on another tenant. Makes sense, so we enabled Roam Identity to restore the status quo. However, this then pulls the machine out of Entra ID/Intune, and the recommendation is not to use Roam Identity if enrolled in Intune. Has anyone else come across this, or any way forward/guidance? We have about 50+ users set up this way. Thanks
Help! - How is VNet traffic reaching vWAN/on‑prem when the VNet isn’t connected to the vWAN hub
Hello, I need some clarity on how the following is working. Attached is a network diagram of our current setup. The function apps (in VNet-1) initiate connections to specific IP:Port or FQDN:Port endpoints in the on-premises network(s). A private DNS zone ensures that any FQDN is resolved to the correct internal IP address of the on-prem endpoint. In our setup, both the function app and the external firewall reside in the same VNet. This firewall is described as “unattached” because it is not the built-in firewall of a secured vWAN hub, but rather an independent Azure Firewall deployed in that VNet. The VNet has a user-defined default route (0.0.0.0/0) directing all outbound traffic to the firewall’s IP. The firewall then filters the traffic, allowing only traffic destined to whitelisted on-premises IP:Port or FQDN:Port combinations (using IP Groups), and blocking everything else. The critical question, and the part I am unable to figure out, is: once the firewall permits a packet, how does Azure know to route it to the vWAN hub and on to the site-to-site VPN? VNet-1 truly has no connection at all to the vWAN hub (no direct attachment, no peering, no VPN from the NVA), yet the traffic is still reaching the on-prem sites. Am I missing something obvious? Any help on this would be appreciated. Thank you!
Your computer was unable to connect to the remote computer
I'm having this AVD issue with a newly set up workspace. It's a SessionDesktop application with a host pool. The web version of the client works fine and can connect and open an RDP session, but the Windows App will not work either on-prem or off-prem, showing the error in the title when attempting to launch the session. I have tried every setting I can find, from RDP properties to network ones: RDP Shortpath, Entra SSO, CredSSP, etc. Even if there were some sort of on-prem network issue, it should still work off-prem, and it doesn't. But the web client works fine, so I can't figure out what would cause this. The application is just "SessionDesktop" and has no configurable parameters other than Display Name. The host pool has a private endpoint, and when attempting to launch from the Windows App I can see some traffic going through our firewalls between the app and the PE, as well as a few FQDNs like windows365.microsoft.com, xxx.rdweb-g-us-r0-wvd.microsoft.com, xxx.afdfp-rdgateway-r0.wvd.microsoft.com, etc. It's all 443 traffic though, no 3389 or 3390. Entra logs show successful auth to the Windows App, and the Conditional Access policy result is Success with grant controls satisfied and session controls enforced. I have Windows App version 2.0.918.0 with client version 1.2.6876.0, which should be the latest at the time of this writing. I tried the old deprecated Remote Desktop app and it does the same thing. One other thing I tried was downloading the .rdpw file from the web client and adding a bunch of parameters to the RDP advanced config (gatewayusagemethod, gatewaybrokeringtype, WVD endpoint pool, etc.), as they don't seem to be in there by default, but it had no effect. I suspect those properties should be dynamically added at runtime rather than baked into the config. Any help would be appreciated. Thanks.
Improper AVD Host Decommissioning – A Practical Governance Framework
Hi everyone, after working with multiple production Azure Virtual Desktop environments, I noticed a recurring issue that rarely gets documented properly: improper host decommissioning. Scaling out AVD is easy. Scaling down safely is where environments silently drift. Common issues I’ve seen in the field:
- Session hosts deleted before drain completion
- Orphaned Entra ID device objects
- Intune-managed device records left behind
- Stale registration tokens
- FSLogix containers remaining locked
- Defender onboarding objects not cleaned
- Host pool inconsistencies over time
The problem is not technical complexity. It’s lifecycle governance. So I built a structured approach to host decommissioning focused on:
- Drain validation
- Active session verification
- Controlled removal from the host pool
- VM deletion sequencing
- Identity cleanup validation
- Registration token rotation
- Logging and execution safety
I’ve published a practical framework here, fully documented, including validation logic and logging: https://github.com/modernendpoint/AVD-Host-Decommission-Framework
The goal is simple: not just removing a VM, but preserving platform integrity. I’m curious: how are you handling host lifecycle management in your AVD environments? Fully automated? Manual? Integrated with scaling plans? Identity cleanup included? Would love to hear how others approach this.
Menahem Suissa
AVD | Intune | Identity-Driven Architecture
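The lifecycle steps above can be sketched as an ordered pipeline where each gate must pass before the next runs. This is only an illustrative skeleton with stub checks; a real implementation would call the AVD, Entra ID, and Intune APIs (as the linked framework does).

```python
# Sketch of the decommissioning sequence as an ordered pipeline of checks:
# each step must pass before the next runs. The step functions are stubs -
# real implementations would call the AVD, Entra ID, and Intune APIs.

from typing import Callable, List, Tuple

def decommission(host: str, steps: List[Tuple[str, Callable[[str], bool]]]) -> List[str]:
    """Run steps in order; stop at the first failure and report progress."""
    log = []
    for name, check in steps:
        if not check(host):
            log.append(f"FAILED: {name}")
            return log
        log.append(f"ok: {name}")
    return log

steps = [
    ("drain mode enabled", lambda h: True),
    ("no active sessions", lambda h: True),
    ("removed from host pool", lambda h: True),
    ("VM deleted", lambda h: True),
    ("Entra ID / Intune objects cleaned", lambda h: True),
]
```

Stopping at the first failed gate is what prevents the "VM deleted before drain" class of drift described above, since deletion never runs while an earlier check fails.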
Slow response times in different regions
I have a website which is primarily for people in Asia and uses Front Door. Microsoft says that content served through Front Door is hosted in POPs all over the world, but Grafana checks show consistently bad performance in Asia. Ping response times from London are consistently low, but around 150 ms from Singapore, frequently spiking to over 500 ms. While London is closer to where the origin is hosted, I wouldn't expect pings to go to the origin; shouldn't they be handled by Front Door? Is there any way I can verify that the site is being propagated to regional POPs in the APAC area? (Solved)
Team Foundation Server 2018 Data is Null
Hi all, I am trying to configure Team Foundation Server 2018 with SQL Server, but I am getting an error at the readiness check validation: "TF255407: An error occurred while attempting to validate the configuration database. Error message: Data is Null. This method or property cannot be called on Null values." I already have a TFS setup running on one server which is currently live and working fine. I want to set up one more server to test the backup of the TFS data, so I have installed the same version of TFS (2018) and SQL Server 2016. Below are all the activities I have performed.
Running TFS instance details:
- TFS Version: 2018
- SQL Server: 2016 SP1
- OS: Windows Server 2019
- TFS DBs: Tfs_Configuration: 1.5 GB, Tfs_Projects: 48 GB, Tfs_Products: 24 GB, Tfs_AppDevelopment: 29 GB, Tfs_Migration: 7 GB
New server details:
- TFS Version: 2018
- SQL Server: 2016 SP1
- OS: Windows Server 2016
I restored all the above DB backups on the new server, installed TFS 2018, and when configuring I am selecting an existing deployment.
Regards, Imran Shaikh
Using Claude Opus 4.6 in Github Copilot
The model selection in GitHub Copilot got richer with the addition of Claude Opus 4.6. The model's capability, along with the addition of agents, makes it a powerful combination for building complex code that would otherwise require many hours or days. Claude Opus 4.6 has better coding skills than the previous models. It also plans more carefully, performs more reliably in larger codebases, and has better code review and debugging skills to catch its own mistakes. In my current experiment, I used it multiple times to review its own code, and while it took time (understandably) to get familiar with the code base, after that initial evaluation effort the suggestions for fixes/improvements were on the dot and often even better than a human reviewer (me in this case). Opus 4.6 can also run agentic tasks for longer. Following the release of the model, Anthropic published a paper on using Opus 4.6 to build a C compiler with a team of parallel Claudes. The compiler was built by 16 agents from scratch, producing a Rust-based C compiler capable of compiling the Linux kernel. This is an interesting paper (shared in resources).
Using Claude Opus 4.6 in Agentic Mode
In less than an hour, I built a document analyzer to analyze content, extract insights, build knowledge graphs, and summarize elements. The code was built using Claude Opus 4.6 along with Claude Agents in Visual Studio Code. The initial prompt built the code, and in the next hour, after a few more interactions, unit tests were added and the UI worked as expected, specifically for rendering the graphs. In the second phase, I converted the capabilities into agents with tools and skills, making the codebase agentic. All this was done in Visual Studio Code using GitHub Copilot. Adding the complexity of agentic execution was staggered across phases, but the coding agent may well have built it right in the first instance with detailed specifications and instructions.
The agent could also fix UI requirements and problems in graph rendering from the snapshot shared in the chat window. That, along with the logging, was sufficient to quickly get to an application which worked as expected. The final graph rendering used Mermaid diagrams in JavaScript while the backend was in Python. [Image: Knowledge Graph rendering using Mermaid]
What are Agents?
Agents perform complete coding tasks end-to-end. They understand your project, make changes across multiple files, run commands, and adapt based on the results. An agent runs in local, background, cloud, or third-party mode. An agent takes a high-level task and breaks it down into steps. It executes those steps with tools and self-corrects on errors. Multiple agent sessions can run in parallel, each focused on a different task. On creating a new agent session, the previous session remains active and can be accessed between tasks via the agent sessions list. The Chat window in Visual Studio Code allows changing the model and also the agent mode. The agent mode can be local for local agents, or run in the background or on the cloud. Additionally, third-party agents are also available for coding. In the snapshot below, the Claude Agent (a third-party agent) is used. In this project Azure GPT-4.1 was used in the code to perform the document analysis, but this can be changed to any model of choice. I also used the "Ask before edits" mode to track the command runs. Alternatively, the other option was to let the agent run autonomously. [Image: Visual Studio Code - Models and Agent Mode]
The local agentic mode was also a good option, and I used it a few times, specifically because it is not constrained by network connectivity. But when the local compute does not suffice, the cloud mode is the next best option. Background agents are CLI-based agents, such as Copilot CLI, running in the background on your local machine.
They operate autonomously in the editor, and background agents use Git worktrees to work in an environment isolated from your main workspace, preventing conflicts with your active work.
How to get the model?
The model is accessible to GitHub Copilot Pro/Pro+, Business, and Enterprise users. Opus 4.6 operates more reliably in large codebases, offering improved code review and debugging skills. The Fast mode for Claude Opus 4.6, rolled out in research preview, provides a high-speed option with output token delivery up to 2.5 times faster while maintaining comparable capabilities to Opus 4.6.
Resources
- https://www.anthropic.com/news/claude-opus-4-6
- https://www.anthropic.com/engineering/building-c-compiler
- https://github.blog/changelog/2026-02-05-claude-opus-4-6-is-now-generally-available-for-github-copilot
- https://code.visualstudio.com/docs/copilot/agents/overview
Azure Advisor aggregate score for 2+ subscriptions - how is it calculated?
Dear all, I would like to understand how Azure Advisor calculates the aggregations of the 5 pillars across multiple subscriptions. In the example below we have values for subscription 1 (Cost = 68, Security = 47, Reliability = 86, Operational Excellence = 83, Performance = 100) and for subscription 2 (Cost = 35, Security = 69, Reliability = 91, Operational Excellence = 79, Performance = 100). When selecting both subscriptions, we obtain different aggregate values. Naively I might have expected the aggregate Advisor scores to be the arithmetic average of the two, but that is not the case. Any help is much appreciated! ❤️ Thank you very much in advance, Best Regards, Eva
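One plausible explanation (an assumption worth verifying against the Advisor score documentation) is that the aggregate is a weighted average rather than an arithmetic mean, with each subscription weighted by something like its resource count or consumed cost. The sketch below shows how any unequal weighting pulls the aggregate away from the mean, using the Cost pillar values from the example:

```python
# Sketch: why an aggregate can differ from the arithmetic mean. If Advisor
# weighs each subscription's score (for example by resource count or consumed
# cost - the exact weighting is an assumption here), a larger subscription
# pulls the aggregate toward its own score.

def weighted_score(scores, weights):
    """Weighted average of per-subscription scores."""
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

# Cost pillar from the example: 68 and 35.
arithmetic_mean = (68 + 35) / 2            # 51.5
skewed = weighted_score([68, 35], [1, 3])  # 43.25 - subscription 2 dominates
```

So if subscription 2 is much larger than subscription 1, an aggregate well below 51.5 for the Cost pillar would be consistent with cost-weighted aggregation.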
Durable function service bus trigger in KEDA
I am trying to use KEDA for a durable function written in Python. The trigger for the function is a Service Bus trigger. The following is the ScaledObject for the function:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: durable-function
  labels: {}
spec:
  scaleTargetRef:
    name: durable-function
  triggers:
    - type: azure-servicebus
      metadata:
        direction: in
        topicName: topic-name
        subscriptionName: durable-function-trigger
        connectionFromEnv: SERVICEBUS_CONNECTIONSTRING_ENV_NAME

The error that I get is shown below:

fail: Function.df_starter[3] Executed 'Functions.df_starter' (Failed, Id=cd5ba757-9b30-4363-b018-fbe56bb26052, Duration=29ms) Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Functions.df_starter ---> System.InvalidOperationException: Webhooks are not configured at Microsoft.Azure.WebJobs.Extensions.DurableTask.HttpApiHandler.GetWebhookUri() in D:\a\_work\1\s\src\WebJobs.Extensions.DurableTask\HttpApiHandler.cs:line 1244 at Microsoft.Azure.WebJobs.Extensions.DurableTask.HttpApiHandler.GetBaseUrl() in D:\a\_work\1\s\src\WebJobs.Extensions.DurableTask\HttpApiHandler.cs:line 1133 at Microsoft.Azure.WebJobs.Extensions.DurableTask.HttpApiHandler.GetInstanceCreationLinks() in D:\a\_work\1\s\src\WebJobs.Extensions.DurableTask\HttpApiHandler.cs:line 1150 at Microsoft.Azure.WebJobs.Extensions.DurableTask.BindingHelper.DurableOrchestrationClientToString(IDurableOrchestrationClient client, DurableClientAttribute attr) in D:\a\_work\1\s\src\WebJobs.Extensions.DurableTask\Bindings\BindingHelper.cs:line 48 at Microsoft.Azure.WebJobs.Host.Bindings.PatternMatcher.<>c__DisplayClass5_0`3.<New>b__0(Object src, Attribute attr, ValueBindingContext ctx) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Bindings\PatternMatcher.cs:line 40 at Microsoft.Azure.WebJobs.Host.Bindings.BindToInputBindingProvider`2.ExactBinding.BuildAsync(TAttribute attrResolved, ValueBindingContext context) in
C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Bindings\BindingProviders\BindToInputBindingProvider.cs:line 221 at Microsoft.Azure.WebJobs.Host.Bindings.BindingBase`1.BindAsync(BindingContext context) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Bindings\BindingBase.cs:line 50 at Microsoft.Azure.WebJobs.Binder.BindAsync[TValue](Attribute[] attributes, CancellationToken cancellationToken) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Bindings\Runtime\Binder.cs:line 117 at Microsoft.Azure.WebJobs.Script.Binding.FunctionBinding.BindStringAsync(BindingContext context) in /src/azure-functions-host/src/WebJobs.Script/Binding/FunctionBinding.cs:line 220 at Microsoft.Azure.WebJobs.Script.Binding.ExtensionBinding.BindAsync(BindingContext context) in /src/azure-functions-host/src/WebJobs.Script/Binding/ExtensionBinding.cs:line 112 at Microsoft.Azure.WebJobs.Script.Description.WorkerFunctionInvoker.<>c__DisplayClass12_0.<<BindInputsAsync>b__1>d.MoveNext() in /src/azure-functions-host/src/WebJobs.Script/Description/Workers/WorkerFunctionInvoker.cs:line 142 --- End of stack trace from previous location --- at Microsoft.Azure.WebJobs.Script.Description.WorkerFunctionInvoker.BindInputsAsync(Binder binder) in /src/azure-functions-host/src/WebJobs.Script/Description/Workers/WorkerFunctionInvoker.cs:line 146 at Microsoft.Azure.WebJobs.Script.Description.WorkerFunctionInvoker.InvokeCore(Object[] parameters, FunctionInvocationContext context) in /src/azure-functions-host/src/WebJobs.Script/Description/Workers/WorkerFunctionInvoker.cs:line 74 at Microsoft.Azure.WebJobs.Script.Description.FunctionInvokerBase.Invoke(Object[] parameters) in /src/azure-functions-host/src/WebJobs.Script/Description/FunctionInvokerBase.cs:line 82 at Microsoft.Azure.WebJobs.Host.Executors.VoidTaskMethodInvoker`2.InvokeAsync(TReflected instance, Object[] arguments) in 
C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Executors\VoidTaskMethodInvoker.cs:line 20 at Microsoft.Azure.WebJobs.Host.Executors.FunctionInvoker`2.InvokeAsync(Object instance, Object[] arguments) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Executors\FunctionInvoker.cs:line 52 at Microsoft.Azure.WebJobs.Host.Executors.FunctionExecutor.InvokeWithTimeoutAsync(IFunctionInvoker invoker, ParameterHelper parameterHelper, CancellationTokenSource timeoutTokenSource, CancellationTokenSource functionCancellationTokenSource, Boolean throwOnTimeout, TimeSpan timerInterval, IFunctionInstance instance) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Executors\FunctionExecutor.cs:line 581 at Microsoft.Azure.WebJobs.Host.Executors.FunctionExecutor.ExecuteWithWatchersAsync(IFunctionInstanceEx instance, ParameterHelper parameterHelper, ILogger logger, CancellationTokenSource functionCancellationTokenSource) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Executors\FunctionExecutor.cs:line 527 at Microsoft.Azure.WebJobs.Host.Executors.FunctionExecutor.ExecuteWithLoggingAsync(IFunctionInstanceEx instance, FunctionStartedMessage message, FunctionInstanceLogEntry instanceLogEntry, ParameterHelper parameterHelper, ILogger logger, CancellationToken cancellationToken) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Executors\FunctionExecutor.cs:line 306 --- End of inner exception stack trace --- at Microsoft.Azure.WebJobs.Host.Executors.FunctionExecutor.ExecuteWithLoggingAsync(IFunctionInstanceEx instance, FunctionStartedMessage message, FunctionInstanceLogEntry instanceLogEntry, ParameterHelper parameterHelper, ILogger logger, CancellationToken cancellationToken) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Executors\FunctionExecutor.cs:line 352 at Microsoft.Azure.WebJobs.Host.Executors.FunctionExecutor.TryExecuteAsync(IFunctionInstance 
functionInstance, CancellationToken cancellationToken) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Executors\FunctionExecutor.cs:line 108
I can run the durable function locally using VS Code and it works perfectly. However, when I deploy it to a local Kubernetes cluster using KEDA, it fails. Bear in mind that I also managed to run a normal function app in KEDA and it worked perfectly. I am assuming that the problem is that the durable functions framework/package is not recognised by KEDA. Can you please advise on this issue?
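The "Webhooks are not configured" exception suggests the Durable Task extension cannot derive its webhook base URL. On Kubernetes there is no App Service environment, so (as an assumption worth verifying against the Durable Functions docs) the pod may need `WEBSITE_HOSTNAME` set to a host:port the Functions host can use for webhook URLs. A quick preflight check of that assumption:

```python
# Sketch: the "Webhooks are not configured" failure suggests the Durable Task
# extension cannot derive its webhook base URL. On Kubernetes there is no App
# Service environment, so (assumption to verify) WEBSITE_HOSTNAME may need to
# be set on the pod to the service's reachable host:port.

def webhook_env_ok(env: dict) -> bool:
    """Return True if the variable the host uses to build webhook URLs looks set."""
    return bool(env.get("WEBSITE_HOSTNAME"))

assert webhook_env_ok({"WEBSITE_HOSTNAME": "durable-function.default.svc:80"})
assert not webhook_env_ok({})
```

If that is the cause, adding the variable to the Deployment's `env` block (alongside `SERVICEBUS_CONNECTIONSTRING_ENV_NAME`) would be the fix to try first.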
Missing equivalent for Python MemorySearchTool and AgentMemorySettings in C# SDK
Hi Team, I am currently working with the Azure AI Foundry Agent Service (preview). I’ve been reviewing the documentation for managed long-term memory, specifically the "Automatic User Memory" features demonstrated in the Python SDK here: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/memory-usage?view=foundry&tabs=python. In Python, it is very straightforward to attach a MemorySearchTool to an agent and use AgentMemorySettings(scope="user_123") during a run. This allows the service to automatically extract, consolidate, and retrieve memories without manual intervention. However, in the .NET SDK (https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/ai/Azure.AI.Projects#memory-store-operations), I only see the low-level MemoryStoreClient, which appears to require manual CRUD operations on memory items.
My questions:
- Is there an equivalent high-level AgentMemorySearchTool or similar abstraction in the current C# NuGet package (Azure.AI.Projects) that handles automatic extraction and retrieval?
- If not currently available, is this feature on the immediate roadmap for the .NET SDK?
- Are there any samples showing how to achieve "automatic" memory (where the agent extracts facts itself) using the C# SDK without having to build a custom orchestration layer or call REST APIs directly?
Any guidance on the timeline for feature parity between the Python and .NET SDKs regarding agent memory would be greatly appreciated. SDK version: Azure.AI.Projects 1.2.0-beta.5
macOS: SSO no longer fully functional on AVD (Win11 25H2)
Hello everyone, since updating our test Azure Virtual Desktop session hosts from Windows 11 23H2 to 25H2 (26200.7462), we've been experiencing an SSO issue that exclusively affects macOS clients.
Symptoms for macOS users (Windows App), for example in Teams:
- Teams shows the user as "Unknown User"
- Chat and collaboration features fail to load
- Error message: "You need to sign in again. This may be a requirement from your IT department or Teams, or the result of a password update. - Sign in"
- After clicking "Sign in," only a window appears with "Continue with sign-in" (no password/MFA prompt)
- After this, all other applications work without further authentication
Technical details:
- macOS device: Apple M4 Pro, macOS Tahoe 26.2, installed Windows App version 11.3.2 (2848)
- dsregcmd /status: no errors detected; PRT is active and was updated at sign-in
- Entra sign-in logs: error code 9002341
- Event log on the session host (AAD-Operational):
  - Event ID 1098, error 0xCAA2000C: The request requires user interaction. Code: interaction_required. Description: AADSTS9002341: User is required to permit SSO.
  - Event ID 1097, error 0xCAA90056: Renew token by the primary refresh token failed. Logged at RefreshTokenRequest.cpp, line: 148, method: RefreshTokenRequest::AcquireToken.
Observations:
- Affects both managed (internal) and unmanaged (external) macOS devices
- Does NOT affect Windows clients connecting via the Windows App
- Interesting: if a macOS user starts the session (with the error) and then reconnects on a Windows device, authentication works automatically there
Workaround: the issue can be resolved for macOS clients by removing the "DE" flag from "Automatic app sign-in" in the following file: C:\Windows\System32\IntegratedServicesRegionPolicySet.json
Questions:
- Is this a known issue? Has anyone experienced similar issues with macOS clients after the 25H2 update?
- Why does this issue only occur with macOS clients?
- Why does SSO only work after removing the "DE" flag for macOS devices, and why are Windows devices not affected?
I would appreciate any insights or confirmation of this issue! Thank you and greetings, FT_
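If the workaround needs to be applied to many session hosts, the JSON edit can be scripted. The structure below is a guess at the file's shape (the real `IntegratedServicesRegionPolicySet.json` schema should be inspected first, and the file backed up), so treat this purely as an illustration of the transformation:

```python
# Sketch of the workaround described above: remove the "DE" (disabled) flag
# for the automatic app sign-in feature in IntegratedServicesRegionPolicySet.json.
# The structure below is a HYPOTHETICAL guess at the file's shape - inspect the
# real file before scripting against it, and back it up first.

import json

def strip_de_flag(policy_json: str, feature_keyword: str = "sign-in") -> str:
    """Remove 'DE' from the disabled-regions list of matching policies."""
    doc = json.loads(policy_json)
    for policy in doc.get("policies", []):
        if feature_keyword in policy.get("$comment", "").lower():
            regions = policy.get("conditions", {}).get("region", {})
            if "DE" in regions.get("disabled", []):
                regions["disabled"].remove("DE")
    return json.dumps(doc, indent=2)

sample = json.dumps({
    "policies": [{
        "$comment": "Automatic app sign-in",
        "conditions": {"region": {"disabled": ["DE", "FR"]}},
    }]
})
```

Note the file lives under System32 and may be protected or replaced by servicing updates, so this workaround may not persist across cumulative updates.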
Single-Sign On
After troubleshooting an issue for a customer, we determined that the prerequisites for enabling SSO at the AVD host pool level are not strictly enforced when a user executes the SSO workflow from MSRDC or the Windows App. Meaning that if an administrator does not enable the -IsRemoteDesktopEnabled flag on the service principals "Microsoft Remote Desktop" and "Windows Cloud Login" respectively, the connection still proceeds, just without the expected Entra ID sign-in experience.
Setup: deploy Entra ID-joined session hosts to a host pool and set the "Microsoft Entra single sign-on" RDP property to "Connections will use Microsoft Entra authentication to provide single sign-on", or update the RDP connection string with 'enablerdsaadauth:i:1'.
Result: the user will not receive the 'Windows Security' dialog box to access the session host with their Entra ID credentials.
Caveat: be aware that to sign in with Entra ID credentials, at a minimum the host pool RDP settings must contain 'targetisaddjoined:i:1'. Microsoft states this is going away and blending into 'enablerdsaadauth:i:1', which also enables SSO. That seems a bit of an odd move in my opinion; having two separate RDP properties makes sense if a company does not want SSO. But it is in alignment with Microsoft's push for passwordless authentication. For the Microsoft AVD team: why does this behavior exist, and is it on the roadmap to be fixed if it's a known gap?
Admin‑On‑Behalf‑Of issue when purchasing subscription
Hello everyone! I want to ask if anyone has the same issue as we do when creating PAYG Azure subscriptions in a customer's tenant in which we have delegated access via GDAP through Partner Center. (The question below is a bit AI-formatted.)
When an Azure NCE subscription is created for a customer via an Indirect Provider portal, the CSP Admin Agent (foreign principal) is not automatically assigned Owner on the subscription. As a result:
- AOBO (Admin‑On‑Behalf‑Of) does not activate
- The subscription is invisible to the partner when accessing Azure via Partner Center service links
- The partner cannot manage and deploy to a subscription they just provisioned
This breaks the expected delegated administration flow.
Expected behavior for CSP‑created Azure subscriptions:
- The CSP Admin Agent group should automatically receive Owner (or equivalent) on the subscription
- AOBO should work immediately, without customer involvement
- The partner should be able to see the subscription in the Azure portal and deploy resources
Actual behavior observed for Azure NCE subscriptions created via an Indirect Provider:
- No RBAC assignment is created for the foreign AdminAgent group
- The subscription is visible only to users inside the customer tenant
- The Partner Center role (Admin Agent foreign group) is present, but without Azure RBAC
Required customer workaround: for each new Azure NCE subscription, the customer must:
1. Sign in as Global Admin
2. Use “Elevate access to manage all Azure subscriptions and management groups”
3. Assign themselves Owner on the subscription
4. Manually assign Owner to the partner’s foreign AdminAgent group
Only after this does AOBO start working.
Example: the partner tries to access the subscription at https://portal.azure.com/#@customer.onmicrosoft.com/resource/subscriptions/<subscription-id>/overview but no subscription is visible ("None of the entries matched the given filter"). The customer's Global Admin then elevates access per https://learn.microsoft.com/en-us/azure/role-based-access-control/elevate-access-global-admin?tabs=azure-portal%2Centra-audit-logs#step-1-elevate-access-for-a-global-administrator and applies the manual RBAC fix in Cloud Shell:

az role assignment create \
  --assignee-object-id "<AdminAgent-Foreign-Group-ObjectId>" \
  --role "Owner" \
  --scope "/subscriptions/<subscription-id>" \
  --assignee-principal-type "ForeignGroup"

After this, AOBO works as expected for delegated administrators (foreign user accounts).
Why this is a problem:
- Partners sell Azure subscriptions that they cannot access
- Forces customer involvement for every new subscription
- Breaks delegated administration principles
For Indirect CSPs managing many tenants, this is a significant operational blocker.
Key questions to Microsoft / the community:
- Does anyone else struggle with this?
- Is this behavior by design for Azure NCE + Indirect CSP?
- Am I missing some point of view on why not to do it in the suggested way?
Events
Explore practical AI use cases available through Microsoft Marketplace—from prebuilt AI apps and agents to AI‑powered solutions that simplify buying and deployment. As organizations look to move f...
Wednesday, Feb 25, 2026, 10:00 AM PST (Online)
Recent Blogs
- Today, we are thrilled to announce the release of MCP App support in the Azure Functions MCP (Model Context Protocol) extension! You can now build MCP Apps using the Functions MCP Extension in Pyth... (Feb 23, 2026)
- Executive Summary: Telecommunications providers are under unprecedented pressure to reignite revenue growth amid market saturation, commoditization of core services, rising infrastructure costs, and... (Feb 23, 2026)