Recent Discussions
Azure Training & Certification
Get the tools you need to advance your career with Microsoft Azure. Expand your career opportunities in the cloud with three offers that combine training and industry-recognized Microsoft Certified Professional (MCP) certifications. Choose the tools that can help you succeed—from free on-demand learning to Microsoft Azure certification to MCP exam prep and retake opportunities. Prove to the world and show employers that you're ready to take advantage of the growing collection of integrated cloud services in Azure, as you gain the skills to develop solutions that can lower total cost and help improve scalability, security, and privacy.

Choose the Azure certification training offer that best meets your needs:

Free training – Learn and build Azure skills. Get full access to Azure massive open online courses (MOOCs) in the Microsoft training on Open edX catalog. Earn a free certificate of completion for each completed MOOC.

Learn more at: https://aka.ms/azurefreetraining

Azure Course Blueprints
Overview

The Course Blueprint is a comprehensive visual guide to the Azure ecosystem, integrating all the resources, tools, structures, and connections covered in the course into one inclusive diagram. It enables students to map out and understand the elements they've studied, providing a clear picture of their place within the larger Azure ecosystem. It serves as a 1:1 representation of all the topics officially covered in the instructor-led training.

Links: Each icon in the blueprint has a hyperlink to the pertinent document in the learning path on Learn.

Layers: You can filter layers to concentrate on segments of the course by module - for example, just day 1 of AZ-104, by using filters in Visio and selecting modules 1-3.

Enhanced Integration: The Visio Template+ for expert courses such as SC-100 and AZ-305 now features an additional layer that allows you to compare SC-100, AZ-500, and SC-300 within the same diagram. Similarly, you can compare AZ-305, AZ-204, and AZ-104 to identify differences and study gaps. Since SC-300 and AZ-500 are potential prerequisites for SC-100, and AZ-204 or AZ-104 for AZ-305, this comparison is particularly useful for understanding the extra knowledge or skills required to advance to the next level.

Advantages for Students

Defined Goals: The blueprint presents learners with a clear vision of what they are expected to master and achieve by the course's end.
Focused Learning: By spotlighting the course content and learning targets, it steers learners' efforts towards essential areas, leading to more productive learning.
Progress Tracking: The blueprint allows learners to track their advancement and assess their command of the course material.

New Feature: A comprehensive list of topics for each slide deck is now available in a downloadable .xlsx file. Each entry includes a link to Learn and its dependencies.

Download links

Associate level:
AZ-104 Azure Administrator Associate – Blueprint [PDF], Template (Visio) – released 12/14/2023, updated 10/28/2024 – Contents
AZ-204 Azure Developer Associate – Blueprint [PDF], Template (Visio) – released 11/05/2024, updated 11/11/2024 – Contents
AZ-500 Azure Security Engineer Associate – Blueprint [PDF], Template+ (Visio) – released 01/09/2024, updated 10/10/2024 – Contents
AZ-700 Azure Network Engineer Associate – Blueprint [PDF], Template (Visio) – released 01/25/2024, updated 11/04/2024 – Contents
SC-300 Identity and Access Administrator Associate – Blueprint [PDF], Template (Visio) – 10/10/2024 – Contents

Specialty:
AZ-140 Azure Virtual Desktop Specialty – Blueprint [PDF], Template (Visio) – released 01/03/2024, updated 02/05/2024

Expert level:
AZ-305 Designing Microsoft Azure Infrastructure Solutions – Blueprint [PDF], Template+ (AZ-104, AZ-204, AZ-700) – released 05/07/2024, updated 11/28/2024 – Contents
SC-100 Microsoft Cybersecurity Architect – Blueprint [PDF], Template+ (AZ-500, SC-300) – 10/10/2024 – Contents

Skill-based credentialing:
AZ-1002 Configure secure access to your workloads using Azure virtual networking – Blueprint [PDF], Template (Visio) – 05/27/2024 – Contents
AZ-1003 Secure storage for Azure Files and Azure Blob Storage – Blueprint [PDF], Template (Visio) – released 02/07/2024, updated 02/05/2024 – Contents

Benefits for Trainers: Trainers can follow this plan to design a tailored diagram for their course, filled with notes. They can construct this comprehensive diagram during class on a whiteboard and continuously add to it in each session. This evolving visual aid can be shared with students to enhance their grasp of the subject matter.
Introduction to Course Blueprint for Trainers [10 minutes + comments]
Real-life demo: AZ-104 Advanced Networking section [3 minutes]
Visio stencils: Azure icons - Azure Architecture Center | Microsoft Learn

Subscribe if you want to be notified of updates such as new releases.
My email: ilan.nyska@microsoft.com
LinkedIn: https://www.linkedin.com/in/ilan-nyska/

Celebrating 30,000 Downloads! Please consider sharing your anonymous feedback [~40 seconds to complete].

Microsoft Learn
Microsoft Learn is an interactive, quick, and fun way to learn Azure! With Microsoft Learn, you can master new Azure skills with step-by-step interactive tutorials, including videos and hands-on learning.

Learn on your time: Tutorials and modules aligned to role-based certifications fit your schedule.
Learn by doing: Interactive, in-browser coding environments provide hands-on experience.
Get recognized with achievements: Complete modules, test your knowledge, and earn and share achievements to recognize your Azure skills.
Learn the way you want: Choose from free self-paced tutorials and hands-on learning, free structured online courses from Pluralsight, and instructor-led classes from learning partners.

The new learning content is aligned to the new role-based certifications. Let us know in the comments below what you think and what courses you're taking, and visit Microsoft Learn.

Announcing an Azure Communication Services AMA on April 21!
We are very excited to announce a Microsoft Azure 'Ask Microsoft Anything' (AMA) for Azure Communication Services! The AMA will take place on Wednesday, April 21st, 2021 from 9:00 a.m. to 10:00 a.m. PT in the Microsoft Azure AMA space. Add the event to your calendar and view it in your time zone here.

An AMA is a live online event similar to a "YamJam" on Yammer or an "Ask Me Anything" on Reddit. This AMA gives you the opportunity to connect with members of the product engineering team, who will be on hand to answer your questions and listen to feedback.

Staying up to date with the Microsoft Azure roadmap
On September 5, 2018, we launched a new and improved experience on both Azure updates and Products available by region on Azure.com.

Azure updates

Azure updates on Azure.com has been consolidated with the Azure roadmap to provide an all-inclusive experience for users to learn about important Azure product updates and the roadmap in one place. This revamped page allows you to:

Filter by status - All, Now available, In preview, or In development
Search by product - Browse the dropdown or type a product into the text field
Filter by update type - Compliance, Features, Pricing and Offerings, Regions and Datacenters, Event-related announcements, etc.
Subscribe to the RSS feed to keep track of new updates based on the filters you've chosen (a short script for consuming the feed follows at the end of this post)
Provide feedback via UserVoice

Products available by region

Products available by region on Azure.com is the roadmap source of truth for which services are available and coming to specific regions. This site is directly integrated with Azure engineering roadmap tools and now displays:

Which Azure services are currently available or in preview within a particular region
Future estimated release dates for previews or general availability in a particular region
Hover over the Preview and Future availability icons highlighted with a circle to learn when to expect a service to go into preview or become generally available
A dynamic, customizable view that allows you to multi-select products of interest and regions/geographies of interest - including sovereign clouds like Azure Government

Let us know what you think and what you'd like to see in the comments below.
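If you'd rather pull the updates feed into a script than a feed reader, here's a minimal PowerShell sketch; the feed URL is an assumption, so copy the exact address from the RSS link on the page:

# Assumption: feed URL taken from the RSS link on the Azure updates page
$feedUrl = 'https://azure.microsoft.com/en-us/updates/feed/'

# Invoke-RestMethod parses RSS items into objects with title/link/pubDate properties
$items = Invoke-RestMethod -Uri $feedUrl
$items | Select-Object -First 10 title, pubDate, link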
Azure Essentials - Free Training

At Ignite 2017, we launched the new Microsoft Azure Essentials, the best place to get started with and learn more about Azure. Don't know what Azure is, or want to learn more about Azure and the cloud? Just choose a topic and use the curated set of demo videos, hands-on labs, and product trials to learn about and try Azure at your own pace. Be sure to also check out the Azure learning paths and Azure certification. You can access all this content for free at Azure.com/Essentials

Azure Essentials just got an upgrade - Free learning resources
Forget Moore's Law. It seems like the pace of migration to the cloud is doubling every month now. We see the shift to the cloud and love to hear about how it's helping your organization and your career. Azure Essentials is meeting your need to expand your skills and is the single best resource to learn Azure, get training, and access practical, free learning resources. Now we've made it even easier for you to get exactly what you need with these upgrades:

New Azure Essentials topics have been added. Watch the short video, do the hands-on labs, and practice what you learned in a live environment with these new topics: Managing VMs and Resources; Data Visualization and Modeling; Data Analytics.
The new Progress Tracker (requires log in) gives you a quick view of what you've already completed and allows you to add items to your queue for later. You will also see what is new since your last visit.
No more need to log in to access most of the learning resources, so accessing the content is even easier.

Whether you're picking up where you left off or starting your Azure education from scratch, Azure Essentials is more capable and accessible than ever. Take a look.

Announcing an Azure Data and Analytics AMA on October 13!
We are very excited to announce an Azure Data and Analytics AMA on October 13! This is the first in a series of AMAs around Azure, all held here in the Tech Community in this discussion space, coinciding with the Microsoft Azure Hack for Social Justice event. Upcoming dates/events are below:

October 20 - Azure AI & Cognitive Services
October 29 - Azure Serverless & Azure Functions
November 5 - App Services & Standard Web Apps

The AMA will take place on Tuesday, October 13, 2020 from 9:00 a.m. to 10:00 a.m. PT in the Azure AMA space. Add the event to your calendar and view it in your time zone here.

An AMA is a live online event similar to a "YamJam" on Yammer or an "Ask Me Anything" on Reddit. This AMA gives you the opportunity to connect with cloud solution architects who will be on hand to answer your questions and listen to feedback.

Windows Virtual Desktop learning and readiness resources
Would you like to learn more about Windows Virtual Desktop? Consider watching these Ignite sessions:

Scott Manchester's Mechanics Live (20 minutes)
Windows Virtual Desktop Overview (43 minutes)
Windows Virtual Desktop Deep Dive (56 minutes)
A tour of Microsoft Windows Virtual Desktop (20 minutes)
Office in Virtual Desktop environments (53 minutes)
New multi-session virtualization capabilities in Windows (32 minutes)

Register via http://aka.ms/wvdpreview to be notified about the public preview, which will launch later this year.

Deallocate VM on user logoff
Recently we announced the public preview of Start VM on Connect, which allows your deallocated virtual machines to start automatically when the assigned user tries to connect. Let's have a look at how we can optimize cost further by deallocating the VM when it is no longer used.

First, a high-level view of the needed steps:

Create a custom role to deallocate a virtual machine
Create a managed identity for your virtual machines
Allow each virtual machine to deallocate itself via a role assignment
Implement a logoff script and policies around idle/disconnected sessions

So let's start by creating our custom role:

Open the Azure portal, go to Subscriptions and select the appropriate subscription.
Go to Access control (IAM) and select Add a custom role.
Name the custom role and add a description. In this example I'll call it "Deallocate VM on logoff".
On the Permissions tab, add the following permission to the subscription you're assigning the role to: Microsoft.Compute/virtualMachines/deallocate/action
When you're finished, select Ok.

If you prefer a JSON definition, please use the following template:

{
  "properties": {
    "roleName": "Deallocate VM on logoff",
    "description": "This custom role will allow your virtual machines to be deallocated when the user logs off.",
    "assignableScopes": [
      "/subscriptions/<<<SubscriptionID>>>"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Compute/virtualMachines/deallocate/action"
        ],
        "notActions": [],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}
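If you'd rather script the role creation than click through the portal, here's a minimal sketch using the Az.Resources module; it assumes you have saved the JSON above, with your subscription ID filled in, to a local file (the file name is hypothetical):

# Create the custom role from the JSON definition saved above
New-AzRoleDefinition -InputFile '.\deallocate-vm-on-logoff.json'

# Confirm the role now exists
Get-AzRoleDefinition -Name 'Deallocate VM on logoff'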
After we've created our custom role, we'll need to create a managed identity for our virtual machines. With managed identities we don't need to store any credentials locally on the virtual machine or in an Azure Key Vault, and we can give each virtual machine the granular permission to shut down only itself. As this can be a bigger task, depending on the number of virtual machines you have in your personal host pools, I've prepared a script that uses the Azure PowerShell modules to assign those fine-grained permissions, so you may need to install the modules first:

Install-Module -Name Az.Accounts,Az.Compute,Az.DesktopVirtualization,Az.Resources

The script takes the host pool name, the associated resource group, and the role definition name selected above as parameters. It will then iterate through all virtual machines assigned to the specified host pool, create a managed identity where not already present, and create a role assignment limited to the virtual machine itself:

$hostPoolName = "<<<HostPoolName>>>"
$resourceGroupName = "<<<ResourceGroupName>>>"
$roleDefinitionName = "<<<RoleDefinitionName>>>"

Connect-AzAccount

$sessionHosts = Get-AzWvdSessionHost -HostPoolName $hostPoolName -ResourceGroupName $resourceGroupName

foreach ($sessionHost in $sessionHosts) {
    <# get virtual machine by session host reference #>
    $resource = Get-AzResource -ResourceId $sessionHost.ResourceId
    $vm = Get-AzVM -ResourceGroupName $resource.ResourceGroupName -Name $resource.Name

    <# create system-assigned managed identity unless it already exists #>
    $managedIdentity = ($vm.Identity | where Type -eq "SystemAssigned").PrincipalId
    if ($managedIdentity -eq $Null) {
        Update-AzVM -ResourceGroupName $vm.ResourceGroupName -VM $vm -IdentityType SystemAssigned
        $managedIdentity = ((Get-AzVM -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name).Identity | where Type -eq "SystemAssigned").PrincipalId
    }

    <# create role assignment unless it already exists #>
    if ((Get-AzRoleAssignment -RoleDefinitionName $roleDefinitionName -ObjectId $managedIdentity) -eq $Null) {
        New-AzRoleAssignment -ObjectId $managedIdentity -RoleDefinitionName $roleDefinitionName -Scope $vm.Id
    }
}

Next we'll configure our session hosts to disconnect idle sessions and log off disconnected sessions after a certain period of time:

Connect remotely to the VM that you want to set the policy for.
Open the Group Policy Editor, then go to Local Computer Policy > Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Session Time Limits.
Find the policy that says Set time limit for disconnected sessions, then change its value to Enabled. After you've enabled the policy, select your preferred time limit at End a disconnected session.
Find the policy that says Set time limit for active but idle Remote Desktop Services sessions, then change its value to Enabled. After you've enabled the policy, select your preferred time limit at Idle session limit.

The above settings also ensure that a user gets a warning message two minutes before reaching the specified time limit, so they can press a key or move the mouse to prevent being disconnected.
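If you prefer to script these limits rather than use the local Group Policy Editor, the same settings are backed by registry values under the standard Terminal Services policy key. A minimal sketch; the 900000 values (15 minutes, in milliseconds) are just examples:

# Policy key used by the Session Time Limits GPO settings
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
New-Item -Path $key -Force | Out-Null

# End a disconnected session after 15 minutes
Set-ItemProperty -Path $key -Name 'MaxDisconnectionTime' -Value 900000 -Type DWord

# Disconnect active but idle sessions after 15 minutes
Set-ItemProperty -Path $key -Name 'MaxIdleTime' -Value 900000 -Type DWord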
In the last step we'll create the PowerShell script that initiates the deallocation and configure it as a logoff script. The script queries the details of the virtual machine from the Azure instance metadata service, connects to Azure using the managed identity we created, and initiates the actual deallocation via the REST API:

$metadata = Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Proxy $Null -Uri "http://169.254.169.254/metadata/instance?api-version=2021-01-01"
$authorizationToken = Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Proxy $Null -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2021-01-01&resource=https://management.azure.com/"

$subscriptionId = $metadata.compute.subscriptionId
$resourceGroupName = $metadata.compute.resourceGroupName
$vmName = $metadata.compute.name
$accessToken = $authorizationToken.access_token

$RestartEvents = Get-EventLog -LogName System -After (Get-Date).AddMinutes(-1) | ? {($_.EventID -eq 1074) -and ($_.Message -match "restart")}
$SessionCount = (query user | Measure-Object | select Count).count - 1 # subtract the header line

if (($SessionCount -gt 1) -or ($RestartEvents.count -ge 1)) {
    # skip the deallocation because of remaining user sessions or an initiated reboot
}
else {
    Invoke-WebRequest -UseBasicParsing -Headers @{ Authorization = "Bearer $accessToken" } -Method POST -Proxy $Null -Uri "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/virtualMachines/$vmName/deallocate?api-version=2021-03-01" -ContentType "application/json"
}

To have the script executed on logoff, we need to configure it:

Connect remotely to the VM that you want to set the policy for.
Open the Group Policy Editor, then go to Local Computer Policy > User Configuration > Windows Settings > Scripts (Logon/Logoff).
Find the item that says Logoff and specify the script you've created on the PowerShell Scripts tab.

When rolling out to bigger host pools, ideally place the logoff script in a centrally accessible location, e.g. a SYSVOL share.
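To confirm the logoff script is doing its job, you can check the power state of a session host from any machine with the Az modules installed; a quick sketch, where the resource group and VM names are placeholders for the ones used above:

# Should report PowerState/deallocated shortly after the last user logs off
$vm = Get-AzVM -ResourceGroupName '<<<ResourceGroupName>>>' -Name '<<<VMName>>>' -Status
$vm.Statuses | Where-Object Code -like 'PowerState/*' | Select-Object Code, DisplayStatus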
Hope this short tutorial helps you take full benefit of the Start VM on Connect feature. Happy to read your feedback and comments below.

How can I make parent from child mandatory?

I need to make it mandatory to set a parent when creating a bug in my custom process inherited from Agile. Is there a built-in way to do this? An existing extension to install, maybe? Or do I have to develop one ad hoc to get this functionality?

Welcome to the new Azure Log Analytics community!
Azure Log Analytics has been enhanced substantially and now offers an improved search and analytics experience. This includes an interactive query language and an advanced analytics portal, both powered by a highly scalable and powerful data store. The query language is super rich, offering flexible search functions as well as advanced machine learning constructs. To support these new capabilities and provide you with the best querying experience, the advanced analytics portal supports multi-line editing of queries, many visualizations, and advanced diagnostics.

To ramp up quickly, we suggest you review the documentation, where you can find:

Getting started tutorials
Useful cheat sheets
Plenty of examples
The complete language reference

Test drive the query language in the free demo environment and get started now! If you prefer to run queries from a script, a small example follows below.
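Here's a minimal sketch of running a query against a workspace from PowerShell with the Az.OperationalInsights module; the workspace ID and the Heartbeat query are placeholders, so substitute your own:

# Hypothetical workspace ID; find yours on the workspace overview blade
$workspaceId = '00000000-0000-0000-0000-000000000000'

# Run a simple query and show the results as a table
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query 'Heartbeat | summarize count() by Computer | top 10 by count_'
$result.Results | Format-Table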
Have a question, comment, or request? Post it right here.

The Azure Log Analytics Team

Insider Preview: Single sign-on and passwordless authentication for Azure Virtual Desktop

Today we're announcing the Insider preview for enabling an Azure AD-based single sign-on experience and support for passwordless authentication, using Windows Hello and security devices (like FIDO2 keys). With this preview, you can now:

Enable a single sign-on experience to Azure AD-joined and Hybrid Azure AD-joined session hosts
Use passwordless authentication to sign in to the host using Azure AD
Use passwordless authentication inside the session
Use third-party identity providers (IdPs) that integrate with Azure AD to sign in to the host

Getting started

This new functionality is currently available in Insider builds of Windows 11 22H2, available in the Azure Gallery when deploying new session hosts in a host pool. Want a quick overview of the new functionality? Watch this intro video on Azure Academy!

To get started with single sign-on, follow the instructions in Configure single sign-on, which will guide you through enabling the new authentication protocol (a small host pool configuration sketch follows at the end of this post). To start using Windows Hello and FIDO2 keys inside the session, follow the instructions for In-session passwordless authentication to use the new WebAuthn redirection functionality.

Learn more about the authentication methods supported by Azure Virtual Desktop, including single sign-on, on our Identities and authentication page.

Stay tuned for news about the upcoming public preview, which will add support for Windows 10 and current Windows 11 hosts.
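As a rough illustration, enabling the preview comes down to setting an RDP property on the host pool. The property name below (enablerdsaadauth) is an assumption based on the Configure single sign-on article, so verify it there before applying; also note that -CustomRdpProperty replaces the whole property string, so append it to any properties you already use:

# Assumed RDP property for the Azure AD SSO preview; verify against the Configure single sign-on doc
Update-AzWvdHostPool -ResourceGroupName 'avd-rg' -Name 'avd-hostpool' -CustomRdpProperty 'enablerdsaadauth:i:1'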
Announcing an Azure Migrate AMA on January 19!

We are very excited to announce an Azure Migrate AMA! We'll be answering your questions on how to migrate your datacenter using Azure Migrate. The AMA will take place on Tuesday, January 19, 2021 from 9:00 a.m. to 10:00 a.m. PT in the Azure AMA space. Add the event to your calendar and view it in your time zone here.

An AMA is a live online event similar to a "YamJam" on Yammer or an "Ask Me Anything" on Reddit. This AMA gives you the opportunity to connect with Microsoft product experts who will be on hand to answer your questions and listen to feedback.

(Azure) Virtual Desktop Optimization Tool now available
Optimizing images has always been an important part of preparing images for a traditional Remote Desktop Services (RDS) infrastructure or virtual desktop infrastructure (VDI). Optimizing session hosts, in particular, can increase user density and ultimately lower costs. With the Virtual Desktop Optimization Tool, you can optimize your Windows 10, version 2004 multi- and single-session deployments in Windows Virtual Desktop.

Note: The information in this post is community-driven; nothing has yet been officially launched by the Windows Virtual Desktop product team. Credit goes to Robert M. Smith and Tim Muessig from Microsoft, previously known as the VDIGuys, for creating this tool and making it available for free to the community.

Windows 10 multi-session image name change

As noted in recent announcements, Office 365 ProPlus is now Microsoft 365 Apps for enterprise. With this name change, we have updated the Windows Virtual Desktop image names in the Azure Marketplace. As a result, when you are looking for an image in the Azure Marketplace image gallery, you should begin by selecting Windows 10 Enterprise multi-session, version 2004 + Microsoft 365 Apps – Gen1 as your baseline image.

How the Virtual Desktop Optimization Tool works

The (Windows) Virtual Desktop Optimization Tool disables services in the operating system that you most likely won't need for your Windows Virtual Desktop session host. To make sure that your line-of-business (LOB) applications continue running as they should, there are some preliminary steps that should be performed first.

Note: Some settings are disabled by default when you run the script out of the box, such as the AppX package for the Windows Calculator. We strongly suggest reviewing the tool's JSON files, which contain the default settings. This also gives you the opportunity to enable items before running the tool so they remain untouched. I'll explain more about this later in the article.

The full list of enhancements for native Windows services will be available soon. Bookmark "Run and tune your Remote Desktop Services environment" for the latest updates.

Expected performance gains

Windows Virtual Desktop value-added services provider and Microsoft partner LoginVSI performed early tests with the Virtual Desktop Optimization Tool and gained over 100 users in their internal benchmarking lab environment with a Windows 10, version 2004 single session. We therefore assume that this gain will also be possible with Windows 10 Enterprise multi-session. VSImax asserts a maximum number of users that are able to log on to the virtual desktop host pool as part of the underlying infrastructure. That number is the "sweet spot"; going over it will decrease performance for all users. (Thanks to LoginVSI for sharing these results with us.)

Note: We recommend you use simulation tools to test your deployment with both stress tests and real-life usage simulations, to ensure that your system is responsive and resilient enough to meet user needs. Remember to vary the load size to avoid surprises.

Desktops in the Cloud on Performance Optimizations for Windows Virtual Desktop with Robert and Tim (aka VDI Guys)

We recently had the creators of the Virtual Desktop Optimization Tool as guests on our Desktops in the Cloud video podcast. Robert and Tim explained everything you should know, as well as best practices and lessons learned. A must-watch in addition to this article. Watch it below.
How to use the Virtual Desktop Optimization Tool

The Virtual Desktop Optimization Tool makes it possible to disable services that are uncommon in virtual desktop environments, such as Windows Virtual Desktop.

Note: We recommend that you run the script after the Sysprep (System Preparation) process, most likely as a startup script when working with a large set of virtual machines. This is because of conflicting AppX packages, which would most likely cause Sysprep to fail.

Download all scripts from the Virtual-Desktop-Optimization-Tool GitHub repository. Select Clone or download, followed by Download ZIP. Unzip the folder on your Windows Virtual Desktop session host(s) to a specific folder (e.g. C:\Optimize or C:\Temp).

Note: You could also run the scripts as part of your image management procedure, e.g. Azure Image Builder (AIB) or Azure DevOps.

Important information before running the tool

Some settings are disabled by default when you run the script out of the box, such as the AppX package for the Windows Calculator. We strongly suggest reviewing the tool's JSON files, which contain the default settings. This also gives you the opportunity to enable items before running the tool so they remain untouched. You can find the JSON files in the Windows build number folder, under ConfigurationFiles - e.g. C:\Optimize\2004\ConfigurationFiles. Set the entries that you want to keep to Enabled (a small PowerShell sketch for reviewing and toggling these entries follows at the end of this post). Below is the example file for AppX packages; there are JSON files for services and scheduled tasks as well. Another option is to remove the whole entry from the JSON file.

AppxPackages.json - example: Windows Calculator app

{
  "AppxPackage": "Microsoft.WindowsCalculator",
  "VDIState": "Enabled",
  "URL": "https://www.microsoft.com/en-us/p/windows-calculator/9wzdncrfhvn5",
  "Description": "Microsoft Calculator app"
},

Services.json - example: Windows Update service

{
  "Name": "UsoSvc",
  "VDIState": "Enabled",
  "Description": "Update Orchestrator service, manages Windows Updates. If stopped, your devices will not be able to download and install the latest updates."
},

Prepare to launch Windows PowerShell and select Run as Administrator. In PowerShell, change the directory to the folder to which you downloaded the scripts, e.g. C:\Optimize or your own specific folder. Run the following command:

Set-ExecutionPolicy -ExecutionPolicy Bypass

Run the Virtual Desktop Optimization Tool using the following command:

.\Win10_VirtualDesktop_Optimize.ps1 -WindowsVersion 2004 -Verbose

Note: When you use a different version of Windows 10, you must change the WindowsVersion parameter. Version 1803 and later are supported for Windows 10 Enterprise. Windows 10 multi-session support is only available with Windows 10, version 2004 and later.

Select Yes when prompted to reboot the session host(s). Start your Windows Virtual Desktop session. As you can see in the Task Manager comparison below, the number of threads and handles has decreased noticeably after running the Virtual Desktop Optimization Tool.

Do you have any problems with orphaned Start Menu shortcuts after running the tool? Have the user open Task Manager, then end the following two processes:

ShellExperienceHost.exe
StartMenuExperienceHost.exe

Have them check the Start Menu and the orphaned shortcuts should be gone. Happy optimizing! 🙂 Let us know your feedback on the tool in the comment section below.
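As referenced above, here's a minimal PowerShell sketch for reviewing the configuration files and keeping a specific AppX package; it assumes the tool was unzipped to C:\Optimize as in the example path:

# Load the AppX configuration shipped with the tool (assumed extraction path)
$configPath = 'C:\Optimize\2004\ConfigurationFiles\AppxPackages.json'
$packages = Get-Content -Path $configPath -Raw | ConvertFrom-Json

# List everything the tool would currently remove (VDIState = Disabled)
$packages | Where-Object VDIState -eq 'Disabled' | Select-Object AppxPackage, Description

# Keep the Calculator app by marking it Enabled, then write the file back
($packages | Where-Object AppxPackage -eq 'Microsoft.WindowsCalculator').VDIState = 'Enabled'
$packages | ConvertTo-Json -Depth 5 | Set-Content -Path $configPath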
Prefer to watch and learn? There's also a video on Azure Academy by Dean Cefola, available later this week. You can find it here.

Announcing public preview of Azure Virtual Desktop RDP Shortpath for public networks
Today I have the pleasure of announcing the public preview of RDP Shortpath for public networks. This Remote Desktop Protocol (RDP) feature establishes a direct UDP data flow between the Remote Desktop client and the session host. RDP uses this data flow to deliver Remote Desktop and RemoteApp.

Why does UDP matter? What is wrong with using TCP?

Reliability

First of all, TCP is an unreliable transport for long-living user sessions. That's right, let me repeat – TCP is unreliable. If you know networking, you might think I'm crazy saying that. But trust me, it's true. TCP is an excellent protocol for guaranteed delivery of small amounts of data. It's easy to implement. Applications like browsers or email clients just send the data and forget about it. They don't need to implement the logic to verify that data is delivered, delivered in time, and delivered with no errors. The protocol will ensure packet consistency and order, and retry the transmission if delivery fails. However, RDP uses long-running connections, and long-running TCP connections are problematic. Let me explain.

When the Remote Desktop client establishes the reverse connect session, it consists of two TCP connections: one from the client to the gateway and another from the session host to the same gateway. It looks straightforward, but let's check what is going on over the wire, taking the connection from the session host to a gateway as an example. First, the Remote Desktop Service opens a local TCP socket on the local network interface. Then, it sends a TCP SYN to the gateway. What happens to the packet? The packet goes out of the virtual machine NIC. Then it travels over the Azure Virtual Network, reaching the NAT gateway, Load Balancer, Azure Firewall, or another NVA. All those virtual elements perform either connection tracking or network address translation, which means that virtual appliances track the status of the TCP connection in memory. Then, after the NAT, the TCP packet travels over the Azure backbone to the Azure Virtual Desktop gateway. The gateway is not a single big VM. Instead, it is a distributed cluster of applications running on Azure App Service. On the backend, multiple load balancers and firewalls track the TCP session and translate the packet again to the private IP address and port of the App Service instance. And believe me, this is a simplified description. Software-defined networks perform many more translations and packet encapsulations while tracking the state of the connections.

On the client side, it's a similar story. First, the packet is sent to the home router, which performs the translation for the client. Then it may pass a packet-inspection firewall. At some point, the packet will reach the AVD gateway and pass through those load balancers again. Microsoft is doing a lot to improve the reliability of the Azure part of the path your TCP packet takes, including fault-tolerant load balancers and scalable NAT gateways. However, not all components are in our control. For example, customers deploy forced tunneling on-premises, Zero Trust network services, and deep packet inspection. This is complicated even more by dynamic routing, VPNs, and software-defined networking setups. Any one of the dozens of physical or virtual appliances on the path of the RDP flow may fail or may need to be serviced. In such cases, the TCP session could be dropped, and such network failures always come as a surprise.
This is because the TCP protocol stack will never report any network errors to the application at the higher level until it reaches the point where the connection is not recoverable. We take this seriously at Azure Virtual Desktop. We have proactive monitoring of the session and a fast reconnect for TCP-based transport. However, even if the sessions are automatically re-established, it takes some time and affects the user experience.

The solution comes with using UDP-based transport. First, the tracking of UDP streams is done differently on load balancers, firewalls, and NAT devices. Second, because of the connectionless nature of UDP, those network devices cannot reset the UDP flow by sending an RST signal. Each packet in the UDP stream is independent of the others and can be lost without affecting the health of the entire flow. Third, UDP is more tolerant of temporary network interruptions caused by wireless interference or by changes in dynamic routing. UDP does not care about the order or delivery of each individual packet. It does not have built-in congestion or rate control, which means that if you want to use UDP, you need to implement all of this on your own. And that is what we did by implementing URCP for RDP Shortpath. With this setup, we have better visibility into the network. We see the delay of every packet we send and immediately recognize if some data was lost in transit. However, we resend it only if we need to.

Bandwidth

TCP is great for local networks but not on the Internet. Yes, if a packet is lost, it will be retransmitted, but that's not the worst thing that could happen. Bandwidth availability is an essential factor. Unfortunately, TCP congestion control algorithms limit the ability to saturate the network. TCP is also highly inefficient in window scaling, especially on high-latency networks. Knowing the network better and not being constrained by TCP algorithms, we can signal back to the RDP stack, which can then adjust the encoding parameters or change the frame rate of the graphics stream. This is not news for those who manage VoIP or real-time communications like Teams: most of those applications use UDP as the primary transport. And it's not just graphics that improves with UDP. Your file transfers, print jobs, multimedia redirection, and device redirection take advantage of increased bandwidth and reduced latency. In addition, you can now use VoIP applications on your remote desktops even if they have no specific optimizations for VDI environments.

Latency

So UDP is suitable for RDP, but is UDP alone enough? Customers implement UDP-based gateways in many on-premises deployments and other virtualization products. Is that good? It's easy to implement. But in the case of a multitenant cloud service like Azure Virtual Desktop, it would require inbound firewall rules to be configured, which is unacceptable to most customers. On top of that, such a gateway is just another address translation device that acts as a performance bottleneck and reduces the available bandwidth. It also requires packets to travel to the gateway location, which increases network latency.

Solution

We understand the challenges of remote protocols in the cloud. Because of that, when we developed RDP Shortpath, we focused not just on enabling UDP for your user sessions but also on enabling it in the most efficient way. For this, we focused on establishing a direct UDP flow between client and session host, bypassing all unnecessary gateways. Many of you are familiar with RDP Shortpath for managed networks.
It works great for many customers, with users accessing their remote desktops from enterprise and office settings. However, the feedback we hear from you clearly shows that while RDP Shortpath is great for managed networks such as ExpressRoute, it is a non-starter for users who travel or work from their homes. We recognize these challenges, and our protocol team worked hard on the feature released to public preview today.

Meet RDP Shortpath for public networks. Like its older brother, this feature establishes a direct UDP flow for RDP. However, it does not require any inbound ports to be opened on the firewall. Instead, it automatically evaluates the network conditions. It uses a combination of NAT traversal protocols such as STUN and UPnP and the process of Interactive Connectivity Establishment (ICE). RDP then establishes the direct UDP flow in most network setups. As a result, your users get lower latency, better network utilization, and high tolerance to packet loss or network configuration changes.

To demonstrate the benefits of RDP Shortpath, I recorded a video that shows the commercial for Microsoft Flight Simulator. I watched the video over two RDP sessions: one with reverse connect TCP transport, another with RDP Shortpath. To keep the setup closer to reality, I used WAN emulator software to introduce packet loss. For reference, I added the original video to the bottom of the screen. As you can see, UDP, even with a horrible 10% packet loss, gives you smoother playback and better image quality.

How does RDP Shortpath work?

RDP Shortpath for public networks performs dynamic analysis of your network. It works in many cases, but some configurations are not compatible. For sure, you must have UDP traffic flowing on your network. But even if UDP is allowed on the network, RDP Shortpath may fail if you use double NAT setups. This includes the carrier-grade NAT used by some cellular operators. It also may fail because some firewalls specifically block NAT traversal protocols or are configured to prevent port reuse. In such cases, you may increase the chance of establishing the Shortpath connection by enabling native IPv6 or using Teredo networking. You may also use an Azure load balancer for outbound network access or assign a public IP address to a VM. In all these cases there is no need to allow any inbound connectivity - no need to open port 3389 or any other port. If RDP Shortpath fails to establish, the user won't notice a thing and will continue to use the TCP-based reverse connect transport.

Getting started with RDP Shortpath for public networks

You can find information about RDP Shortpath configuration in the Azure Virtual Desktop documentation. It also includes recommendations for troubleshooting.

Thanks

This release is the result of the work of multiple teams at Microsoft, and I would like to thank all my colleagues for their outstanding work. I am also grateful to all customers and MVPs who participated in the private previews and provided their feedback.

ARM AVD with Terraform
Deploying Azure Virtual Desktop with Terraform

This article has been written in collaboration with my colleagues Jensheerin, Stefan Georgiev and Julie NG.

Terraform is a tool that enables you to completely automate infrastructure builds through configuration files. It provides versioning for configurations, which makes it easy to deploy and maintain your existing Azure Virtual Desktop deployments on Microsoft Azure. This article provides an overview of how to use Terraform to deploy a simple Azure Virtual Desktop environment. This deploys ARM AVD, not AVD Classic.

There are several prerequisites required to deploy Azure Virtual Desktop, which we will assume are already in place:

Ensure that you meet the requirements for Azure Virtual Desktop
Terraform must be installed and configured as outlined here

If you are completely new to Azure Virtual Desktop, please check it out here: What is Azure Virtual Desktop? - Azure | Microsoft Docs

There are several topics that should be considered when creating a production Azure Virtual Desktop environment that we haven't been able to include in the scope of this article, such as security, monitoring, BCDR, and image build. This article aims to get you started with building a PoC for Azure Virtual Desktop via Terraform. All the code in this article can be found in the repo: RDS-Templates/wvd-sh/terraform-azurerm-azuresvirtualdesktop at master · Azure/RDS-Templates (github.com)

Note: Terraform is an open source tool hosted in GitHub. As such, it is published "as is" with no implied support from Microsoft or any other organization. However, we would like to welcome you to open issues using GitHub issues to collaborate toward future improvements to the tool.

AVD Components

To deploy AVD we will need to understand what components are required. We're assuming that your prerequisites are already in place:

Active Directory - in this worked example, we are using 'on-prem' AD running on DCs in a separate VNet. The code could easily be modified to use AADDS, though.
Users in AAD that will be given access to AVD
A VM image (or you can use a marketplace image)

Components we will deploy in this article:

Virtual desktop environment
Networking infrastructure
Session hosts
Profile storage
Role-based access control

Our architecture should look like the below once completed (sections in white are pre-reqs; grey will be deployed).

Setting up Terraform

You'll need to authenticate to Azure to run the templates – the steps to do that are here. If you want to use Visual Studio Code, please have a look at this article. Once your environment is ready, we can start to understand how to deploy all of the required resources.

1. AVD environment

First up we will deploy the environment for Azure Virtual Desktop. In this section we will deploy the following resources:

Resource group
Workspace
Host pool
Host pool registration expiration date (create a time_rotating resource)
Application group (our DAG)
Application group association to workspace

Before we create templates for the resources, we need to configure the Azure provider. To do this we will create a providers.tf file and add the following:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>2.0"
    }
  }
}

provider "azurerm" {
  features {}
}

The full code for provider.tf can be found here.
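As an aside, one common way to authenticate the azurerm provider in automation is a service principal supplied through the provider's standard ARM_* environment variables; here's a sketch from PowerShell, with placeholder values:

# Placeholders: use the IDs and secret of a service principal you have created
$env:ARM_CLIENT_ID       = '00000000-0000-0000-0000-000000000000'
$env:ARM_CLIENT_SECRET   = '<client-secret>'
$env:ARM_TENANT_ID       = '00000000-0000-0000-0000-000000000000'
$env:ARM_SUBSCRIPTION_ID = '00000000-0000-0000-0000-000000000000'

# Terraform picks these up automatically on the next init/plan/apply
terraform init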
The Terraform documentation for AVD is here: azurerm_virtual_desktop_workspace | Resources | hashicorp/azurerm | Terraform Registry

Then, to create the resources, first create main.tf and start adding resources in the format:

resource "azurerm_virtual_desktop_workspace" "workspace" {
  name                = var.workspace
  location            = var.deploy_location
  resource_group_name = azurerm_resource_group.rg.name
  friendly_name       = "${var.prefix} Workspace"
  description         = "${var.prefix} Workspace"
}

Note that there are several dependencies in the order that resources are created. We can specify those using the following:

depends_on = [azurerm_virtual_desktop_host_pool.hostpool, azurerm_virtual_desktop_workspace.workspace]

In this case we are specifying the dependency for provisioning the desktop application group: the host pool and workspace must already exist before we try to create this resource.

We have also referenced some variables here, so let's create a variables.tf file for those now and add our variables. They will be in the following form:

variable "deploy_location" {
  type        = string
  default     = "West Europe"
  description = "location"
}

We will also need to add variables for:

Resource group name
Prefix (this will be appended to resources such as session hosts)
Host pool name

A full list of the variables that are referenced is given at the end of the article, in step 7. We can deploy at this point and it will create the basic AVD components, but no session hosts. To add the session hosts, we need to ensure we can access Active Directory. For this example we are assuming that we are using AD rather than AADDS. We are also assuming you have a domain controller in an Azure VNet.

2. Networking infrastructure

We will create a new VNet for our session hosts and peer it to our AD VNet. We've also included an NSG here with a sample rule – I'd strongly suggest modifying it to meet your own security requirements. Components we will deploy here:

Session host virtual network
Session host subnet
NSG
NSG – subnet association
VNet peering to the Active Directory VNet

The full template can be found in networking.tf. The new concept we have here is using data to retrieve the properties of our existing Active Directory (hub) VNet. We can then pass its ID to the new peering we are creating:

data "azurerm_virtual_network" "ad_vnet_data" {
  name                = var.ad_vnet
  resource_group_name = var.ad_rg
}

resource "azurerm_virtual_network_peering" "peer1" {
  name                      = "peer_avd_ad"
  resource_group_name       = azurerm_resource_group.rg.name
  virtual_network_name      = azurerm_virtual_network.vnet.name
  remote_virtual_network_id = data.azurerm_virtual_network.ad_vnet_data.id
}

resource "azurerm_virtual_network_peering" "peer2" {
  name                      = "peer_avd_ad"
  resource_group_name       = var.ad_rg
  virtual_network_name      = var.ad_vnet
  remote_virtual_network_id = azurerm_virtual_network.vnet.id
}

Again, we can see that we have referenced several variables, so you'll need to add the following to your variables.tf file:

ad_vnet – the name of the VNet containing our domain controllers
ad_rg – the resource group containing the DCs
dns_servers – custom DNS servers that we're using for our new VNet
vnet_range – address range for our new VNet
subnet_range – address range for our new subnet

Now that we have all our basic infrastructure in place, we can move on to the session hosts.

3. Session hosts

Here we are deploying and configuring our session hosts. In this example, we will create a new Terraform config file, host.tf, to do this. The full code is here.
We will also add our variables to the variables.tf file. Components to deploy in this section:

A NIC for each session host
Session host VM(s)
Domain-join VM extension
DSC VM extension to register the session host
A random string for the local VM password
A local variable for the registration token

This part is slightly more complex than the infrastructure deployment. The new concepts in this section are covered below.

We first need to create a local variable for our registration_info token to allow us to register the VM to the host pool. This is later passed as a protected setting to the DSC extension resource:

locals {
  registration_token = azurerm_virtual_desktop_host_pool.hostpool.registration_info[0].token
}

We're also creating a random local password, which needs to meet the AVD requirements:

resource "random_string" "avd_local_password" {
  count            = "${var.rdsh_count}"
  length           = 16
  special          = true
  min_special      = 2
  override_special = "*!@#?"
}

In the password section you will see us referencing rdsh_count. This allows us to deploy a variable number of VMs to our host pool. This counter is used for the VMs, the NICs, the local passwords, and the extensions. We are also using the count meta-argument to refer to specific instances:

resource "azurerm_windows_virtual_machine" "avd_vm" {
  count                 = "${var.rdsh_count}"
  name                  = "${var.prefix}-${count.index + 1}"
  resource_group_name   = var.rg_name
  location              = var.deploy_location
  size                  = var.vm_size
  network_interface_ids = ["${azurerm_network_interface.AVD_vm_nic.*.id[count.index]}"]
  provision_vm_agent    = true
  admin_username        = "${var.local_admin_username}"
  admin_password        = "${random_string.avd_local_password.*.result[count.index]}"

  os_disk {
    name                 = "${lower(var.prefix)}-${count.index + 1}"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsDesktop"
    offer     = "Windows-10"
    sku       = "20h2-evd"
    version   = "latest"
  }

  depends_on = [azurerm_resource_group.rg, azurerm_network_interface.AVD_vm_nic]
}

The VM resource is also where we specify the source image for the build. If you need a different marketplace image, you can get the image SKU details using:

Get-AzVMImageSku -Location <location> -PublisherName MicrosoftWindowsDesktop -Offer windows-10

(or -Offer office-365 if you want the image including M365 apps). Deploying a custom image with the Shared Image Gallery is a topic for a follow-up article.

The additional variables we need to specify now are:

rdsh_count
domain_name
domain_user_upn
domain_password
vm_size
ou_path
local_admin_username

4. Profile storage

For this example we'll deploy our profile storage using Azure Files. Step 6 has the steps to configure NetApp Files if you prefer that option. To do this we'll need to deploy the following resources:

A dedicated resource group for our storage account
An Azure Files storage account
An Azure storage share
An AAD group assignment on the storage (Storage File Data SMB Share Contributor)

We will deploy a new resource group, and we use a random string to generate a globally unique name for our storage account. We are creating a file called afstorage.tf for this (the full code is included here). Because we append a random string to the storage account name to ensure uniqueness, we also use the output command so that we can see the name of our new storage account. We can use the outputs.tf file to define our outputs.
output "storage_account_name" {
  value = azurerm_storage_account.storage.name
}

Further configuration will be needed if you choose to enable AD authentication on the storage account and to configure the NTFS permissions on the SMB share.

5. RBAC

Now that we have all our infrastructure deployed, let us give our users access. Again, we will create a new config file for this – rbac.tf. This can also be modified to assign users to custom roles, or to the other desktop virtualization roles that are already built in: Built-in roles Azure Virtual Desktop - Azure | Microsoft Docs

The components we're creating here are:

Azure Active Directory group
AAD group member
AAD role assignment

Before we start, we'll need to add the azuread provider to our list of required providers in provider.tf, as we need it for some of the AAD resources:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>2.0"
    }
    azuread = {
      source = "hashicorp/azuread"
    }
  }
}

To assign the RBAC permissions we need to pass a list of existing AAD users and then add these to a new AAD group that we are creating. We will use the azuread_user and azurerm_role_definition data sources to retrieve information about our users and the role we're assigning (in this case the built-in Desktop Virtualization User role):

data "azuread_user" "aad_user" {
  for_each            = toset(var.avd_users)
  user_principal_name = format("%s", each.key)
}

data "azurerm_role_definition" "role_def" {
  name = "Desktop Virtualization User"
}

We're also going to use for_each to loop through that list of users (both when getting the UPN from AAD and when adding to the group):

resource "azuread_group_member" "aad_group_member" {
  for_each         = data.azuread_user.aad_user
  group_object_id  = azuread_group.aad_group.id
  member_object_id = each.value["id"]
}

Lastly, we'll scope the role assignment to the application group we created at the start and apply it to the group containing our users:

resource "azurerm_role_assignment" "role" {
  scope              = azurerm_virtual_desktop_application_group.dag.id
  role_definition_id = data.azurerm_role_definition.role_def.id
  principal_id       = azuread_group.aad_group.id
}

We also need to add two new variables:

aad_group_name
avd_users

avd_users is an array, which allows us to pass multiple users. Up to now we have either specified default values for our variables or will pass them during deployment. To make things simpler we will create an env.tfvars file to pass our environment-specific variables. You can add as many (or as few) pre-configured variables here as you like, but keep security in mind if you are putting confidential data in there. A sample might look like:

deploy_location = "west europe"
rg_name         = "avd-resources-rg"
vnet_range      = ["10.1.0.0/16"]
subnet_range    = ["10.1.0.0/24"]
prefix          = "avd"
avd_users = [
  "user1@<domain>.com",
  "user2@<domain>.com"
]
dns_servers = ["10.0.0.4", "168.63.129.16"]

6. NetApp storage

As an alternative to Azure Files, you also have the option to deploy NetApp storage for Azure Virtual Desktop profiles. To use NetApp Files you need to request access: Register for Azure NetApp Files | Microsoft Docs. To deploy the storage we'll need the following resources:

A dedicated subnet for NetApp
A NetApp storage account
A NetApp storage pool
A NetApp storage volume

For simplicity we'll deploy our subnet to the same VNet we created earlier and will use the same resource group and location variables. You may want separate resource groups and/or more complex networking in a production deployment.
We are creating a file called netappstorage.tf for this, and the full code can be found in the folder options/netapp. We also need to add some new variables (and you'll probably want to update the default values as well):

netapp_acct_name
netapp_pool_name
netapp_volume_name
netapp_smb_name
netapp_volume_path
netapp_subnet_name
netapp_address

Now we should have created nine files:

main.tf
networking.tf
host.tf
afstorage.tf (or netappstorage.tf)
rbac.tf
variables.tf
defaults.tfvars
outputs.tf
providers.tf

7. Variables

All of the variables that we have referenced so far are described here (they are also in variables.tf):

rg_name – Name of the resource group in which to deploy these resources (default: AVD-TF)
deploy_location – Deployment location (default: West Europe)
hostpool – Name of the Azure Virtual Desktop host pool (default: AVD-TF-HP)
ad_vnet – Name of the domain controller VNet
dns_servers – Custom DNS configuration
vnet_range – Address range for the deployment VNet
subnet_range – Address range for the session host subnet
avd_users – List of users to grant access to AVD (default: [])
aad_group_name – Azure Active Directory group for AVD users
rdsh_count – Number of AVD machines to deploy (default: 2)
prefix – Prefix for the name of the AVD machine(s)
domain_name – Name of the domain to join
domain_user_upn – Username for the domain join (do not include the domain name, as this is appended)
domain_password – Password of the user to authenticate with the domain
vm_size – Size of the machine to deploy (default: Standard_DS2_v2)
ou_path – The OU path for AD (default: "")
local_admin_username – The local admin username for the VM
netapp_acct_name – The NetApp account name (default: AVD_NetApp)
netapp_pool_name – The NetApp pool name (default: AVD_NetApp_pool)
netapp_volume_name – The NetApp volume name (default: AVD_NetApp_volume)
netapp_smb_name – The NetApp SMB name (default: AVDNetApp)
netapp_volume_path – The NetApp volume path (default: AVDNetAppVolume)
netapp_subnet_name – The NetApp subnet name (default: NetAppSubnet)
netapp_address – The address range for the NetApp subnet

Note: The netapp_* variables are optional and only needed if you are deploying NetApp Files; they are included only in the variables template in the netapp folder.

8. Deploy!

The templates can be downloaded from GitHub if you now want to deploy this yourself. There are also some additional configuration files for other functionality that we hope to cover in further articles soon.

Once Terraform is set up and you have created your Terraform templates, the first step is to initialize Terraform. This step ensures that Terraform has all the prerequisites to build your template in Azure:

terraform init

The next step is to have Terraform review and validate the template. This step compares the requested resources to the state information saved by Terraform and then outputs the planned execution. The Azure resources aren't created at this point. An execution plan is generated and stored in the file specified by the -out parameter. We also need to pass our variable definitions file during the plan. We can load it automatically by renaming env.tfvars to terraform.tfvars or env.auto.tfvars, and then create the execution plan with:

terraform plan -out terraform_azure.tfplan

If you don't rename your variable file, use:

terraform plan -var-file defaults.tfvars -out terraform_azure.tfplan

When you're ready to build the infrastructure in Azure, apply the execution plan - this will deploy the resources:

terraform apply terraform_azure.tfplan

If you update the templates after you have deployed, you will need to rerun the plan and apply steps for the changes to be reflected in Azure.
Troubleshooting Terraform deployment

Terraform deployment failures fall into three main categories:

Issues with the Terraform code
Issues with Desired State Configuration (DSC)
Conflicts with existing resources

Issues with the Terraform code

While it is rare to have issues with the Terraform code itself, it is still possible; most often, errors are due to bad input in variables.tf. If there are errors in the Terraform code, please file a GitHub issue. If there are warnings in the Terraform code, feel free to ignore them or address them in your own instance of the code. Terraform's error messages are a good starting point for identifying issues with input variables.

Issues with DSC

To troubleshoot this type of issue, navigate to the Azure portal and, if needed, reset the password on the VM that failed DSC. Once you are able to log in to the VM, review the log files following the guidance here: Troubleshooting DSC - PowerShell | Microsoft Docs

Conflicts with existing resources

If you have previously deployed resources with the same name, you may see a deployment failure. Deployment will stop if any failures occur. You can use:

terraform destroy

to clean up resources that were created by the terraform apply command. You can pass it the same options as the apply command. The destroy command may fail when trying to delete the subnet if associated resources have not been deleted first. In this case you may need to manually delete the resources associated with the subnet before running destroy, or you can delete the whole resource group manually.

9. Final configuration

You'll notice we didn't configure the session hosts to use our profile storage at any point. There is an assumption that we are using GPO to manage FSLogix across our host pools, as documented here: Use FSLogix Group Policy Template Files - FSLogix | Microsoft Docs. At a minimum you'll need to configure the registry keys to enable FSLogix and set the VHD location to the NetApp share URI: Profile Container registry configuration settings - FSLogix | Microsoft Docs. If not using GPO, the registry keys could be added manually as part of the session host build; a small sketch follows below.
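As a rough illustration of that manual approach, here's a minimal sketch run on a session host; the share path is a placeholder for the Azure Files or NetApp share created earlier, and Enabled/VHDLocations are the standard FSLogix Profile Container registry values:

# FSLogix Profile Container registry settings (placeholder share path)
$key = 'HKLM:\SOFTWARE\FSLogix\Profiles'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'Enabled' -Value 1 -Type DWord
Set-ItemProperty -Path $key -Name 'VHDLocations' -Value '\\<storageaccount>.file.core.windows.net\<share>' -Type MultiString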
Please comment below if you have any questions or feedback!

Azure Site Recovery between Azure regions in public preview

Today we are extending Azure Site Recovery to support the failover of applications running within Azure. The set of features customers have used for replication and disaster recovery from on-premises to Azure is now available from one Azure region to another. Customers can create recovery plans between Azure regions, test failovers between Azure regions, and replicate their applications to any other Azure region. You can set up Azure-to-Azure Site Recovery in a few minutes and have confidence that Azure meets your compliance needs. To learn more, check out our blog and documentation.
Events
Recent Blogs
- Introduction: In today's fast-paced business environment, meetings are essential but often leave teams scrambling to document discussions and action items. Many organizations struggle with inconsist... (Nov 29, 2024)
- Microsoft Ignite 2024 has been a showcase of innovation across the Azure ecosystem, bringing forward major advancements in AI, cloud-native applications, and hybrid cloud solutions. This year's event... (Nov 29, 2024)