Recent Discussions
Automatically setting an item's state upon pull request merge
Hi there, I have set up a number of states in Azure DevOps related to both development and QA. When a pull request is approved and merged into the main branch, is there a way I can specify which state the user story or bug is placed into? I have a "Ready for QA" state that I would like it placed into. The system appears to be doing this at the moment, but it always goes to a completed state. I can't see any settings surrounding this. Please help, thanks Mark.
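The behavior described matches the pull request option that completes linked work items on merge, which moves them to a state in the Completed category. To land on a custom state such as "Ready for QA", one approach is a small script step in a pipeline triggered on main (or a service-hook-driven script) that updates the work item explicitly. A minimal sketch with the Azure DevOps CLI, where the organization, project, and work item ID 1234 are placeholders:

```bash
# Minimal sketch, not an official feature: move a work item to "Ready for QA"
# after a merge to main, using the Azure DevOps CLI (azure-devops extension).
az extension add --name azure-devops            # one-time setup
az devops configure --defaults organization=https://dev.azure.com/<ORG> project=<PROJECT>
az boards work-item update --id 1234 --state "Ready for QA"
```

Looking up which work items are linked to the merged pull request is left out here; a command such as az repos pr work-item list can help with that part.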

AKS Log Analytics Workspace records logs for only 3 hours each day
Dear AKS service provider, My AKS Log Analytics Workspace only records container logs from 13:30 UTC to 16:30 UTC each day in the "ContainerLogV2" table. I wonder what configuration causes this. How can I diagnose it and restore full-day logs? How should I troubleshoot? I am sure the container is working properly and has a full log for the day.
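A workspace daily cap that is reached shortly after the reset hour would produce exactly this pattern, since ingestion stops for the rest of the day once the cap is hit; the Container Insights data collection settings are the other usual suspect. A first step is to confirm when ingestion actually stops, for example by charting record counts per hour. A minimal diagnostic sketch with the Azure CLI, where the workspace GUID is a placeholder:

```bash
# Minimal diagnostic sketch: count ContainerLogV2 records per hour over the last day
# to see exactly when ingestion starts and stops. <WORKSPACE-GUID> is a placeholder.
az monitor log-analytics query \
  --workspace <WORKSPACE-GUID> \
  --analytics-query "ContainerLogV2 | where TimeGenerated > ago(1d) | summarize count() by bin(TimeGenerated, 1h) | order by TimeGenerated asc"
```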

Azure DevOps Releases are failing for all repos all of a sudden
Before 1 September all releases succeeded without any issue, but suddenly all the release pipelines are failing with the error below.
Error: 2025-09-05T05:52:56.8522017Z error: error parsing STDIN: error converting YAML to JSON: yaml: mapping values are not allowed in this context
What could be the issue? Has anyone come across this? Please suggest.
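This particular message ("error parsing STDIN: error converting YAML to JSON") is typically emitted by a tool such as kubectl reading a manifest on standard input, so the YAML that the release step pipes in after variable or token substitution is the thing to inspect; an unquoted substituted value containing a colon is a common trigger. A minimal way to check the rendered manifest outside the pipeline, assuming kubectl and a local copy of the manifest (manifest.yaml is a placeholder):

```bash
# Minimal sketch: validate the rendered manifest the same way the release step would,
# without touching the cluster. manifest.yaml is a placeholder for whatever the
# pipeline pipes to kubectl after variable/token substitution.
kubectl apply --dry-run=client -f manifest.yaml
# or, if the step pipes via STDIN:
cat manifest.yaml | kubectl apply --dry-run=client -f -
```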

AVD remote desktop app showing on screen but not responding to clicks until you minimize/maximize
Over the past couple of months, and more recently, we have had users report issues when opening Office apps as well as other non-Microsoft apps from the Remote Desktop client used to access AVD apps. The application window appears on screen but does not respond to anything until you Shift+right-click the app icon in the taskbar to minimize and then maximize it again. We are on version 1.2.4157 of the Remote Desktop client. Is anyone else experiencing this issue?

How to deploy n8n on Azure App Service and leverage the benefits provided by Azure
Lately, n8n has been gaining serious traction in the automation world, and it's easy to see why. With its open-source core, visual workflow builder, and endless integration capabilities, it has become a favorite for developers and tech teams looking to automate processes without being locked into a single vendor. Given all the buzz, I thought it would be the perfect time to share a practical way to run n8n on Microsoft Azure using App Service. Why? Because Azure offers a solid, scalable, and secure platform that makes deployment easy, while still giving you full control over your container and configuration. Whether you're building a quick demo or setting up a production-ready instance, Azure App Service brings a lot of advantages to the table, like simplified scaling, integrated monitoring, built-in security features, and seamless CI/CD support. In this post, I'll walk you through how to get your own n8n instance up and running on Azure, from creating the resource group to setting up environment variables and deploying the container. If you're into low-code automation and cloud-native solutions, this is a great way to combine both worlds.

The first step is to create our resource group (RG); in my case, I will name it "n8n-rg".

Next we create the App Service. At this point, it's important to select the appropriate configuration depending on your needs, for example whether or not you want to include a database. If you choose to include one, Azure will handle the connections for you, and you can select from various types. In my case, I will proceed without a database.

Configure the instance details: select the instance name, the 'Publish' option, and the operating system. It is important to choose 'Publish: Container', set the operating system to Linux, and, most importantly, select the region closest to you or your clients.

Service plan configuration: select the plan based on your specific needs. Keep in mind that we are using a PaaS offering, which means underlying compute resources like CPU and RAM are still being utilized. Depending on the expected workload, choose the most appropriate plan, and just as importantly, consider the features offered by each tier, such as redundancy, backup, autoscaling, and custom domains. In my case, I will use the Basic B1 plan.

In the Database section, we do not select any option. Remember that this will depend on your specific requirements.

In the Container section, under 'Image Source', select 'Other container registries'. For production environments, I recommend using Azure Container Registry (ACR) and pulling the n8n image from there.

Now we configure the Docker Hub options. This step is related to the previous one, as the available options vary depending on the image source. In our case, we will use the public n8n image from Docker Hub, so we select 'Public' and fill in the required fields: first the server, then the image name. This step is very important; use the exact same values to avoid issues.

In the Networking section, select the values as shown in the image. This configuration will depend on your specific use case, particularly whether to enable Virtual Network (VNet) integration or not. VNet integration is typically used when the App Service needs to securely communicate with private resources (such as databases, APIs, or services) that reside within an Azure Virtual Network. Since this is a demo environment, we will leave the default settings without enabling VNet integration.

In the 'Monitoring and Security' section, it is essential to enable these features to ensure traceability, observability, and additional security layers; this is considered a minimum requirement in production environments. At the very least, make sure to enable Application Insights by selecting 'Yes'. Finally, click 'Create' and wait for the deployment process to complete.

Now we will stop our Web App, as we need to make some preliminary modifications. To do this, go to the main overview page of the Web App and click 'Stop'.

On the same Web App overview page, navigate through the left-hand panel to the 'Settings' section, then select 'Environment Variables'. Environment variables are key-value pairs used to configure the behavior of your application without changing the source code. In the case of n8n, they are essential for defining authentication, webhook behavior, port configuration, timezone settings, and more. Environment variables in Azure Web Apps work the same way as they do anywhere else. In this case, we add the variables required for n8n to operate properly (a CLI sketch of this step follows at the end of this post). Note: the variable APP_SERVICE_STORAGE should only be modified by setting it to true.

Once the environment variables have been added, save them by clicking 'Apply' and confirming the changes; a confirmation dialog will appear to finalize the operation. Restart the Web App. This second startup may take longer than usual, typically around 5 to 7 minutes, as the environment initializes with the new configuration.

Now, as we can see, the application has loaded successfully, and we can start using our own n8n server hosted on Azure. As you can observe, it references the host configured in the App Service. I hope you found this guide helpful and that it serves as a useful resource for deploying n8n on Azure App Service. If you have any questions or need further clarification, feel free to reach out; I'd be happy to help.
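Since the post shows the environment variables only in a screenshot, here is a sketch of how the same step could be done from the CLI. The app name "n8n-app" and the specific n8n variable names and values are typical examples and assumptions, not taken from the original post, and "APP_SERVICE_STORAGE" presumably refers to the App Service setting WEBSITES_ENABLE_APP_SERVICE_STORAGE:

```bash
# Minimal sketch (assumptions: app name "n8n-app", resource group "n8n-rg" from the post,
# and typical n8n settings; the post's own screenshot is the authoritative list).
az webapp config appsettings set \
  --resource-group n8n-rg \
  --name n8n-app \
  --settings \
    WEBSITES_ENABLE_APP_SERVICE_STORAGE=true \
    WEBSITES_PORT=5678 \
    N8N_PORT=5678 \
    N8N_PROTOCOL=https \
    N8N_HOST=n8n-app.azurewebsites.net \
    WEBHOOK_URL=https://n8n-app.azurewebsites.net/
az webapp restart --resource-group n8n-rg --name n8n-app
```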

Azure Pipeline
Greetings, I have created a simple website using Visual Studio and I am looking to deploy it to Azure services via Azure Pipelines. Unfortunately, I am experiencing errors during the deployment phase of my YAML script and I require some assistance. If you have a moment, could you kindly respond to coordinate a Zoom meeting where we can review the issue together? Thank you kindly.

Azure Load Test Pricing
Hi, this is regarding Azure Load Testing pricing. Please advise.
Virtual User Hour (VUH) usage:
0 - 10,000 Virtual User Hours - $0.15/VUH
10,000+ Virtual User Hours - $0.06/VUH
I am trying to understand the above pricing. Let's say I want to run a test with 10k users just to log in to my website. This will take at most 10 seconds to complete. How will the pricing be calculated? Regards, Sharukh
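As a rough worked example against the tier prices quoted above (an illustration only; the official pricing page governs any rounding or per-test minimums), VUH is essentially the number of virtual users multiplied by the test duration in hours, so a very short test consumes few VUH even with many users:

```bash
# Rough illustration only, using the first-tier price quoted above ($0.15/VUH).
# VUH = virtual users x test duration in hours; per-test rounding/minimums may apply.
awk 'BEGIN {
  users = 10000; seconds = 10; price = 0.15
  vuh = users * seconds / 3600
  printf "%.1f VUH -> $%.2f\n", vuh, vuh * price
}'
```

So a 10-second run with 10,000 users works out to roughly 28 VUH, i.e. a few dollars at the $0.15/VUH tier, before any minimum billing rules.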

Release management on Azure DevOps dashboard
Hello, everyone! I've been struggling to promote feature releases. We have three product teams, and I am responsible for releasing features to approximately 16 countries. As a result, the key account managers and relationship managers of the various regions always ping me to ask about the release dates for their region and which regions will be released on a given date. Since I have ADO, how can I set up boards for each region to show them the upcoming work and its release date? For example, they should be able to see the progress and learn that a feature will be launched on a specific date. Can you tell me how I can increase visibility using the ADO board to support release management? Thanks,

Changing the Backlog Iteration
Hello, all. We're thinking of creating a new backlog iteration for our team and setting it as the default backlog. We would keep the "old" backlog with closed stories, etc. intact. Would doing something like this pose any risks to historical sprint data, or are there unintended consequences that should be considered?

Service Discovery in Azure: Dynamically Finding Service Instances
Modern cloud-native applications are built from microservices: independently deployable units that must communicate with each other to form a cohesive system. In dynamic environments like Azure Kubernetes Service (AKS), Azure App Service, or Azure Container Apps, service instances can scale up, scale down, or move across nodes at any time. This creates a challenge: how do services reliably find and talk to each other without hardcoding IP addresses or endpoints? The answer lies in the Service Discovery architecture pattern. https://dellenny.com/service-discovery-in-azure-dynamically-finding-service-instances/
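As a small concrete illustration of the pattern (my own example, not taken from the linked article): in AKS, the platform's built-in DNS-based discovery lets callers address a stable Service name instead of individual pod IPs. A minimal sketch, assuming a Service named "orders" in a namespace "shop":

```bash
# Minimal illustration of DNS-based service discovery inside an AKS cluster
# (assumed names: Service "orders" in namespace "shop").
kubectl get service orders -n shop          # stable ClusterIP fronting the pods
# From any pod in the cluster, the service is reachable by name,
# regardless of which nodes the backing pods land on:
#   curl http://orders.shop.svc.cluster.local/
```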
"Hello everyone, I'm starting a discussion to gather insights on a critical topic: security and governance for Azure Integration Services (AIS). As environments grow with dozens of Logic Apps, Functions, APIM instances, etc., it becomes harder to maintain a strong security posture. I’d like to hear from your experience: What are the most common security and governance blind spots people miss when building out their integration platforms on Azure? To get us started, here are a few areas I'm thinking about: Secret Management: Beyond just "use Key Vault," what are the subtle mistakes or challenges teams face? Network Security: How critical is VNet integration and the use of Private Endpoints for services like Service Bus and Storage Accounts in your opinion? When is it overkill? Monitoring & Observability: What are the best ways to get a single, unified view of a business transaction that flows through multiple Azure services for security auditing? Looking forward to a great discussion and learning from the community's collective experience!"32Views0likes0CommentsProblem with output variables in Self Hosted Agent

Problem with output variables in Self Hosted Agent
I have the following sample code that I run using Azure DevOps pipelines and it works without problems; I can see the value of the SAUCE variable in all the tasks in my job. Then I run it using a self-hosted agent that I have on an Azure virtual machine and it no longer works; for some reason the value of the variable is lost.

```yaml
- job: TestOutputVars
  displayName: Test Output Variables
  steps:
    - bash: |
        echo "##vso[task.setvariable variable=sauce;isOutput=true]crushed tomatoes"
        echo "my environment variable is $SAUCE"
    - bash: |
        echo "my environment variable is $SAUCE"
    - task: PowerShell@2
      inputs:
        targetType: "inline"
        script: |
          Write-Host "my environment variable is $env:SAUCE"
        pwsh: true
```

What could be the problem, or should I do some specific configuration on the virtual machine I am using as the agent?
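One thing worth ruling out before suspecting the agent (an observation about the snippet, not a confirmed diagnosis): with isOutput=true, the documented way to consume the value in later steps is to give the setting step a name and reference it as $(stepName.sauce), or to map it into a later step's env: block; relying on a bare $SAUCE is not the documented pattern. A minimal sketch of the referencing side, with "setSauce" as an assumed step name and the YAML wiring shown as comments:

```bash
# Sketch (assumed step name "setSauce" on the step that sets the variable):
# with isOutput=true, the value is not automatically exposed to later steps as $SAUCE.
echo "##vso[task.setvariable variable=sauce;isOutput=true]crushed tomatoes"
# A later step in the same job reads it with macro syntax, e.g.
#   echo "sauce is $(setSauce.sauce)"
# or maps it explicitly via the later step's env: block, e.g. SAUCE: $(setSauce.sauce)
```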

Delete Old IRM Labels
Hi, we just started using Purview and I want to set up sensitivity labels to protect information. Currently there are no sensitivity labels set up or visible in Purview. However, in Office apps I can still see some Rights Management protection labels which were set up 10 years ago or more. I think these may have been set using AD RMS in our online Microsoft domain, but I am not sure. They were never used and there are no documents protected using these labels. (To explain where I can see them: in Excel, for example, they are listed under File > Protect Workbook > Restrict Access.) I would like to get rid of these old labels so we can start clean using new sensitivity labels in Purview, but I can't find them listed anywhere and I can't find any articles that seem to cover this. I would be very grateful if anyone could explain how to list and hopefully delete these old labels so we can start fresh. Many thanks.

Create tilelayer using a GDAL Tiler tilematrix set
I'm porting an application from Bing Maps which used many tilesets generated using the GDAL tiler. In the Bing Maps API these layers needed a custom URL function which modified the y component. So where Azure Maps uses the option tileUrl: url, I need something like the following, which works in Bing:

```javascript
uriConstructor: function (coord) {
    // coord is a PyramidTileId with: z, y, zoom, pixwidth, pixheight.
    var ymax = 1 << coord.zoom;
    var y = ymax - coord.y - 1;
    return url.replace('{x}', coord.x).replace('{y}', y).replace('{z}', coord.zoom);
}
```

I do so hope there is someone out there with some cool code to get me out of a very tight spot. Thanks, Steve

Custom Windows Server Standard VM on Azure: It Works, But Is It Licensing Compliant?
Hi everyone, I wanted to share a recent technical experience where I successfully created and deployed a Windows Server Standard VM on Azure using a fully custom image. I started by downloading the official Windows Server Standard Evaluation ISO. I created a Generation 2 VM in Hyper-V and completed the OS setup using the Desktop Experience edition. Once the configuration was done, I ran sysprep to generalize the image. After that, I converted the disk from VHDX to VHD in fixed format, which turned out to be a critical step because Azure does not accept dynamic disks. The resulting file was around 127 GB, so I uploaded it to a premium storage account container to ensure performance. From there, I created a Generation 2 image in Azure and deployed a new VM from it. I then activated the Standard edition with a valid product key. Everything worked smoothly, but I'm still unsure whether this method is fully compliant with Microsoft's licensing policies. Specifically, I'm trying to understand whether going from an Evaluation ISO to sysprep, upload, deployment, and activation in Azure is a valid and compliant scenario when not using BYOL with Software Assurance or a CSP license. Has anyone gone through this process or has any insights on the compliance aspect? Thanks in advance for any guidance or clarification.
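For reference only, the upload-and-image flow described above can also be scripted; the sketch below uses placeholder names, mirrors the manual portal steps, and says nothing about the licensing question itself:

```bash
# Sketch of the described technical flow with placeholder names.
az storage blob upload \
  --account-name <storage-account> --container-name vhds \
  --name winsrv-custom.vhd --file ./winsrv-custom.vhd
az image create \
  --resource-group <rg> --name winsrv-custom-image \
  --os-type Windows --hyper-v-generation V2 \
  --source https://<storage-account>.blob.core.windows.net/vhds/winsrv-custom.vhd
az vm create \
  --resource-group <rg> --name winsrv-custom-vm \
  --image winsrv-custom-image \
  --admin-username azureuser --admin-password '<password>'
```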

Pipeline is running test cases twice after moving them to a new folder
Hi, I'm running test cases (written in SpecFlow/C#) on an Azure Pipeline connected to an Azure agent on an on-premises PC. I recently moved some feature files (test cases) to a subfolder and it works great when I run locally. But when I run the same code in the pipeline, the moved test cases are run twice. I disabled some test cases (by commenting out the content in the feature file) and ran the pipeline, which strangely still ran the disabled test case (but only once this time). It seems that Azure uses some cached data when running and doesn't really notice all committed changes. Have any of you seen this problem before?
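One pattern worth checking (an assumption based on the symptoms, not a confirmed diagnosis): self-hosted agents reuse their working directory between runs, so stale build output from before the move can still match the test task's assembly wildcard and be discovered alongside the new copies. Cleaning the sources before building rules that out, either with the pipeline's own clean settings or with a script step like this sketch:

```bash
# Run at the start of the job on the self-hosted agent to discard stale build output
# left over from previous runs (assumes the repository has already been checked out).
git clean -xdff   # remove untracked files and directories, including bin/ and obj/
git reset --hard  # restore tracked files to the committed state
```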

Enhance YAML Pipelines 'Changes' API Endpoint to allow user to specify the 'artifact'
There exists an API endpoint that allows a user to request the changes for a Classic Release run and pass the artifact alias that they would like the changes from: https://vsrm.dev.azure.com/{collection}/{project}/_apis/Release/releases/{releaseId}/changes?artifactAlias={artifactAlias} If a team replicates this pattern in YAML pipelines, where there is a build pipeline (or several) and then a multi-stage YAML 'release' pipeline, there is no way to get the changes from the build artifacts. You can request the changes from the API for the multi-stage 'release' pipeline, but you cannot get the changes for the build pipelines IN THE CONTEXT OF the currently releasing stage. The build artifacts are specified in the release pipeline in the resources:

```yaml
resources:
  pipelines:
    - pipeline: myBuildPipeline
      source: myBuildPipeline
      trigger:
        branches:
          include:
            - main
```
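As a partial workaround today (a suggestion, not an existing capability of the endpoint above): the consuming stage knows which build run it is releasing through the resources.pipeline.* predefined variables, so the build-level changes can be fetched per run from the build changes REST endpoint. A minimal sketch, where the organization and project are placeholders and AZURE_DEVOPS_TOKEN is assumed to hold a PAT (or System.AccessToken inside a pipeline step):

```bash
# Sketch of a workaround: fetch the changes of the specific build run consumed by this
# release run. Inside a pipeline bash step, $(resources.pipeline.myBuildPipeline.runID)
# is expanded by Azure Pipelines before the script executes.
BUILD_RUN_ID="$(resources.pipeline.myBuildPipeline.runID)"
curl -s -u ":${AZURE_DEVOPS_TOKEN}" \
  "https://dev.azure.com/<ORG>/<PROJECT>/_apis/build/builds/${BUILD_RUN_ID}/changes?api-version=7.0"
```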
Recent Blogs
- Introduction: The Evolution of AI-Powered App Service Applications. Over the past few months, we've been exploring how to supercharge existing Azure App Service applications with AI capabilities. If... (Sep 05, 2025)
- What's New? The updated experience introduces a dedicated App Service Quota blade in the Azure portal, offering a streamlined and intuitive interface to: View current usage and limits across th... (Sep 04, 2025)