Latest Discussions
How to register an Azure Pipeline from a GitHub repository using the Azure DevOps API
Hello! I'm working on automating Azure Pipeline registrations (to work similarly to GitHub Actions). Our scenario: our repositories are hosted on GitHub; the service connection already exists and I can use it normally from the Azure DevOps web UI; and each repository's pipeline definition lives in a file at the path automation/pipeline.yaml. The question is: how can I use the Azure DevOps API to register these pipelines? I'm checking the documentation here: https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/pipelines/create?view=azure-devops-rest-7.0 but I didn't find anything relevant. The parameter descriptions don't say much about how to configure the repository, etc. Am I missing any necessary documentation? Any directions on how to do it?
Posted by rfrezino on Jul 19, 2025

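For reference, the Pipelines - Create endpoint does accept a GitHub-backed configuration; the YAML path, repository name, and service connection ID all go inside the configuration block. Below is a minimal sketch of the call; the organization, project, repository names, and service connection GUID are placeholders, and the PAT needs permission to create pipelines:

```python
import base64
import requests

ORG = "my-org"                       # placeholder: Azure DevOps organization
PROJECT = "my-project"               # placeholder: target project
PAT = "<personal-access-token>"      # placeholder: PAT allowed to create pipelines
CONNECTION_ID = "<service-connection-guid>"  # the existing GitHub service connection

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines?api-version=7.0"
body = {
    "name": "my-repo-ci",
    "folder": "\\",
    "configuration": {
        "type": "yaml",
        "path": "automation/pipeline.yaml",       # YAML definition inside the GitHub repo
        "repository": {
            "fullName": "my-github-org/my-repo",   # owner/repo on GitHub
            "type": "gitHub",
            "connection": {"id": CONNECTION_ID},
        },
    },
}

# A PAT is sent as basic auth with an empty username.
token = base64.b64encode(f":{PAT}".encode()).decode()
resp = requests.post(url, json=body, headers={"Authorization": f"Basic {token}"})
resp.raise_for_status()
print(resp.json()["_links"]["web"]["href"])        # web URL of the new pipeline
```

Looping over your repositories and substituting fullName should then be enough to register one pipeline per repo.
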
Require latest target branch to be merged as PR check
Hi, we use feature branches actively, and each branch is merged via a PR. Before a PR can be merged we run extensive build validations (e.g. build the solution, deploy it, execute it, destroy it). However, some branches take a while to complete; by the time they are ready they branched off master several days or weeks earlier, and in the meantime lots of other PRs have overtaken them and master has moved on. I want every branch to be up to date with its target branch, and otherwise a build validation should fail. It seems this has been requested at:
- https://github.com/MicrosoftDocs/azure-devops-docs/issues/8083
- https://stackoverflow.com/questions/64029333/vsts-how-to-require-a-branch-to-be-up-to-date-before-merging-doing-pull-reques
However, I am wondering why there is no documented solution today. Am I missing something? Or is it considered unnecessary because PR-triggered runs in Azure DevOps do not actually validate my branch itself but the result of merging my branch with the target branch? Thanks for your support, Felix
Posted by azdataguy on Jul 19, 2025

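There is still no built-in "require branch to be up to date" policy, but one common workaround is a build validation step that fails when the source branch does not already contain the tip of the target branch. A rough sketch, assuming it runs inside a PR-triggered build where the System.PullRequest.* variables are available and enough history has been fetched:

```python
import os
import subprocess
import sys

# Azure DevOps sets these in PR-triggered builds, e.g. "refs/heads/master".
source = os.environ["SYSTEM_PULLREQUEST_SOURCEBRANCH"].removeprefix("refs/heads/")
target = os.environ["SYSTEM_PULLREQUEST_TARGETBRANCH"].removeprefix("refs/heads/")

# Make sure both branch tips exist locally (shallow clones would hide history).
subprocess.run(["git", "fetch", "origin", source, target], check=True)

# Exit code 0 means origin/<target> is an ancestor of origin/<source>, i.e. the
# feature branch already includes the latest target-branch commits.
check = subprocess.run(
    ["git", "merge-base", "--is-ancestor", f"origin/{target}", f"origin/{source}"]
)
if check.returncode != 0:
    print(f"##vso[task.logissue type=error]'{source}' is behind '{target}'; "
          f"merge or rebase '{target}' before completing the PR.")
    sys.exit(1)
```

Added as a required build validation on the target branch, this keeps the PR blocked until the author merges or rebases the latest target into the feature branch.
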
Built a Real-Time Azure AI + AKS + DevOps Project – Looking for Feedback
Hi everyone, I recently completed a real-time project using Microsoft Azure services to build a cloud-native healthcare monitoring system. The key services used include:
- Azure AI (Cognitive Services, OpenAI)
- Azure Kubernetes Service (AKS)
- Azure DevOps and GitHub Actions
- Azure Monitor, Key Vault, API Management, and others
The project focuses on real-time health risk prediction using simulated sensor data. It is built with containerized microservices, infrastructure as code, and end-to-end automation. GitHub link (with source code and documentation): https://github.com/kavin3021/AI-Driven-Predictive-Healthcare-Ecosystem I would really appreciate your feedback or suggestions to improve the solution. Thank you!
Posted by Kavindhiran on Jul 18, 2025

Authenticate Azure Repositories in Pipelines
Hi, I'm trying to use Julia's LocalRegistry with Azure DevOps. LocalRegistry is basically a Git repository with references to other Git repositories. In Azure DevOps I can check out additional repositories using the following syntax:
resources:
  repositories:
    - repository: ProjectA
      type: git
      name: ProjectA/GitA
      (...)
- checkout: ProjectA
However, Julia's LocalRegistry just uses the direct Git repo URL and an internal Git manager to pull the repo and resolve references. So, by design, I don't use the checkout feature from DevOps but let Julia clone the Git repo internally. For this step I can take a PAT (the System.AccessToken is not working for me here?), put it in the Git repo URL, and use that for the LocalRegistry. However, I can't include a PAT in the Git URL references stored in the registry repo. So Julia's LocalRegistry can successfully obtain a copy of the current index, but it fails when it comes to actually pulling other projects with the package manager, with the following error message: error: GitError(Code:EUSER, Class:Callback, Aborting, user cancelled credential request.) What could I do here? How can I add the required credentials?
Posted by Ahoeck on Jul 18, 2025

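One direction worth trying is to inject credentials at the Git configuration level rather than into the URLs stored in the registry, so that every clone Julia performs internally picks them up. The sketch below is an assumption-laden example: it presumes the pipeline step maps $(System.AccessToken) into its environment, and that the project's build service identity has read access to the referenced repositories (if it does not, that would also explain why System.AccessToken appeared not to work).

```python
import os
import subprocess

# Assumption: the pipeline step exposes the job token, e.g.
#   env:
#     SYSTEM_ACCESSTOKEN: $(System.AccessToken)
token = os.environ["SYSTEM_ACCESSTOKEN"]

# Rewrite every dev.azure.com URL so clones made outside the checkout task
# (such as the ones Julia's LocalRegistry runs) carry credentials too.
subprocess.run(
    [
        "git", "config", "--global",
        f"url.https://build:{token}@dev.azure.com/.insteadOf",
        "https://dev.azure.com/",
    ],
    check=True,
)

# Julia's Pkg uses libgit2 by default; if it ignores the rewrite, forcing it to
# shell out to command-line git (Julia 1.7+) makes it honor the config above.
print("##vso[task.setvariable variable=JULIA_PKG_USE_CLI_GIT]true")
```

The libgit2 error "user cancelled credential request" generally just means no usable credential was offered for the URL being cloned, so fixing the credential source (config rewrite plus repository permissions) is usually enough.
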
How to deploy n8n on Azure App Service and leverage the benefits provided by Azure
Lately, n8n has been gaining serious traction in the automation world, and it's easy to see why. With its open-source core, visual workflow builder, and endless integration capabilities, it has become a favorite for developers and tech teams looking to automate processes without being locked into a single vendor. Given all the buzz, I thought it would be the perfect time to share a practical way to run n8n on Microsoft Azure using App Service. Why? Because Azure offers a solid, scalable, and secure platform that makes deployment easy, while still giving you full control over your container and configurations. Whether you're building a quick demo or setting up a production-ready instance, Azure App Service brings a lot of advantages to the table, like simplified scaling, integrated monitoring, built-in security features, and seamless CI/CD support. In this post, I'll walk you through how to get your own n8n instance up and running on Azure, from creating the resource group to setting up environment variables and deploying the container. If you're into low-code automation and cloud-native solutions, this is a great way to combine both worlds.

The first step is to create our Resource Group (RG); in my case, I will name it "n8n-rg".

Now we proceed to create the App Service. At this point, it's important to select the appropriate configuration depending on your needs, for example, whether or not you want to include a database. If you choose to include one, Azure will handle the connections for you, and you can select from various types. In my case, I will proceed without a database.

Next, configure the instance details. First, select the instance name, the 'Publish' option, and the 'Operating System'. In this case, it is important to choose 'Publish: Container', set the operating system to Linux, and, most importantly, select the region closest to you or your clients.

Service Plan configuration: here you should select the plan based on your specific needs. Keep in mind that we are using a PaaS offering, which means underlying compute resources like CPU and RAM are still being used. Depending on the expected workload, choose the most appropriate plan, and consider the features offered by each tier, such as redundancy, backup, autoscaling, custom domains, etc. In my case, I will use the Basic B1 plan.

In the Database section, we do not select any option. Remember that this will depend on your specific requirements.

In the Container section, under 'Image Source', select 'Other container registries'. For production environments, I recommend using Azure Container Registry (ACR) and pulling the n8n image from there.

Now we configure the Docker Hub options. This step is related to the previous one, as the available options vary depending on the image source. In our case, we will use the public n8n image from Docker Hub, so we select 'Public' and fill in the required fields: the first being the server, and the second the image name. This step is very important: use the exact same values to avoid issues.

In the Networking section, we select the values as shown in the image. This configuration will depend on your specific use case, particularly whether to enable Virtual Network (VNet) integration or not. VNet integration is typically used when the App Service needs to securely communicate with private resources (such as databases, APIs, or services) that reside within an Azure Virtual Network.
Since this is a demo environment, we will leave the default settings without enabling VNet integration.

In the 'Monitoring and Security' section, it is essential to enable these features to ensure traceability, observability, and additional security layers; this is considered a minimum requirement in production environments. At the very least, make sure to enable Application Insights by selecting 'Yes'. Finally, click 'Create' and wait for the deployment to complete.

Now we 'stop' our Web App, as we need to make some preliminary modifications. To do this, go to the main overview page of the Web App and click 'Stop'.

On the same Web App overview page, navigate through the left-hand panel to the 'Settings' section, click on it, and select 'Environment Variables'. Environment variables are key-value pairs used to configure the behavior of your application without changing the source code. In the case of n8n, they are essential for defining authentication, webhook behavior, port configuration, timezone settings, and more. Environment variables in Azure Web Apps work the same way as they do outside of Azure. In this case, we add the variables required for n8n to operate properly. Note: the variable APP_SERVICE_STORAGE should only be modified by setting it to true.

Once the environment variables have been added, save them by clicking 'Apply' and confirming the changes. A confirmation dialog will appear to finalize the operation. Restart the Web App. This second startup may take longer than usual, typically around 5 to 7 minutes, as the environment initializes with the new configuration.

Now, as we can see, the application has loaded successfully, and we can start using our own n8n server hosted on Azure; it references the host configured in the App Service. I hope you found this guide helpful and that it serves as a useful resource for deploying n8n on Azure App Service. If you have any questions or need further clarification, feel free to reach out; I'd be happy to help.

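For readers who prefer scripting the same walkthrough, here is a rough Azure CLI equivalent driven from Python. The resource names, SKU, and the specific n8n environment variables are assumptions for a typical single-container setup, not an exact reproduction of the settings described above; adjust them to your own values.

```python
import subprocess

RG = "n8n-rg"
PLAN = "n8n-plan"
APP = "my-n8n-app"                    # must be globally unique: <app>.azurewebsites.net
IMAGE = "docker.io/n8nio/n8n:latest"  # assumption: the public n8n image on Docker Hub

def az(*args):
    """Run an az CLI command and fail loudly if it errors."""
    subprocess.run(["az", *args], check=True)

az("group", "create", "--name", RG, "--location", "eastus")
az("appservice", "plan", "create", "--name", PLAN, "--resource-group", RG,
   "--is-linux", "--sku", "B1")
# Older azure-cli versions use --deployment-container-image-name instead.
az("webapp", "create", "--name", APP, "--resource-group", RG,
   "--plan", PLAN, "--container-image-name", IMAGE)

# Typical n8n settings for App Service: WEBSITES_PORT tells App Service which
# container port to route to, and the storage flag persists /home across restarts.
az("webapp", "config", "appsettings", "set", "--name", APP, "--resource-group", RG,
   "--settings",
   "WEBSITES_ENABLE_APP_SERVICE_STORAGE=true",
   "WEBSITES_PORT=5678",
   "N8N_PORT=5678",
   "N8N_PROTOCOL=https",
   f"N8N_HOST={APP}.azurewebsites.net",
   f"WEBHOOK_URL=https://{APP}.azurewebsites.net/",
   "GENERIC_TIMEZONE=UTC")
```
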
Freeze column headers in Azure DevOps Sprint Taskboard
Hello, is there any way to keep the column headers in view while scrolling down the Azure DevOps Sprint Taskboard?
1. Open a project in Azure DevOps and select Sprints in the left menu bar.
2. Go to a sprint's taskboard and scroll down.
3. Notice that the lane headers (New, In Progress, Resolved, Closed) are no longer visible; the user needs to scroll back up to see which column (or lane) a task resides in.
Posted by gabicrecan on Jul 17, 2025

Azure VM Windows Server 2022 Domain Joining Issue
We have multiple Windows Server 2022 VMs in a dedicated resource group, created following best practices for each engagement. All firewall rules, VNet, routing, and NSGs are configured, with Azure Firewall set up to allow communication with the on-premises Active Directory. Telnet, nslookup, and ping tests are successful, but attempts to join the domain fail with an error stating that the network path object is no longer available. Any recommendations for effective troubleshooting steps?

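Since the basic connectivity tests pass, it may help to verify every port the domain-join sequence actually needs rather than only the ones telnet covered; "network path" style errors most often point at TCP 445 (SMB) or the dynamic RPC range being blocked by the Azure Firewall or an NSG. A quick sketch to run from one of the VMs, with the domain controller address as a placeholder (TCP only; DNS and Kerberos also use UDP, and RPC additionally needs the 49152-65535 dynamic range):

```python
import socket

DC = "dc01.corp.example.com"   # placeholder: your domain controller's FQDN or IP

# Ports commonly required for a domain join.
PORTS = {
    53: "DNS",
    88: "Kerberos",
    135: "RPC endpoint mapper",
    389: "LDAP",
    445: "SMB",
    464: "Kerberos password change",
}

for port, name in sorted(PORTS.items()):
    try:
        with socket.create_connection((DC, port), timeout=3):
            print(f"{port:>5} ({name}): reachable")
    except OSError as err:
        print(f"{port:>5} ({name}): blocked or unreachable ({err})")
```
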
Upgrading a basic public IP address to Standard SKU for Azure Express Route Virtual Network Gateway
Hi, there is a well-known announcement from Microsoft that the Basic public IP SKU will be retired at the end of September 2025. I have an ExpressRoute virtual network gateway with a Basic public IP. The gateway uses the 'Standard' SKU, which is a non-AZ-enabled gateway SKU. I have read some of the Microsoft guides, but there is conflicting information on the migration.
(1) https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-basic-upgrade-guidance says: "New ExpressRoute Gateway is required. Follow the ExpressRoute Gateway migration guidance for upgrading from Basic to Standard SKU."
(2) https://learn.microsoft.com/en-us/azure/expressroute/gateway-migration says the guided gateway migration experience supports: non-AZ-enabled SKU on Basic IP to non-AZ-enabled SKU on Standard IP; non-AZ-enabled SKU on Basic IP to AZ-enabled SKU on Standard IP; non-AZ-enabled SKU on Standard IP to AZ-enabled SKU on Standard IP.
The first supported scenario in (2) suggests that for a non-AZ-enabled SKU we can simply upgrade the public IP from the Basic to the Standard SKU. So I am not sure whether (1) a new gateway is required, or (2) we just use the upgrade link to move the Basic IP to a Standard IP. I don't have a spare lab environment to test this for ExpressRoute, so doing it in production without a full understanding or a fallback plan would be very risky. Please help.
Posted by Khoon Yong Chua on Jul 16, 2025

Migrating Builds from TFS 2017 to DevOps Server 2022, a few questions...
Hi all, we are moving to Azure DevOps Server 2022, and our on-prem build definitions will have to be converted to the infrastructure-as-code YAML format. My questions relate to getting started. Currently, with TFS, I just choose New Definition, add and configure the steps or tasks, and away the build goes. With the new format, as I understand it, the build definition is now a YAML file kept in source control. How do I get started creating this file, and where do I store it in source control, or is that all handled automatically by a New Definition option? Once I figure that out, I'll be recreating our definitions with the Classic option for task configuration until I'm up to speed enough with YAML to script on the fly. Any information or help is appreciated. Thanks!
Posted by MaWa316 on Jul 16, 2025

Non-SaaS Product GIT Branching Strategy
Dear Team, what's your recommended approach?
- A non-SaaS product with two repos: backend and frontend.
- Current approach: Dev, QA, and Prod branches.
- A sprint branch (we can't use feature branches, as many APIs and multiple user stories impact the same set of APIs) is created from Dev and merged back into Dev at the end of the sprint.
- After each sprint, the Dev branch is tagged and a PR is raised into the QA branch.
- Customers are given Docker images built from specific tags on the QA branch.
Now comes the fun part. Say customer 1 is on tag v4.3.0, customer 2 is on tag v4.4.0, and the product's last release is tag v4.5.0; the current active sprint, once complete, would become v4.6.0, and developers are currently working on the sprint branch for v4.6.0. Bug 1 is reported by customer 1 in v4.3.0, and Bug 2 is reported by customer 2 in v4.4.0. I can check out the specific tag, make the fix, then manually apply it to the other tags and release to those customers if the bug is common, or else manually apply it to the active branch so it is not missed in the next release. But what if Bug 1 is a feature for customer 2, who doesn't need it? Where should I store those changes, and in which branch? I want to avoid customer-specific branches, as they become a big overhead. Suggestions welcome!
Posted by GeorgeAbraham on Jul 16, 2025