Recent Discussions
How to deploy n8n on Azure App Service and leverage the benefits provided by Azure.
Lately, n8n has been gaining serious traction in the automation world—and it’s easy to see why. With its open-source core, visual workflow builder, and endless integration capabilities, it has become a favorite for developers and tech teams looking to automate processes without being locked into a single vendor. Given all the buzz, I thought it would be the perfect time to share a practical way to run n8n on Microsoft Azure using App Service. Why? Because Azure offers a solid, scalable, and secure platform that makes deployment easy, while still giving you full control over your container and configurations. Whether you're building a quick demo or setting up a production-ready instance, Azure App Service brings a lot of advantages to the table—like simplified scaling, integrated monitoring, built-in security features, and seamless CI/CD support. In this post, I’ll walk you through how to get your own n8n instance up and running on Azure—from creating the resource group to setting up environment variables and deploying the container. If you're into low-code automation and cloud-native solutions, this is a great way to combine both worlds.

The first step is to create our Resource Group (RG); in my case, I will name it "n8n-rg".

Now we proceed to create the App Service. At this point, it's important to select the appropriate configuration depending on your needs—for example, whether or not you want to include a database. If you choose to include one, Azure will handle the connections for you, and you can select from various types. In my case, I will proceed without a database.

Proceed to configure the instance details. First, select the instance name, the 'Publish' option, and the 'Operating System'. In this case, it is important to choose 'Publish: Container', set the operating system to Linux, and, most importantly, select the region closest to you or your clients.

Service Plan configuration. Here, you should select the plan based on your specific needs.
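The resource group step above can also be scripted with the Azure CLI. This is a minimal sketch: the group name comes from the post, while the region is an assumption to replace with the one closest to you.

```shell
# "n8n-rg" is the resource group name used in this walkthrough.
# The region is an assumption -- pick the one closest to you or your clients.
az group create --name n8n-rg --location westeurope
```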
Keep in mind that we are using a PaaS offering, which means that underlying compute resources like CPU and RAM are still being utilized. Depending on the expected workload, you can choose the most appropriate plan. Secondly—and very importantly—consider the features offered by each tier, such as redundancy, backup, autoscaling, custom domains, etc. In my case, I will use the Basic B1 plan.

In the Database section, we do not select any option. Remember that this will depend on your specific requirements.

In the Container section, under 'Image Source', select 'Other container registries'. For production environments, I recommend using Azure Container Registry (ACR) and pulling the n8n image from there.

Now we will configure the Docker Hub options. This step is related to the previous one, as the available options vary depending on the image source. In our case, we will use the public n8n image from Docker Hub, so we select 'Public' and proceed to fill in the required fields: the first being the server, and the second the image name. This step is very important—use the exact same values to avoid issues.

In the Networking section, we will select the values as shown in the image. This configuration will depend on your specific use case—particularly whether to enable Virtual Network (VNet) integration or not. VNet integration is typically used when the App Service needs to securely communicate with private resources (such as databases, APIs, or services) that reside within an Azure Virtual Network. Since this is a demo environment, we will leave the default settings without enabling VNet integration.

In the 'Monitoring and Security' section, it is essential to enable these features to ensure traceability, observability, and additional security layers. This is considered a minimum requirement in production environments. At the very least, make sure to enable Application Insights by selecting 'Yes'.
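For reference, the plan and container choices described above map to roughly the following Azure CLI calls. This is a hedged sketch, not part of the original walkthrough: the plan and app names are hypothetical, `n8nio/n8n` is the public Docker Hub image the post refers to, and flag names can differ between Azure CLI versions.

```shell
# Basic B1 Linux App Service plan, matching the tier chosen in the post.
az appservice plan create \
  --name n8n-plan \
  --resource-group n8n-rg \
  --sku B1 \
  --is-linux

# Web app running the public n8n container image from Docker Hub.
# Note: older Azure CLI versions use --deployment-container-image-name instead.
az webapp create \
  --name my-n8n-app \
  --resource-group n8n-rg \
  --plan n8n-plan \
  --container-image-name docker.io/n8nio/n8n:latest
```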
Finally, click on 'Create' and wait for the deployment process to complete.

Now we will 'stop' our Web App, as we need to make some preliminary modifications. To do this, go to the main overview page of the Web App and click on 'Stop'.

On the same Web App overview page, navigate through the left-hand panel to the 'Settings' section, then select 'Environment Variables'. Environment variables are key-value pairs used to configure the behavior of your application without changing the source code; within Azure Web Apps they work exactly the same way as they do anywhere else. In the case of n8n, they are essential for defining authentication, webhook behavior, port configuration, timezone settings, and more. We will now add the variables required for n8n to operate properly. Note: The variable APP_SERVICE_STORAGE should only be modified by setting it to true.

Once the environment variables have been added, save them by clicking 'Apply' and confirming the changes. A confirmation dialog will appear to finalize the operation.

Restart the Web App. This second startup may take longer than usual, typically around 5 to 7 minutes, as the environment initializes with the new configuration.

Now, as we can see, the application has loaded successfully, and we can start using our own n8n server hosted on Azure. As you can observe, it references the host configured in the App Service.

I hope you found this guide helpful and that it serves as a useful resource for deploying n8n on Azure App Service. If you have any questions or need further clarification, feel free to reach out—I'd be happy to help.
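As a sketch, the environment-variable step could also be done from the Azure CLI. The post does not list the exact variables, so everything below is an assumption to adapt: the app name is hypothetical, the settings are the ones commonly used for n8n on App Service, and the storage flag's full name is assumed to be WEBSITES_ENABLE_APP_SERVICE_STORAGE.

```shell
# Hypothetical app name; replace with your own Web App.
APP=my-n8n-app
RG=n8n-rg

# Commonly used n8n settings on App Service (assumed, not listed in the post).
# WEBSITES_PORT tells App Service which container port to expose (n8n listens on 5678).
az webapp config appsettings set --name $APP --resource-group $RG --settings \
  WEBSITES_ENABLE_APP_SERVICE_STORAGE=true \
  WEBSITES_PORT=5678 \
  N8N_PORT=5678 \
  N8N_PROTOCOL=https \
  N8N_HOST=$APP.azurewebsites.net \
  WEBHOOK_URL=https://$APP.azurewebsites.net/

# Start the app again after applying the settings.
az webapp start --name $APP --resource-group $RG
```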
Billing and payment methods

I had been paying for Azure subscriptions with a credit card issued by a bank in Ghana. I want to add a new credit card issued by a bank in the US. The Azure portal doesn't give me the option to change the billing country from Ghana to the United States. Any suggestions on what to do?
This is not a question but rather a feature request. I'm not sure how, up to this point, no users have complained about this lack of functionality. Currently, if a remote terminal server client is published using RDS as a RemoteApp, it will not accept Windows key shortcuts (Alt+Tab, Win+R, etc.). This is a major gap in functionality, especially for people who use a remote connection as their primary means of doing their work. A similar solution developed by Citrix, called Citrix Workspace, doesn't have this issue. I'm sure this feature would be greatly appreciated by Windows users who use remote connections and have mastered keyboard shortcuts.

Windows App, pasting files hangs OS
Hi all, We’re experiencing an intermittent but frustrating issue when using the Windows App to connect to our Azure Virtual Desktop environment.

Issue: When users attempt to copy and paste certain files from their local machine to the remote session, the operating system on the remote side hangs. The mouse still moves, and the clock continues to tick, but:
-Start menu becomes unresponsive
-Taskbar icons stop registering clicks
-Desktop icons are frozen
-No error messages appear
This occurs sporadically and seems to affect files of varying sizes and types, from 100KB up to 20MB.

What we've tried:
-Updating the Windows App to the latest version
-Verifying clipboard redirection is enabled
-Using MSTSC and the Remote Desktop Store App. These work but don't support Session Pools, RemoteApps, or SSO.
-Using RemoteDesktop_1.2.6228.0 (MSI install), which has the same issue as the Windows App

Environment:
-Remote app hosted in Azure Virtual Desktop (AVD)
-Users connecting from Windows 10/11 clients
-Windows App versions: 2.0.419.0, 2.0.420.0, and 2.0.500.0

All ideas welcome - this is a major disruption to our business processes.

Microsoft App Access Panel requires MFA but we didn't enable it
Hi. Recently we've received a report from a user that he was asked to perform MFA when signing in. After checking the sign-in logs, we found that it was an application called "Microsoft App Access Panel" and the status of that sign-in attempt was "interrupted". The detail of the log tells us that the authentication policy applied was "App requires MFA", but we couldn't find that policy anywhere in Conditional Access. The only MFA-related policy in Conditional Access is one that requires users to perform MFA, and it only includes "Office 365 Exchange Online"; since that policy is not related to "Microsoft App Access Panel" and the said user was excluded from it, I don't think that's the issue. We have already set "Enable Security defaults" to "No", and we've checked that the "Multi-factor Auth Status" for the user was "Disabled". Does anyone know what in Azure could possibly be causing MFA? Thanks.

How to add custom GitHub Apps in Azure DevOps?
Hello! We have quite a large mono repo in which more than 30-40 teams participate, and we have 150+ pipelines that get triggered for team-specific commits. We use the default Azure Pipelines GitHub App to connect Azure DevOps with our GitHub repos. Since this is a shared app, other repos in our organization rely on it as well. We started getting GitHub rate limit issues, as the Azure Pipelines GitHub App hits about 15,000 requests per hour at least twice or thrice a day. So GitHub support suggested creating a custom GitHub App and pointing some of the Azure pipelines at the new app. We need some help on how to add the custom-created GitHub App in Azure DevOps. We can only see OAuth App integration for GitHub, but how do we add a GitHub Apps integration? If we use the OAuth App integration, the repos show as unavailable even though the new custom app actually has access to them. Thanks, Sundar.

Work Item Labels - Not working for multi-line text fields
For some reason, it seems that field Labels for work items are not working for me for multi-line text fields. For instance, I have a work item field named "Test Setup Steps," but would like it to be displayed as "Test Conditions" on a certain work item type. I have changed the Label on that work item type for that field to "Test Conditions," but "Test Setup Steps" is still displayed on the actual work items. Has anyone else run into this before? Labels seem to be working on other fields for that work item type.

Failed to download or parse releases-index.json with error: {"code":"UNABLE_TO_GET_ISSUER_CERT_LOCAL
Hi All, I am facing an issue while running an Azure DevOps pipeline for a .NET Core application. In the pipeline I am using the step "Use .NET Core sdk 6.0.x". At this step I am getting the error mentioned in the subject. Earlier the pipeline was running successfully, and I haven't modified anything in it, so I'm not sure what is happening. Any idea? Thanks in advance

How to obtain a new partner integration sandbox license
Our current Partner Integration Sandbox license has expired, and I'm looking for a way to obtain a new one within the same tenant. Can anybody help me with that? I've been trying to get an answer from subscription and sandbox support for a couple of days now, but they keep sending me to different departments, and I still have no answer.

Azure app service - diagnostics Rest API
Hello team, I have recently noticed the Azure App Service REST API references for diagnostics. Is this referring to the 'Diagnose and solve problems' feature in the portal, or is it something different? I am not able to find any related documentation about how to use them. Please share any official documentation about these diagnostics REST API references: https://learn.microsoft.com/en-us/rest/api/appservice/diagnostics Thanks

Network Design Ideas for VMs
I am analyzing the current Azure environment at my new job and trying to figure out the architectural choices, mostly networking-wise. Currently, we have 10 VMs, each VM has its own VNet, and they are all in the same region. In my experience so far, I have never seen such a network design in Azure before.

Storage Accounts - Networking
Hi All, This seems like a basic issue; however, I cannot seem to resolve it. In a nutshell, a number of storage accounts (and other resources) were created with Public Network Access set as below:

I would like to change them all to 'Enabled from selected virtual networks and IP addresses', or even 'Disabled'. However, when I change to 'Enabled from selected virtual networks and IP addresses', connectivity from, for example, Power BI to the storage account fails. I have added the VPN IPs, my local IP, etc., but all continue to fail connection or authentication. Once it is changed back to 'Enabled from all networks', everything works, i.e. Power BI can access the Azure Blob Storage and refresh successfully.

I have also enabled 'Allow Azure services on the trusted services list to access this storage account', but Power BI still fails to access the data: a Data Source Credentials error, whether using Key, Service Principal, etc. As soon as I switch back to 'Enabled from all networks', it authenticates straight away.

One more idea I had was to add ALL of the resource instances, as this would whitelist more Azure services, although Power BI should be covered by 'Allow Azure services on the trusted services list to access this storage account'; I thought I might give it a try. Also, I created an NSG and used the ServiceTags file to create an inbound rule to allow Power BI from UK South, and I have created a Private Endpoint.

This should all have worked, but I still can't set it to restricted networks. I must be missing something fundamental, or there is something fundamentally off with this tenant. When either of the two restrictive options is selected, do they also block various Microsoft services? Any help would be gratefully appreciated.
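Not an answer to the credential error, but for reference, the 'selected networks' configuration described above can be reproduced with the Azure CLI. The account name, resource group, and IP below are hypothetical placeholders.

```shell
# Hypothetical names/addresses; substitute your own.
ACCOUNT=mystorageacct
RG=my-rg

# Switch the account from "Enabled from all networks" to selected networks,
# while still letting trusted Azure services through.
az storage account update --name $ACCOUNT --resource-group $RG \
  --default-action Deny --bypass AzureServices

# Allow a specific public IP (e.g., a VPN egress address).
az storage account network-rule add --account-name $ACCOUNT \
  --resource-group $RG --ip-address 203.0.113.10
```

One plausible explanation for the Power BI failures, to be verified against the published service IP ranges, is that Power BI service refreshes originate from Azure datacenter addresses rather than from your VPN or local IP, so allow-listing those client IPs would not cover them.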
Cost-effective alternatives to control table for processed files in Azure Synapse

Hello, good morning. In Azure Synapse Analytics, I want to have a control table for the files that have already been processed by the bronze or silver layers. For this, I wanted to create a dedicated pool, but I see that at the minimum performance level it charges 1.51 USD per hour (as I show in the image), so I wanted to know what other, more economical alternatives I have, since I will need to do inserts and updates to this control table, and with a serverless option this is not possible.
I’m currently managing Conditional Access policies in our tenant to enforce strict access to the Azure portal. Specifically, we've restricted access to Azure unless the user is coming from an approved IP range. This is working as expected. However, we're using Privileged Identity Management (PIM) for just-in-time (JIT) group membership activation. I'd like users to be able to elevate themselves into security groups configured via PIM without needing full access to the Azure portal.

My question is: Is there a way to allow users to activate PIM group assignments (JIT group membership or ownership) without providing full access to Azure?

Alternatively: Are there specific endpoints or app IDs that can be excluded in Conditional Access to allow only PIM group activation? Has anyone found a workaround, such as scripting or automating the group activation, that maintains a strong security posture?

I’d appreciate any recommendations, insights, or proven solutions that let us support PIM group workflows without opening up full Azure portal access. Thanks in advance!

Can onboard to MDE with stand-alone script, but gpo option does not work.
I'm sure this question has been asked before, but I have not been able to find anything. I can use the stand-alone script to get a workstation onboarded, but using the GPO option is not working. If anyone has experience with this, please let me know if there are any nuances to getting the GPO option to work. I have followed the basic instructions for onboarding that way. Thanks

Announcing the winners of the June Innovation Challenge
We had an extraordinary set of projects created during our June 2025 Innovation Challenge Hackathon. It’s inspiring to see what happens when our community comes together to solve for real world problems! 231 participants worked on 49 projects, showing the amazing capabilities they’ve developed through their own learning and coming together to collaborate as professional AI developers.

The Americas Azure team created this program to foster a diverse talent pool in partnership with a range of organizations supporting groups who are underrepresented in technology. For this event we had participants from Microsoft’s Black Partner Growth Initiative, Código Facilito, DIO, GenSpark, HACC, NASA Space Apps Chicago and Mountain View, Project Blue Mountain, Propel2Excel, TechBridge, and Women in Cloud.

Hackers were judged based on the performance of their team’s solution, pushing the boundaries of innovation, leveraging the full capabilities of Azure, and implementing Responsible AI. The winning projects solved for AI use cases that ranged from addressing the social impact of property inheritance issues to the technical challenges of working with unstructured data to managing wildfire risk.

AI for heirs’ property: Many communities struggle with issues around heirs’ property and losing property because of overdue taxes, blight, and other challenges. In order to nurture opportunity in under-resourced communities, local governments and nonprofits work to identify potentially at-risk properties, offer programmatic services to address legal and other issues, and fund and deploy projects. For the hackathon, we looked for solutions to strengthen and speed up this complicated work.

Handling unstructured data: As organizations experiment with techniques for augmenting Generative AI models, such as RAG, they can often see inconsistent results when trying to improve performance. A common issue is how they are chunking their data.
We asked hackers to create a tool or process for handling unstructured data in a way that results in more consistent model performance.

Wildfire resilience: Wildfires are growing in frequency and intensity, threatening ecosystems, infrastructure, and communities. These projects delivered solutions to help emergency responders, planners, and policymakers predict wildfire behavior, assess risks, and optimize response strategies—all in near real-time.

Every team who delivered a project has a lot to be proud of! We saw some great thinking, real imagination, and overall a community with a strong growth mindset—engineers ready to solve big problems. It’s also worth noting this is our first Innovation Challenge Hackathon where hackers created multi-agent solutions. The pace of AI is fast! Here are the judges' picks for the winning prizes.

First place ($10,000): InHeir.AI, an intelligent and secure legal tech platform developed to streamline property dispute management and analysis for legal professionals and local communities. The solution ensures data privacy, security, and accuracy by adhering to responsible AI practices, and responses are backed by established legal procedures.

Second place ($5,000): Contextual Jigsaw addresses a critical vulnerability in modern AI systems: how document segmentation for RAG (Retrieval-Augmented Generation) systems can systematically distort information, amplify biases, and mask malicious content. This project identifies and solves for the convergence of four interconnected threats that emerge when splitting documents into chunks, creating a perfect storm that compromises AI fairness, security, and reliability.

Heir Aid, an AI-powered platform designed to support legal professionals, nonprofits, and communities in resolving heirs’ property issues and protecting generational wealth. HeirAid streamlines the identification, analysis, and resolution of complex title and tax challenges, particularly in under-resourced neighborhoods.
Third place ($2,500):

Artemis Shield Wildfire Dashboard, an AI-powered, cloud-native command and control platform designed to transform wildfire management from a reactive struggle into a proactive, data-driven strategy.

Ignis Sentinels, a wildfire resilience platform powered by a multi-agent AI system working in near real-time. It helps emergency responders, planners, and policymakers predict fire behavior, assess threats to people and infrastructure, and coordinate smarter evacuations.

Ignis Map, AI-powered wildfire prediction and response through real-time geospatial intelligence.

We look forward to our next Innovation Challenge hackathon later this year! There will be new tools and ideas to put to the test, and we can’t wait to see what our community builds!

Protecting Oracle Keys with Azure Key Vault
Has anyone used Azure Key Vault to protect keys for on-premises Oracle databases? From what I can see, it isn't a direct integration but rather using Oracle Key Vault for the key management and then integrating OKV with Azure Key Vault as the HSM. Has anyone done this, and is it a supported configuration?

Learning Azure with Ofek – This time: Azure Monitor ☁️
Anyone working with any cloud platform understands that monitoring is no longer a luxury – it’s a vital part of every cloud architecture. Azure Monitor is one of the most powerful tools in the cloud, and especially within Azure – acting as a unified monitoring platform across environments (Multi-Cloud & On-Premises).

So what does Azure Monitor actually do? It monitors everything from the infrastructure level to the application level. It collects performance data, errors, and failures – and allows you to visualize them via charts, dashboards, and structured reports. Through Azure Alerts, it gives us the ability to trigger actions: “If X happens (e.g., someone shuts down a VM), then do Y (e.g., send me an email alert).” And of course, Action Groups – enabling automatic response or automation, whether it’s a security reaction or auto-scaling.

It’s important to understand the basics – we deal with two main types of data: Metrics and Logs.

📊 Metrics – performance charts over time (like CPU and RAM). For example, if I see a spike – let’s say I'm opening ticket sales for a Noa Kirel concert in Yarkon Park – I’d need to scale my servers to handle the load and avoid risking my production environment.

📋 Logs – tables of operational and diagnostic logs, typically collected automatically from Azure-deployed resources. You can filter, extract exactly what you need by resource/user/date, export to CSV, and analyze using KQL. And with this tool, we can even integrate with Power BI, build dashboards, and truly understand what’s going on in our cloud.

🔗 Supporting documentation: https://lnkd.in/dC_SfGk7

Lastly, I want to thank you – you give me the energy to keep documenting, recording, writing reviews and articles, and most importantly, being there with you in the community.
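The "if X happens, then do Y" alert pattern described above can be sketched with the Azure CLI. This is an illustrative sketch only: the resource group, VM, action group, and threshold are all hypothetical.

```shell
# Hypothetical resource names; adjust to your environment.
RG=my-rg
VM=my-vm

# Metric alert: notify an existing action group when average CPU tops 80%.
az monitor metrics alert create \
  --name high-cpu-alert \
  --resource-group $RG \
  --scopes $(az vm show --resource-group $RG --name $VM --query id --output tsv) \
  --condition "avg Percentage CPU > 80" \
  --action my-action-group \
  --description "Alert when average CPU exceeds 80 percent"
```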
Azure DevOps Server API - Create Pipeline is Non functional

Hi, The Azure DevOps Server API allows creation of a pipeline: https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/pipelines/create?view=azure-devops-server-rest-7.0#pipeline If you follow this link, you likely want to start from an existing pipeline object, so I did that. Easy enough: query a pipeline by ID, then call the endpoint above. You'll hit an error saying you need to specify a repoName, that it's null. Not sure why, since it's got an ID, but this works (it probably should just be added to the documentation; not a huge deal):

$pipelineObj.configuration.repository | Add-Member -NotePropertyName name -NotePropertyValue $reposName

The real issue is that I want to back up my pipelines from one server to another. I can easily update the repository ID to match the target server; that all works for a regular pipeline. But I have some pipelines whose YAML files are not on the default branch (typically named master). For those, every time I try to create the pipeline on the destination, I get the following error (redacted):

Invoke-RestMethod : {"$id":"1","innerException":null,"message":"File /<FILENAME> not found in repository <SERVER NAME>/_git/<REPO NAME> branch refs/heads/master version <COMMIT ID>.","typeName":"Microsoft.TeamFoundation.DistributedTask.Pipelines.YamlFileNotFoundException, Microsoft.TeamFoundation.DistributedTask.Pipelines.Yaml","typeKey":"YamlFileNotFoundException","errorCode":0,"eventId":3000}

There is no conceivable way to tell the creation call that I know this to be the case and that I want it to check a specific branch. There is a pipeline default-branch setting, and you can set it, but it has no effect; the creation still checks the repository's default branch and fails if the file doesn't exist there:

$pipelineObj.configuration.repository | Add-Member -NotePropertyName defaultBranch -NotePropertyValue $defaultBranch

Any idea why this restriction is there and how to bypass it?
thanks, Matt

Latest Remote Desktop client not starting on W10 LTSB 1607
Hello. We're using about 50 thin clients to connect to our AVD systems. The HP t530 thin clients are running Windows 10 Enterprise 2016 LTSB / ver 1607 / build 14393 / RS1 and have the Remote Desktop client for Windows (MSI) installed, with auto-update enabled. This all worked fine until the most recent releases of the Remote Desktop client. Starting with version 1.2.6186, the app simply doesn't start anymore. There's no GUI, no errors, nothing; just, for a few seconds, the spinning blue circle next to the cursor. It does create msrdcw_xxx.etl files in %temp%\DiagOutputDir\RdClientAutoTrace; however, I couldn't locate any obvious errors in them. Upgrading to 1.2.6187 does fix the problem temporarily (but that's probably because it's actually based on the older 1.2.6081 version). Upgrading to the latest releases 1.2.6188, 1.2.6227, and 1.2.6228 reintroduces the problem. It's not something in the thin-client image; I also did a fresh Win10 1607 install with the latest Windows Updates, same problem. Is there anything I can do or try, except blocking all future updates and staying on 1.2.6187?
Recent Blogs
- Encryption in Transit (EiT) overview: As organizations increasingly move to cloud environments, safeguarding data security both at rest and during transit is essential for protecting sensitive info... Jun 30, 2025
- We’re excited to announce the General Availability of workload orchestration, a new Azure Arc capability that simplifies how enterprises deploy and manage Kubernetes-based applications acros... Jun 30, 2025