Automation & Control
Azure Automation features, improvements and bugs
This is by no means meant as criticism, as I love the Azure Automation account product and its current features, but these are things I would love to see offered or fixed in the future.

Source control (I can only speak for GitHub, as that is what I use):

Bugs:
- Tags being overwritten/removed by source control, both on full syncs and on incremental syncs (already reported in case #2508010040002105).

Features:
- Runbooks deleted in source control are not deleted in the Automation account.
- Support for sync types other than PowerShell 5.1. (Personally, we will not consider upgrading to a newer version before source control is implemented for it.)
- Support for syncing the full repository instead of only a specific folder, i.e. recursive source control for easier organisation in repositories. I know we can set up multiple source control integrations in Azure Automation, but that seems a bit redundant and means more maintenance, since the source control integration expires after one year regardless of whether your PAT token is set to never expire.
- Support for syncing the synopsis/description, at least for PowerShell scripts, so it is grabbed directly from the given script and put into the description field. Just the output of Get-Help .\ScriptName.ps1.

Logging:

Bugs:
- From time to time, logs are displayed twice in a row. Say you get the first 10 entries on the "All logs" page and scroll down further; the same 10 entries are then repeated again and again, which can also be seen from the timestamps of the log entries. (No new network requests for logs are being made, so I believe this might be a bug in the JavaScript, without being 100% certain.) We see this most often while a runbook is still running, so it might be the log output stream that causes it. Just to provide a picture for reference without exposing anything sensitive, the bug can be seen from the timestamps in the attached screenshot.
- PowerShell 7 and above log outputs seem to contain some non-escaped ASCII characters, which makes the logs harder to read and also causes a single log object to be split into multiple log entries in Azure Automation log outputs. This seems to have been fixed since I last tested.

Features:
- Searching for a specific job ID in the general job list. Currently there is a workaround: go into a specific runbook, go to Jobs, press "Find job", and you can look up a job ID globally, but the UI is not updated correctly, as displayed in the screenshot. I would love to see a button here, or to be able to search for a job ID directly.
- Formatting log outputs so you can do multi-line output in a single log output entry, e.g. Write-Output "New`r`nLine", so the output entry contains multiple lines for more human-readable logs.

Runbook page:

Bugs:
- Searching for runbook names seems a bit buggy; as far as I have seen, there are three different results for the end user (the base screenshot being the initial view of all runbooks). One is that it is not able to find a runbook with that name (I have not been able to replicate this to get a picture of it). Another is that it displays a list of runbooks, none of which matches what you searched for. The third is that when you have searched for something and then clear your search, it does not return to the original view.

Features:
- Ability to go back to a previous job and re-run/restart it with the same parameters, a bit like the way you can restart a GitHub Actions run. (A workaround sketch follows below.)
Scheduling:

Features:
- More of a feature request, but adding the schedule for a runbook directly in the code would be awesome. This is something we currently emulate by adding a parameter that contains the scheduling information; a runbook then goes over all our runbooks every hour looking for this parameter, constructs the schedule if it does not exist, links the runbook to the schedule, and finally adds a tag stating whether the schedule is enabled or not (back to the source control issue of tags being removed). A sketch of this pattern follows below.
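A minimal sketch of that housekeeping pattern, assuming the hypothetical scheduling parameter asked for an hourly interval; the resource names are made up, but the cmdlets are standard Az.Automation:

$rg   = 'my-rg'                   # hypothetical names
$acct = 'my-automation-account'
$runbookName  = 'My-Runbook'
$scheduleName = "sched-$runbookName"

# Create and link the schedule only if it does not exist yet
$existing = Get-AzAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $acct |
    Where-Object Name -eq $scheduleName
if (-not $existing) {
    New-AzAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $acct `
        -Name $scheduleName -StartTime (Get-Date).AddHours(1) -HourInterval 1
    Register-AzAutomationScheduledRunbook -ResourceGroupName $rg -AutomationAccountName $acct `
        -RunbookName $runbookName -ScheduleName $scheduleName
}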
Hybrid workers:

Features:
- I would personally love the ability to pause a hybrid worker in a hybrid worker group. Why? We currently have four hybrid workers, all running Windows, with monthly patch windows; if a job hits a hybrid worker that is being patched, the job goes into a suspended state and is not picked up again. We could remove the hybrid worker from the group, but that would also remove the extension, which would be reinstalled when the worker is added back, and then we would hit this: https://learn.microsoft.com/en-us/azure/automation/troubleshoot/extension-based-hybrid-runbook-worker#scenario-runbooks-go-into-a-suspended-state-on-a-hybrid-runbook-worker-when-using-a-custom-account-on-a-server-with-user-account-control-uac-enabled This is an issue we originally started experiencing when we migrated from agent-based hybrid workers to extension-based ones due to the discontinuation of agent-based workers. Another great reason is troubleshooting something on a specific hybrid worker, or updating modules on a specific hybrid worker, as this cannot be done while the hybrid worker is still running jobs unless you use force, hit a time when it is not running jobs, or manually stop the service and once again end up with suspended jobs that are not picked up again.

Additional features that I would personally love to see offered:
- A front end for Azure Automation for end users (think self-service portal) as some kind of add-on feature, allowing a specific group of people to start a given runbook while supplying a more user-friendly front end for it, including more limitations per end-user group. I know there are already third-party solutions for this, and to be honest I almost created one myself on my last maternity leave, but my company chose not to pursue it further, as the statement is that we have one self-service platform, ServiceNow. See https://github.com/Mynster9361/Self-Service-Frontend-Azure-Automation just to give some inspiration if needed.
- RBAC permissions for individual runbooks (as far as I remember, this can already be done through the CLI).
- A general overview/management blade for managing webhooks and their associated runbooks. Currently there is no way to know which runbooks have an active/inactive webhook assigned to them; the only way to see this is to go into a runbook, open the Webhooks blade, and look whether there is one or not. Personally, I would love to see a blade on the general overview called "Webhooks" that looks similar to this table, maybe (a sketch of building such an overview with existing cmdlets follows this post):

  Runbook  | Name                         | Expiration       | Last triggered | Status
  ---------|------------------------------|------------------|----------------|---------
  Runbook1 | Custom_name_for_this webhook | 02/01/2022 16:00 |                | Enabled
  Runbook2 | webhook2                     | 11/11/2026 16:00 | Today          | Disabled
  Runbook3 | webhook3                     | 11/11/2027 16:00 | Today          | Enabled

  (Each runbook name would be clickable, linking directly to the runbook.) Instead of webhooks being a gentlemen's agreement on when you can enable them, when you shouldn't, naming, and so on, you would have one general overview of all webhooks, which would add value in terms of security and easier management of webhooks.

The things I see as most critical, or highest on my wish list, to name two I would like to see sooner rather than later:
- Source control definitely needs to be updated/revamped so it supports other languages/versions and does not remove tags. It would also be nice to be able to force it to follow source control, so that if I delete something that is in source control, it is also deleted in Azure Automation.
- Hybrid workers in maintenance mode, so a worker completes its running jobs and you are able to work on it, whether for bugs or just regular updates.
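As referenced above, a sketch of producing that webhook overview today with existing Az.Automation cmdlets (resource names are hypothetical); it lists every webhook in the account together with its runbook, expiry, last invocation, and enabled state:

# Enumerate all webhooks in the Automation account into a single overview table
Get-AzAutomationWebhook -ResourceGroupName 'my-rg' -AutomationAccountName 'my-automation-account' |
    Select-Object RunbookName, Name, ExpiryTime, LastInvokedTime, IsEnabled |
    Sort-Object RunbookName |
    Format-Table -AutoSize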
How to deploy n8n on Azure App Service and leverage the benefits provided by Azure

Lately, n8n has been gaining serious traction in the automation world—and it’s easy to see why. With its open-source core, visual workflow builder, and endless integration capabilities, it has become a favorite for developers and tech teams looking to automate processes without being locked into a single vendor. Given all the buzz, I thought it would be the perfect time to share a practical way to run n8n on Microsoft Azure using App Service. Why? Because Azure offers a solid, scalable, and secure platform that makes deployment easy, while still giving you full control over your container and configurations. Whether you're building a quick demo or setting up a production-ready instance, Azure App Service brings a lot of advantages to the table—like simplified scaling, integrated monitoring, built-in security features, and seamless CI/CD support. In this post, I’ll walk you through how to get your own n8n instance up and running on Azure—from creating the resource group to setting up environment variables and deploying the container. If you're into low-code automation and cloud-native solutions, this is a great way to combine both worlds.

The first step is to create our Resource Group (RG); in my case, I will name it "n8n-rg".

Now we proceed to create the App Service. At this point, it's important to select the appropriate configuration depending on your needs—for example, whether or not you want to include a database. If you choose to include one, Azure will handle the connections for you, and you can select from various types. In my case, I will proceed without a database.

Proceed to configure the instance details. First, select the instance name, the 'Publish' option, and the 'Operating System'. In this case, it is important to choose 'Publish: Container', set the operating system to Linux, and most importantly select the region closest to you or your clients.

Service Plan configuration. Here, you should select the plan based on your specific needs. Keep in mind that we are using a PaaS offering, which means that underlying compute resources like CPU and RAM are still being utilized. Depending on the expected workload, you can choose the most appropriate plan. Secondly—and very importantly—consider the features offered by each tier, such as redundancy, backup, autoscaling, custom domains, etc. In my case, I will use the Basic B1 plan.

In the Database section, we do not select any option. Remember that this will depend on your specific requirements.

In the Container section, under 'Image Source', select 'Other container registries'. For production environments, I recommend using Azure Container Registry (ACR) and pulling the n8n image from there.

Now we will configure the Docker Hub options. This step is related to the previous one, as the available options vary depending on the image source. In our case, we will use the public n8n image from Docker Hub, so we select 'Public' and proceed to fill in the required fields: the first being the server, and the second the image name. This step is very important—use the exact same values to avoid issues.

In the Networking section, we will select the values as shown in the image. This configuration will depend on your specific use case—particularly whether to enable Virtual Network (VNet) integration or not. VNet integration is typically used when the App Service needs to securely communicate with private resources (such as databases, APIs, or services) that reside within an Azure Virtual Network.
Since this is a demo environment, we will leave the default settings without enabling VNet integration.

In the 'Monitoring and Security' section, it is essential to enable these features to ensure traceability, observability, and additional security layers; this is considered a minimum requirement in production environments. At the very least, make sure to enable Application Insights by selecting 'Yes'. Finally, click on 'Create' and wait for the deployment process to complete.

Now we will 'stop' our Web App, as we need to make some preliminary modifications. To do this, go to the main overview page of the Web App and click on 'Stop'. On the same Web App overview page, navigate through the left-hand panel to the 'Settings' section. Once there, click on it and select 'Environment Variables'.

Environment variables are key-value pairs used to configure the behavior of your application without changing the source code. In the case of n8n, they are essential for defining authentication, webhook behavior, port configuration, timezone settings, and more. Environment variables in Azure Web Apps work the same way as they do anywhere else. In this case, we will add the variables required for n8n to operate properly. Note: the variable APP_SERVICE_STORAGE should only be modified by setting it to true.

Once the environment variables have been added, save them by clicking 'Apply' and confirming the changes; a confirmation dialog will appear to finalize the operation. Restart the Web App. This second startup may take longer than usual, typically around 5 to 7 minutes, as the environment initializes with the new configuration.

Now, as we can see, the application has loaded successfully, and we can start using our own n8n server hosted on Azure. As you can observe, it references the host configured in the App Service.

I hope you found this guide helpful and that it serves as a useful resource for deploying n8n on Azure App Service. If you have any questions or need further clarification, feel free to reach out—I'd be happy to help.
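For readers who prefer scripting the same walkthrough, here is a hedged PowerShell sketch. It assumes a recent Az.Websites module (for the -ContainerImageName parameter) and uses n8n's commonly documented environment variables, so treat all names and values as placeholders to adapt:

# Resource group, Linux Basic B1 plan, and the container-based Web App
New-AzResourceGroup -Name 'n8n-rg' -Location 'westeurope'
New-AzAppServicePlan -ResourceGroupName 'n8n-rg' -Name 'n8n-plan' `
    -Location 'westeurope' -Tier 'Basic' -WorkerSize 'Small' -Linux
New-AzWebApp -ResourceGroupName 'n8n-rg' -Name 'my-n8n-demo' -Location 'westeurope' `
    -AppServicePlan 'n8n-plan' -ContainerImageName 'docker.io/n8nio/n8n'

# Typical n8n settings; WEBSITES_ENABLE_APP_SERVICE_STORAGE is likely the setting
# the post refers to as APP_SERVICE_STORAGE
Set-AzWebApp -ResourceGroupName 'n8n-rg' -Name 'my-n8n-demo' -AppSettings @{
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = 'true'   # persist /home across restarts
    WEBSITES_PORT = '5678'                         # port the n8n container listens on
    N8N_PORT      = '5678'
    N8N_PROTOCOL  = 'https'
    N8N_HOST      = 'my-n8n-demo.azurewebsites.net'
    WEBHOOK_URL   = 'https://my-n8n-demo.azurewebsites.net/'
}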
Creating and Using an Azure Automation Custom Runtime Environment

A custom runtime environment is a way of defining a specific job execution environment for Azure Automation runbooks, including Microsoft Graph PowerShell SDK runbooks. In this article, we create a new environment for PowerShell V7.4, load in some SDK modules, switch a runbook from a system-generated environment, and run some code. https://office365itpros.com/2025/08/29/custom-runtime-environment/
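As a rough illustration of what the linked article configures through the portal, here is a sketch using Invoke-AzRestMethod. The api-version and payload shape follow the preview REST surface as I understand it and may well differ, so verify against the current docs before relying on it:

# Create a custom runtime environment (PowerShell 7.4) via the preview REST API
$path = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Automation" +
        "/automationAccounts/<account>/runtimeEnvironments/PS74-Graph" +
        "?api-version=2023-05-15-preview"   # assumption: preview api-version
$payload = @{
    location   = 'westeurope'
    properties = @{
        runtime         = @{ language = 'PowerShell'; version = '7.4' }
        defaultPackages = @{ Az = '11.2.0' }   # assumption: default packages pinned like this
    }
} | ConvertTo-Json -Depth 5
Invoke-AzRestMethod -Method PUT -Path $path -Payload $payload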
Scaling Smart with Azure: Architecture That Works

Hi Tech Community! I’m Zainab, currently based in Abu Dhabi and serving as Vice President of Finance & HR at Hoddz Trends LLC, a global tech solutions company headquartered in Arkansas, USA. While I lead on strategy, people, and financials, I also roll up my sleeves when it comes to tech innovation. In this discussion, I want to explore the real-world challenges of scaling systems with Microsoft Azure. From choosing the right architecture to optimizing performance and cost, I’ll be sharing insights drawn from experience, and I’d love to hear yours too. Whether you're building from scratch, migrating legacy systems, or refining deployments, let’s talk about what actually works.
Error Running Script in Runbook with System Assigned Managed Identity

Hello everyone, I could use some assistance, please. I'm encountering an error when trying to run a script within a runbook. I'm using PowerShell 5.1 with a system-assigned managed identity. The script works fine via PowerShell outside of Azure, without using the managed identity.

Error:

System.Management.Automation.ParameterBindingException: Cannot process command because of one or more missing mandatory parameters: Credential.
   at System.Management.Automation.CmdletParameterBinderController.PromptForMissingMandatoryParameters(Collection`1 fieldDescriptionList, Collection`1 missingMandatoryParameters)
   at System.Management.Automation.CmdletParameterBinderController.HandleUnboundMandatoryParameters

I am using this script:

Connect-ExchangeOnline -ManagedIdentity -Organization <domain removed for privacy reasons>

# Specify the user's mailbox identity
$mailboxIdentity = "<email address removed for privacy reasons>"

# Get mailbox configuration and statistics for the specified mailbox
$mailboxConfig = Get-Mailbox -Identity $mailboxIdentity
$mailboxStats = Get-MailboxStatistics -Identity $mailboxIdentity

# Check if TotalItemSize and ProhibitSendQuota are not null and extract the sizes
if ($mailboxStats.TotalItemSize -and $mailboxConfig.ProhibitSendQuota) {
    $totalSizeBytes = $mailboxStats.TotalItemSize.Value.ToString().Split("(")[1].Split(" ")[0].Replace(",", "") -as [double]
    $prohibitQuotaBytes = $mailboxConfig.ProhibitSendQuota.ToString().Split("(")[1].Split(" ")[0].Replace(",", "") -as [double]

    # Convert sizes from bytes to gigabytes
    $totalMailboxSize = $totalSizeBytes / 1GB
    $mailboxWarningQuota = $prohibitQuotaBytes / 1GB

    # Check if the mailbox size exceeds 90% of the warning quota
    if ($totalMailboxSize -ge ($mailboxWarningQuota * 0.0)) {
        # Send an email notification
        $emailBody = "The mailbox $($mailboxIdentity) has reached $($totalMailboxSize) GB, which exceeds 90% of the warning quota."
        Send-MailMessage -To "<email address removed for privacy reasons>" -From "<email address removed for privacy reasons>" -Subject "Mailbox Size Warning" -Body $emailBody -SmtpServer "smtp.office365.com" -Port 587 -UseSsl -Credential (Get-Credential)
    }
}
else {
    Write-Host "The required values (TotalItemSize or ProhibitSendQuota) are not available."
}
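For context on where the error likely comes from: Send-MailMessage is being handed (Get-Credential), which tries to prompt interactively, and sandboxed runbooks have no console to prompt on, so the Credential parameter ends up unbound. One common pattern is to store the credential as an Automation credential asset and read it with the built-in Get-AutomationPSCredential; this is a sketch, and the asset name and addresses below are hypothetical:

# Inside the runbook: read a stored credential asset instead of prompting
$smtpCred  = Get-AutomationPSCredential -Name 'SmtpCredential'   # hypothetical asset name
$to        = 'ops@contoso.com'                                   # placeholder addresses
$from      = 'automation@contoso.com'
$emailBody = 'Mailbox size warning text'
Send-MailMessage -To $to -From $from -Subject 'Mailbox Size Warning' -Body $emailBody `
    -SmtpServer 'smtp.office365.com' -Port 587 -UseSsl -Credential $smtpCred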
Azure DevOps REST API - tag DeploymentGroups' target

Hello everyone, I am trying to set up a function in PowerShell to be able to set tags on specific targets of a deployment group, and for that I am using this documentation page: https://learn.microsoft.com/en-us/rest/api/azure/devops/distributedtask/targets/update?view=azure-devops-rest-7.0&tabs=HTTP#request-body

I created the request body as described on the page, like below:

{
  "id": 541,
  "tags": [
    "tag1-backendWithDb",
    "tag1-backendWithDb-active-node",
    "tag2-backendWithDb-database",
    "tag2-backendWithDb",
    "tag2-backendWithDb-active-node",
    "tag3-blazor",
    "tag3-blazor-active-node",
    "tag4-yarp",
    "tag4-yarp-active-node"
  ]
}

Then I run the following command:

Invoke-RestMethod -Method Patch -Uri "$baseurl/distributedtask/deploymentgroups/$($DGid)/targets?api-version=6.0-preview.1" -Credential $cred -Body ($body | ConvertTo-Json) -ContentType 'Application/json'

But then I get an error like this:

Invoke-RestMethod: {
  "$id": "1",
  "innerException": null,
  "message": "Value cannot be null.\r\nParameter name: machinesToUpdate",
  "typeName": "System.ArgumentNullException, mscorlib",
  "typeKey": "ArgumentNullException",
  "errorCode": 0,
  "eventId": 0
}

The problem is that the documentation does not specify any parameter named 'machinesToUpdate'. What is it that I am missing here?
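One possibility worth checking (my assumption, not a confirmed diagnosis): the Targets - Update endpoint expects a JSON array of target objects, and a null 'machinesToUpdate' error is consistent with the service failing to deserialize a single JSON object into that list. A sketch that forces the array to survive serialization:

# Build the body as an array of targets; using -InputObject prevents the pipeline
# from unwrapping a one-element array before ConvertTo-Json sees it
$targets = @(
    @{ id = 541; tags = @('tag1-backendWithDb', 'tag1-backendWithDb-active-node') }
)
$json = ConvertTo-Json -InputObject $targets -Depth 5
Invoke-RestMethod -Method Patch -Uri "$baseurl/distributedtask/deploymentgroups/$($DGid)/targets?api-version=6.0-preview.1" -Credential $cred -Body $json -ContentType 'application/json'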
Comparison of Azure Cloud Sync and Traditional Entra Connect Sync

Introduction

In the evolving landscape of identity management, organizations face a critical decision when integrating their on-premises Active Directory (AD) with Microsoft Entra ID (formerly Azure AD). Two primary tools are available for this synchronization:

- Traditional Entra Connect Sync (formerly Azure AD Connect)
- Azure Cloud Sync

While both serve the same fundamental purpose, bridging on-prem AD with cloud identity, they differ significantly in architecture, capabilities, and ideal use cases.

Architecture & Setup

Entra Connect Sync is a heavyweight solution. It installs a full synchronization engine on a Windows Server, often backed by SQL Server. This setup gives administrators deep control over sync rules, attribute flows, and filtering. Azure Cloud Sync, on the other hand, is lightweight. It uses a cloud-managed agent installed on-premises, removing the need for SQL Server or complex infrastructure. The agent communicates with Microsoft Entra ID, and most configuration is handled in the cloud portal.

For organizations with complex hybrid setups (e.g., Exchange hybrid, device management), is Cloud Sync too limited?
Azure Form Recognizer Redaction Issue with Scanned PDFs and Page Size Variations

Hi all, I’m working on a PDF redaction process using Azure Form Recognizer and Azure Functions. The flow works well in most cases — I extract the text and bounding box coordinates and apply redaction based on that. However, I’m facing an issue with scanned PDFs or PDFs with slightly different page sizes. In these cases, the redaction boxes don’t align properly — they either miss the text or appear slightly off (above or below the intended area). It seems like the coordinate mapping doesn't match accurately when the document isn't a standard A4 size or has DPI inconsistencies.

Has anyone else encountered this? Any suggestions on:
- Adjusting for page size or DPI dynamically?
- Mapping normalized coordinates correctly for scanned PDFs?

Appreciate any help or suggestions!
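Not the asker's code, but a sketch of the scaling idea that usually addresses this: the layout result reports each page's width, height, and unit ('inch' for PDF input, 'pixel' for images), so deriving scale factors from the reported page size, rather than assuming A4 at a fixed DPI, keeps the boxes aligned on odd-sized or scanned pages. All values below are placeholders:

# Actual page size in PDF points, read from the PDF itself
$pageWidthPts  = 612.0
$pageHeightPts = 792.0
# Page size as reported by the OCR result for the same page (unit: 'inch' for PDFs)
$ocrWidth = 8.5; $ocrHeight = 11.0
# Scale factors: PDF points per reported unit
$scaleX = $pageWidthPts  / $ocrWidth
$scaleY = $pageHeightPts / $ocrHeight
# Map one bounding-box corner; OCR origin is top-left, PDF origin is bottom-left
$x = 1.25; $y = 3.40
$xPts = $x * $scaleX
$yPts = $pageHeightPts - ($y * $scaleY)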
"Authorization failed" error for Logic App writing a comment to Sentinel incident

I have created a managed identity named id-sentinel-playbook that is used in two Logic Apps. Both Logic Apps retrieve information from different external APIs and write the results as comments into the Sentinel incident. The managed identity id-sentinel-playbook has been assigned two roles: Microsoft Sentinel Responder and Microsoft Sentinel Automation Contributor (see screenshot). However, when one of the Logic Apps transacts with Sentinel, such as checking a watchlist or writing a comment into a Sentinel incident, there is a 403 Forbidden error (see screenshot). It works fine when I use my Azure account as the connection for the Logic App. The other Logic App also works fine when the same managed identity id-sentinel-playbook is used as the connection to Sentinel. I have compared the identities of both Logic Apps and they are the same.

I have also searched online for existing answers, and they all point to the managed identity having insufficient roles. However, id-sentinel-playbook already has the Microsoft Sentinel Responder role, and strangely, the other Logic App that writes comments into the Sentinel incident works. Here is the screenshot of the Logic App having the user-managed identity; the other Logic App has the same. Please help. I spent two days investigating this and have no more ideas on how to investigate further 😓.
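Not a confirmed fix, but two things worth checking from PowerShell: that the role assignments really sit at (or above) the Sentinel workspace scope for this exact principal, and that the failing Logic App's Sentinel API connection is actually configured to authenticate with the managed identity rather than an older connection. A small diagnostic sketch (the display name is taken from the post, the cmdlets are standard Az):

# List every role the managed identity holds, with the scope of each assignment
$sp = Get-AzADServicePrincipal -DisplayName 'id-sentinel-playbook'
Get-AzRoleAssignment -ObjectId $sp.Id |
    Select-Object RoleDefinitionName, Scope |
    Format-Table -AutoSize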
Azure network security perimeter with storage accounts and Runbooks

I know this is a preview feature, and I don't know if it will be fixed in the future. The problem arises when you try to secure traffic between Azure serverless runbooks and a storage account. No matter what configuration you use, the runbook accesses the storage account from a 10.x.x.x IP. That means you can't secure traffic using storage account firewall rules, since private IPs are not allowed. I thought that with Azure's network security perimeter this would be fixed, since you can put your storage inside it and specify that only resources from the subscription are allowed access. But no, it still doesn't work. Is Microsoft aware of this issue? I know you can use hybrid workers to get a public IP and so on, but that destroys the power of runbooks if you can't use the serverless option. Thanks for your time!