Recent Discussions
Azure Pipelines extension doesn't give any error, nor is it able to show dynamic dropdown values
I created an extension, pushed it to the Marketplace, and then used it in my org; all of this went smoothly. Then, when I started building a pipeline, at the TASK step, when I choose my extension it populates the field that has pre-defined options, but when it comes to the dynamic one it says "Not Found", i.e. it is empty.

Details: the custom step has 3 fields.
Category - Cars [pre-defined option list]
Color - Blue [pre-defined option list]
Car List - this uses the endpoint https://gist.githubusercontent.com/Satyabsai/b3970e2c3d229de2c70f1def3007ccfc/raw/02dc6f7979a83287adcb6eeecddb5575bef3516e/data.json

******************** TASK.JSON file ********************

{
  "id": "d9bafed3-2b89-4a4e-89b8-21a3d8a4f1d3",
  "name": "TestExecutor",
  "friendlyName": "Execute",
  "description": "Executes",
  "helpMarkDown": "",
  "category": "Test",
  "author": "s4legen",
  "version": { "Major": 5, "Minor": 4, "Patch": 0 },
  "instanceNameFormat": "Execute Test Suite $(Carlist)",
  "inputs": [
    {
      "name": "category",
      "type": "pickList",
      "defaultValue": "web",
      "label": "Category",
      "required": true,
      "helpMarkDown": "Select the category",
      "options": { "mobile": "car", "web": "truck", "api": "Plan" }
    },
    {
      "name": "Color",
      "type": "pickList",
      "defaultValue": "nonProd",
      "label": "Color",
      "required": true,
      "helpMarkDown": "Select the color",
      "options": { "nonProd": "Blue", "prod": "Red" }
    },
    {
      "name": "Carlist",
      "type": "pickList",
      "defaultValue": "BMWX7",
      "label": "Carlist",
      "required": true,
      "helpMarkDown": "Select the list to execute",
      "properties": { "EditableOptions": "true", "Selectable": "true", "Id": "CarInput" },
      "loadOptions": {
        "endpointUrl": "https://gist.githubusercontent.com/Satyabsai/b3970e2c3d229de2c70f1def3007ccfc/raw/02dc6f7979a83287adcb6eeecddb5575bef3516e/data.json",
        "resultSelector": "jsonpath:$[*]",
        "itemPattern": "{ \"value\": \"{value}\", \"displayValue\": \"{displayValue}\" }"
      }
    }
  ],
  "execution": { "Node16": { "target": "index.js" } },
  "messages": { "TestSuiteLoadFailed": "Failed to load test from endpoint. Using default options." }
}

******************** INDEX.JS file ********************

const tl = require('azure-pipelines-task-lib/task');
const axios = require('axios');

const TEST_ENDPOINT = 'https://gist.githubusercontent.com/Satyabsai/b3970e2c3d229de2c70f1def3007ccfc/raw/02dc6f7979a83287adcb6eeecddb5575bef3516e/data.json';

// Fetch the dynamic options for the Carlist picklist from the gist endpoint.
async function getValue(field) {
  if (field === 'Carlist') {
    try {
      const response = await axios.get(TEST_ENDPOINT, { timeout: 5000 });
      return {
        options: response.data.map(item => ({
          value: item.value,
          displayValue: item.displayValue
        })),
        properties: { "id": "CarlistDropdown" }
      };
    } catch (error) {
      // Must match the message key declared in task.json ("TestSuiteLoadFailed").
      tl.warning(tl.loc('TestSuiteLoadFailed'));
    }
  }
  return null;
}

async function run() {
  try {
    const color = tl.getInput('Color', true);
    const category = tl.getInput('category', true);
    const carlist = tl.getInput('Carlist', true);
    // Post the selected inputs to the execution endpoint.
    const result = await axios.post(tl.getInput('clicQaEndpoint'), { carlist, category, color }, { timeout: 10000 });
    tl.setResult(tl.TaskResult.Succeeded, `Execution ID: ${result.data.executionId}`);
  } catch (err) {
    tl.setResult(tl.TaskResult.Failed, err.message);
  }
}

module.exports = { run, getValue };

Can someone tell me what JSON response is acceptable to Azure to populate a dropdown dynamically when the source is an API?
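As an aside, I don't believe "loadOptions" is part of the documented task.json schema; dynamic picklists are normally wired up through a "dataSourceBindings" entry that targets the input and pulls from a service endpoint. A minimal sketch of what that might look like for this input (the connectedServiceName input and the exact template are my assumptions, not taken from the post):

"dataSourceBindings": [
  {
    "target": "Carlist",
    "endpointId": "$(connectedServiceName)",
    "endpointUrl": "{{endpoint.url}}",
    "resultSelector": "jsonpath:$[*]",
    "resultTemplate": "{ \"Value\" : \"{{{value}}}\", \"DisplayValue\" : \"{{{displayValue}}}\" }"
  }
]

With a binding like that, a plain JSON array such as [ { "value": "bmwx7", "displayValue": "BMW X7" } ] is what the jsonpath selector and template expect; each element becomes one dropdown entry.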
In Azure DevOps, how to view all child work items for a dependent feature?

I have an Epic with a few features. In one of the features, I have a user story that has a related link to other features. Is it possible to see all the features and their child user stories, tasks, and bugs that are open, for all the associated features?

Epic -> Feature 1 -> User stories -> Tasks, Bugs
Epic -> Feature 2 -> User stories -> Tasks, Bugs
Epic -> Feature 3 -> User stories -> Tasks, Bugs
Epic -> Feature 4 -> Special User story (with relation to Feature 1, 2) -> Tasks, Bugs

In the view I want to see all the features (those associated to the Special User story) and their children:

Epic -> Feature 1 -> User stories, Tasks, Bugs
Epic -> Feature 2 -> User stories, Tasks, Bugs
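Not from the thread, but one way to pull such a view programmatically is a recursive WIQL link query against the REST API. A sketch in Python; the organization, project, and PAT are placeholders, and note that the "related" link from the Special User story to Features 1 and 2 would need a second query on System.LinkTypes.Related, since MODE (Recursive) only walks parent/child links:

import requests

ORG = "https://dev.azure.com/yourorg"  # placeholder
PROJECT = "YourProject"                # placeholder
PAT = "..."                            # personal access token

# Tree query: every Feature and all open work items beneath it.
wiql = {
    "query": (
        "SELECT [System.Id], [System.Title], [System.WorkItemType], [System.State] "
        "FROM WorkItemLinks "
        "WHERE ([Source].[System.WorkItemType] = 'Feature') "
        "AND ([System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Forward') "
        "AND ([Target].[System.State] <> 'Closed') "
        "MODE (Recursive)"
    )
}

resp = requests.post(f"{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0", json=wiql, auth=("", PAT))
resp.raise_for_status()
for rel in resp.json().get("workItemRelations", []):
    source = rel["source"]["id"] if rel.get("source") else "root"
    print(source, "->", rel["target"]["id"])

In the web UI, the closest built-in equivalent is a "Tree of work items" query filtered to Feature at the top level.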
How to archive diagnostic logs sent to storage account

I need help understanding storage append blobs created by diagnostic settings. When a diagnostic setting is configured to log to a storage account, the logs are created as append blobs. I have compliance requirements that mean I need to retain these blobs in immutable storage for 6 years; however, it seems I cannot use the blob lifecycle management feature to change the access tier of the append blobs to the archive tier, as it is only supported for block blobs. This page states "Setting the access tier is only allowed on Block Blobs. They are not supported for Append and Page Blobs.": https://learn.microsoft.com/en-au/azure/storage/blobs/access-tiers-overview

I feel like the lifecycle management feature is often touted as the answer to changing the access tier for long-term storage scenarios, but it seems that it does not even work with diagnostic logs, which is pretty baffling. How does Microsoft recommend moving diagnostic logs in a storage account to the archive tier? The only answer I can see would be to implement an Azure Function or Logic App to read each blob as it's written and write it back to another storage account as a block blob. But then, how do you know when the new file has finished being written? Never mind the fact that this violates my immutability requirement.
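A hedged sketch of the copy-to-block-blob workaround described above, using the azure-storage-blob Python SDK (connection strings and container names are placeholders). Diagnostic settings write one append blob per hour (the PT1H.json layout), so a common heuristic is to copy only blobs whose hour window has already closed, which also answers the "has it finished being written?" question in most cases:

from azure.storage.blob import BlobServiceClient, StandardBlobTier

src = BlobServiceClient.from_connection_string("<source-conn-string>")
dst = BlobServiceClient.from_connection_string("<archive-conn-string>")

src_container = src.get_container_client("insights-logs-auditlogs")  # placeholder
dst_container = dst.get_container_client("archived-logs")            # placeholder

for blob in src_container.list_blobs():
    # Read the append blob and re-upload it as a block blob so the
    # Archive tier (and an immutability policy on the container) can apply.
    data = src_container.download_blob(blob.name).readall()
    dst_container.upload_blob(
        name=blob.name,
        data=data,
        overwrite=True,
        standard_blob_tier=StandardBlobTier.ARCHIVE,
    )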
Azure Stack HCI VM resource under another subscription

Hello, I created an HCI VM under another subscription (using a script; the GUI doesn't support this). The VM and related resources are under this subscription in a separate resource group. Unfortunately, the details of the machine are not shown. This VM is not listed under the HCI cluster's Virtual Machines or under Azure VMs (I only see it under the resources in the RG). Do you have any idea what the problem is? As I read it, this is a supported configuration (or is it not?). Thx
Authentication deadlock

I got a Microsoft 365 developer account and a sandbox as well. Many months ago it asked me to configure 2FA, which I did using the Microsoft Authenticator app on Android. I also had other 2FA set up on the same device for some work accounts. Later, somehow my sandbox account got deleted or overwritten in the authenticator app on my phone, and I haven't been able to log in to my Office 365 sandbox ever since. Every flow I try asks me to use the authenticator app, but access to the authenticator for 2FA was lost due to an app error. Our company's IT department said they can't do anything about it. I tried to delete the profile, but when I recreated it, Microsoft gave back the same sandbox, which was already not working. My employer spends a good deal of money on Microsoft and it's very upsetting to get such treatment from Microsoft. This account is needed for my office work. Help appreciated.
How to deploy n8n on Azure App Service and leverage the benefits provided by Azure

Lately, n8n has been gaining serious traction in the automation world—and it's easy to see why. With its open-source core, visual workflow builder, and endless integration capabilities, it has become a favorite for developers and tech teams looking to automate processes without being locked into a single vendor. Given all the buzz, I thought it would be the perfect time to share a practical way to run n8n on Microsoft Azure using App Service. Why? Because Azure offers a solid, scalable, and secure platform that makes deployment easy, while still giving you full control over your container and configurations. Whether you're building a quick demo or setting up a production-ready instance, Azure App Service brings a lot of advantages to the table—like simplified scaling, integrated monitoring, built-in security features, and seamless CI/CD support. In this post, I'll walk you through how to get your own n8n instance up and running on Azure—from creating the resource group to setting up environment variables and deploying the container. If you're into low-code automation and cloud-native solutions, this is a great way to combine both worlds.

The first step is to create our Resource Group (RG); in my case, I will name it "n8n-rg".

Now we proceed to create the App Service. At this point, it's important to select the appropriate configuration depending on your needs—for example, whether or not you want to include a database. If you choose to include one, Azure will handle the connections for you, and you can select from various types. In my case, I will proceed without a database.

Proceed to configure the instance details. First, select the instance name, the 'Publish' option, and the 'Operating System'. In this case, it is important to choose 'Publish: Container', set the operating system to Linux, and most importantly select the region closest to you or your clients.

Service Plan configuration. Here, you should select the plan based on your specific needs. Keep in mind that we are using a PaaS offering, which means that underlying compute resources like CPU and RAM are still being utilized. Depending on the expected workload, you can choose the most appropriate plan. Secondly—and very importantly—consider the features offered by each tier, such as redundancy, backup, autoscaling, custom domains, etc. In my case, I will use the Basic B1 plan.

In the Database section, we do not select any option. Remember that this will depend on your specific requirements.

In the Container section, under 'Image Source', select 'Other container registries'. For production environments, I recommend using Azure Container Registry (ACR) and pulling the n8n image from there.

Now we will configure the Docker Hub options. This step is related to the previous one, as the available options vary depending on the image source. In our case, we will use the public n8n image from Docker Hub, so we select 'Public' and proceed to fill in the required fields: the first being the server, and the second the image name. This step is very important—use the exact same values to avoid issues.

In the Networking section, we will select the values as shown in the image. This configuration will depend on your specific use case—particularly whether to enable Virtual Network (VNet) integration or not. VNet integration is typically used when the App Service needs to securely communicate with private resources (such as databases, APIs, or services) that reside within an Azure Virtual Network.
Since this is a demo environment, we will leave the default settings without enabling VNet integration.

In the 'Monitoring and Security' section, it is essential to enable these features to ensure traceability, observability, and additional security layers. This is considered a minimum requirement in production environments. At the very least, make sure to enable Application Insights by selecting 'Yes'. Finally, click on 'Create' and wait for the deployment process to complete.

Now we will 'stop' our Web App, as we need to make some preliminary modifications. To do this, go to the main overview page of the Web App and click on 'Stop'.

On the same Web App overview page, navigate through the left-hand panel to the 'Settings' section. Once there, click on it and select 'Environment Variables'. Environment variables are key-value pairs used to configure the behavior of your application without changing the source code. In the case of n8n, they are essential for defining authentication, webhook behavior, port configuration, timezone settings, and more. Environment variables within Azure Web Apps function the same way as they do outside of Azure. In this case, we will add the variables required for n8n to operate properly; a sketch of a typical set is shown at the end of this post. Note: the variable APP_SERVICE_STORAGE (WEBSITES_ENABLE_APP_SERVICE_STORAGE) should only be modified by setting it to true.

Once the environment variables have been added, proceed to save them by clicking 'Apply' and confirming the changes. A confirmation dialog will appear to finalize the operation.

Restart the Web App. This second startup may take longer than usual, typically around 5 to 7 minutes, as the environment initializes with the new configuration.

Now, as we can see, the application has loaded successfully, and we can start using our own n8n server hosted on Azure. As you can observe, it references the host configured in the App Service. I hope you found this guide helpful and that it serves as a useful resource for deploying n8n on Azure App Service. If you have any questions or need further clarification, feel free to reach out—I'd be happy to help.
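As a reference, here is a sketch of a typical set of app settings for n8n on App Service. Treat every name and value other than WEBSITES_ENABLE_APP_SERVICE_STORAGE as an assumption to verify against the n8n documentation; it is applied here with the azure-mgmt-web Python SDK, and the subscription and app names are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import StringDictionary

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Assumed values for a typical n8n container; verify against the n8n docs.
settings = StringDictionary(properties={
    "WEBSITES_ENABLE_APP_SERVICE_STORAGE": "true",  # keep workflow data on persistent storage
    "WEBSITES_PORT": "5678",                        # assumed: n8n's default port
    "N8N_PORT": "5678",                             # assumed
    "N8N_PROTOCOL": "https",                        # assumed
    "N8N_HOST": "<app-name>.azurewebsites.net",     # assumed placeholder
    "WEBHOOK_URL": "https://<app-name>.azurewebsites.net/",  # assumed placeholder
    "GENERIC_TIMEZONE": "UTC",                      # assumed
})

client.web_apps.update_application_settings("n8n-rg", "<app-name>", settings)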
Azure User Expresses Concern

A customer opened ticket SR#2407190040010082 because their Consumption-SKU APIM service was stuck updating: "Now that the service has exited that 'updating' status I am able to resume working with it. The concern I want to share with you is with how the system responds to a certificate error and gets stuck in that 'updating' state. We know that network and login activities can fail on occasion. When APIM responds by getting stuck in that state, it cannot be updated and it cannot be deleted and recreated. This issue lasted for a day before APIM eventually emerged from that state, for reasons I am unaware of. I was powerless and had to keep going back to check. Yes, this case is resolved, but I hope that this feedback can be shared with the team in the hope that a fix or enhancement to better handle this situation can be implemented."
SSL certificate problem while doing GIT pull from Azure DevOps Repos

We are using a proxy server that does SSL inspection of traffic and thus replaces the cert with one that it issues in the process. That cert is issued by the cert authority on the proxy itself. This is fairly common with modern proxies. But users are getting the following error while doing a git pull:

git pull
fatal: unable to access 'https://ausgov.visualstudio.com/Project/_git/Repo': SSL certificate problem: self-signed certificate in certificate chain

Do I need to import the proxy CA's issuing cert somewhere in the DevOps portal to resolve this, or does the SSL inspection need to be removed? Has anybody got it to work with proxy inspection still turned on?
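Not from the thread, but the usual fix is on the client side rather than in Azure DevOps: export the proxy's issuing CA chain to a PEM file and tell Git to trust it. A sketch (the bundle path is a placeholder):

# Trust the proxy CA for all Git HTTPS traffic...
git config --global http.sslCAInfo /path/to/proxy-ca-bundle.pem

# ...or scope it to the Azure DevOps host only.
git config --global http.https://ausgov.visualstudio.com/.sslCAInfo /path/to/proxy-ca-bundle.pem

This keeps SSL inspection turned on; the server-side certificate in Azure DevOps is not something you can (or need to) change for this error.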
Workspace failure

Hi Community, I had my Databricks workspace up and running, managed through Terraform, with encryption enabled through a CMK. There were some updates to the code, so I ran terraform plan; one of the key changes (a replace) it showed me was azurerm_role_assignment.storage_identity_kv_access (module.workspace.azurerm_role_assignment.storage_identity_kv_access). The Terraform run went on for 30 minutes, the workspace stayed in deployment for a long time, and then it ultimately failed. Since not all the changes had been applied, I reapplied and got this error: "Performing CreateOrUpdate: unexpected status 400 (400 Bad Request) with error: InvalidEncryptionConfiguration: Configure encryption for workspace at creation is not allowed, configure encryption once workspace is created and key vault access policies are added". I applied once more and the Terraform run succeeded, but I can see in the Azure portal that the workspace is in a failed state. Yet if I go to the Databricks account console I can see the workspace as running, and inside the workspace I am able to start clusters and execute some queries. I am not able to launch the workspace from the Azure portal, and I'm not sure whether there will be other issues because of this. Could anyone help me resolve this issue? Let me know if you need anything further to investigate.
Recent Logic Apps Failures with Defender ATP Steps – "TimeGenerated" No Longer Recognized

Hi everyone, I've recently encountered an issue with Logic Apps failing on Defender ATP steps. Requests containing the TimeGenerated column no longer work—the column seems to be unrecognized. My code hasn't changed at all, and the same queries run successfully in Defender 365's Advanced Hunting. For example, this basic KQL query:

DeviceLogonEvents
| where TimeGenerated >= ago(30d)
| where LogonType != "Local"
| where DeviceName !contains ".fr"
| where DeviceName !contains "shared-"
| where DeviceName !contains "gdc-"
| where DeviceName !contains "mon-"
| distinct DeviceName

now throws the error: "Failed to resolve column or scalar expression named 'TimeGenerated'. Fix semantic errors in your query." Removing TimeGenerated makes the query work again, but this isn't a viable solution. Notably, the identical query still functions in Defender 365's Advanced Hunting UI. This issue started affecting a Logic App that runs weekly—it worked on May 11th but failed on May 18th.

Questions:
Has there been a recent schema change or deprecation of TimeGenerated in Defender ATP's KQL for Logic Apps?
Is there an alternative column or syntax we should use now?
Are others experiencing this?

Any insights or workarounds would be greatly appreciated!
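One thing worth trying (an educated guess, not a confirmed root cause): the native Advanced Hunting schema names its event-time column Timestamp; TimeGenerated is the name used by the Log Analytics/Sentinel copies of these tables, and the Advanced Hunting UI accepts it as an alias. If the connector stopped honoring that alias, rewriting the query against the native column should work:

DeviceLogonEvents
| where Timestamp >= ago(30d)
| where LogonType != "Local"
| where DeviceName !contains ".fr"
| where DeviceName !contains "shared-"
| where DeviceName !contains "gdc-"
| where DeviceName !contains "mon-"
| distinct DeviceName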
ExpressRoute Gateway routing changes

I have an ExpressRoute gateway that shows some route changes in the metrics. I'd like to see which routes changed, but I can't find them. I tried the "BGP route updates" query under Monitoring > Logs; however, it says "No results found from the specified time range", and I am in the right time range. Can I see which routes changed another way? Should I be able to view the route changes the way I was trying?
"Invalid JWT" Error When Sending Messages from Azure Bot to Skype User

I'm encountering an "Invalid JWT" error when trying to send a non-reply message from an Azure Bot to a Skype user, despite using what appears to be a valid token. Here's a breakdown of my setup: I successfully generate an access token using OAuth client credentials for the Microsoft Bot Framework. I create a conversation ID successfully, but when I attempt to send a message using this ID, I receive a 401 error with "Invalid JWT." Here is the relevant part of my code:

import requests

# Setup for token generation
service_url = "https://smba.trafficmanager.net/apis"
token_url = f'https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token'
token_headers = {'Content-Type': 'application/x-www-form-urlencoded'}
token_payload = {
    'grant_type': 'client_credentials',
    'client_id': app_id,
    'client_secret': app_password,
    'scope': 'https://api.botframework.com/.default'
}

# Token request
token_response = requests.post(token_url, headers=token_headers, data=token_payload)
token = token_response.json()['access_token']

# Setup for creating a conversation
conversation_headers = {'Authorization': f'Bearer {token}', 'Content-Type': 'application/json'}
conversation_url = f"{service_url}/v3/conversations"
conversation_payload = {
    "bot": {"id": f"28:{app_id}", "name": "botname"},
    "isGroup": False,
    "members": [{"id": skype_id, "name": "Milkiyas Gebru"}],
    "topicName": "New Conversation"
}
conversation_response = requests.post(conversation_url, headers=conversation_headers, json=conversation_payload)
conversation_id = conversation_response.json()["id"]

# Setup for sending a message
message_url = f"{service_url}/v3/conversations/{conversation_id}/activities"
message_headers = {'Authorization': f'Bearer {token}', 'Content-Type': 'application/json'}
message_payload = {"type": "message", "text": "My bots reply"}
message_response = requests.post(message_url, headers=message_headers, json=message_payload)
print("Create Message Response: ", message_response.json(), message_response.status_code)

The response I get indicates an authorization error:

Create Message Response: {'error': {'code': 'AuthorizationError', 'message': 'Invalid JWT.'}} 401

Has anyone experienced this issue before, or does anyone know what might be causing the JWT to be considered invalid? Any insights or suggestions would be greatly appreciated!
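A cheap first diagnostic (my suggestion, not from the thread): decode the token's payload locally and check the claims the connector validates. The aud claim should be https://api.botframework.com and appid should match the bot's registered app ID; it is also worth confirming that the app registration's tenancy (single- vs multi-tenant) matches the token endpoint used above, since a mismatch there commonly surfaces as a blanket "Invalid JWT". A sketch:

import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT for inspection."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

claims = decode_jwt_payload(token)
print(claims.get("aud"), claims.get("appid"), claims.get("tid"))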
Cost-effective alternatives to a control table for processed files in Azure Synapse

Hello, good morning. In Azure Synapse Analytics, I want to have a control table for the files that have already been processed by the bronze or silver layers. For this, I wanted to create a dedicated SQL pool, but I see that at the minimum performance level it charges 1.51 USD per hour (as I show in the image), so I wanted to know what other, more economical alternatives I have, since I will need to do inserts and updates on this control table, and with a serverless option this is not possible.
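Not from the thread, but one inexpensive pattern is Azure Table Storage: it is billed per transaction and per GB rather than per provisioned hour, and it supports upserts, which covers the insert/update requirement. A sketch with the azure-data-tables SDK (the connection string and key scheme are placeholders):

from datetime import datetime, timezone
from azure.data.tables import TableServiceClient, UpdateMode

service = TableServiceClient.from_connection_string("<storage-conn-string>")
table = service.create_table_if_not_exists("FileControl")

# One row per processed file: PartitionKey = layer, RowKey = encoded file path
# (slashes are not allowed in RowKey, hence the replacement).
entity = {
    "PartitionKey": "bronze",
    "RowKey": "landing/2024/05/data.json".replace("/", "|"),
    "Status": "processed",
    "ProcessedAtUtc": datetime.now(timezone.utc).isoformat(),
}

# Upsert = insert or update in one call, so pipeline reruns simply overwrite the row.
table.upsert_entity(entity, mode=UpdateMode.REPLACE)

Other options in the same spirit: a serverless Azure SQL Database (billed per vCore-second and pausable) or keeping the control data as a Delta table in the lake itself.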
Kiosk Mode with Azure Virtual Desktop

Hello All, just want to check if anyone is using kiosk mode with Azure Virtual Desktop (the Remote Desktop client or the Windows App). The purpose is to improve the user experience: the user just signs in on the endpoint (preferably Windows 11) with their domain credentials, and as soon as they log in, the Remote Desktop client app opens, where they can click on their assigned AVD and start using it. Please do share your thoughts and experience. Thanks
How to trigger an Azure Synapse pipeline after a file is dropped into an Azure file share

I am currently looking for ideas on triggering an Azure Synapse pipeline after a file is dropped into an Azure file share. I feel that Azure Functions might be a suitable candidate for implementing this. Azure Synapse pipelines don't natively support this at the moment, though Microsoft does offer a custom events trigger extension capability. However, so far I have found very little evidence demonstrating how to leverage this capability to trigger my Synapse pipeline. Any assistance with approaches to solving this would be greatly appreciated. Thanks.
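A hedged sketch of the Azure Functions idea: since Azure Files does not emit native Event Grid file-created events the way Blob Storage does, a timer-triggered function can poll the share and start the pipeline through the Synapse data-plane REST API. Workspace, pipeline, and share names below are placeholders:

import requests
from azure.identity import DefaultAzureCredential
from azure.storage.fileshare import ShareClient

WORKSPACE = "https://<workspace>.dev.azuresynapse.net"  # placeholder
PIPELINE = "IngestFromFileShare"                        # placeholder pipeline name

share = ShareClient.from_connection_string("<storage-conn-string>", share_name="landing")
credential = DefaultAzureCredential()

def run_pipeline_for(file_name: str) -> str:
    """Start one pipeline run, passing the file name as a pipeline parameter."""
    token = credential.get_token("https://dev.azuresynapse.net/.default").token
    resp = requests.post(
        f"{WORKSPACE}/pipelines/{PIPELINE}/createRun",
        params={"api-version": "2020-12-01"},
        headers={"Authorization": f"Bearer {token}"},
        json={"fileName": file_name},
    )
    resp.raise_for_status()
    return resp.json()["runId"]

# Inside the timer trigger: start a run per file found in the share root.
for item in share.list_directories_and_files():
    if not item["is_directory"]:
        print(item["name"], "->", run_pipeline_for(item["name"]))

You would still need to remember which files have already been handled (for example, with the kind of control table discussed in the Synapse post above) so the poll doesn't re-trigger runs.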
Bulk Start/Stop of Azure Virtual Desktop Session Hosts in a Host Pool via Single Trigger

Hi Community, we manage an Azure Virtual Desktop (AVD) host pool with a large number of session hosts (around 100), and we're looking for a way to start or stop all session hosts in bulk using a single trigger—preferably via PowerShell or an API. Currently, we use a scheduled script that loops through each VM individually to start or stop them, but this approach doesn't scale well. We've noticed that the Azure portal provides a one-click option to start or stop all session hosts in a host pool, and we're trying to replicate that behavior programmatically.

What we're looking for:
- A PowerShell command or script that can start/stop all session hosts in a host pool without iterating through each VM.
- If PowerShell doesn't support this directly, an ARM template, Azure CLI command, REST API, or any other method that can be triggered from PowerShell to perform this bulk action.
- Any official documentation, community guidance, or examples from someone who has achieved this.

Goal: to simplify and optimize our automation by using a single command or API call to manage all session hosts in a host pool, rather than looping through each machine individually.

Thanks in advance for your help and suggestions!
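As far as I can tell there is no single ARM call that starts a whole host pool; the portal button fans out one operation per VM as well. What you can change is the waiting: the per-VM start/deallocate operations are asynchronous, so firing them all first and only then (optionally) waiting gets close to the one-trigger behavior. A sketch with the azure-mgmt-compute Python SDK (subscription and resource group are placeholders; the same pattern translates to Start-AzVM/Stop-AzVM with -NoWait in PowerShell):

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG = "avd-hosts-rg"  # placeholder: resource group containing the session hosts

# begin_start returns a poller immediately, so all operations run in parallel.
pollers = [
    client.virtual_machines.begin_start(RG, vm.name)
    for vm in client.virtual_machines.list(RG)
]

# Optional: block until every session host reports started.
for poller in pollers:
    poller.wait()

For schedule-driven cases, AVD's built-in scaling plans may remove the need for a custom script entirely.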
How to revert work items back to their old component after changing them to a new component

I mistakenly changed the components of different areas to a single component. I created a query for components and, without realizing that there was a mistake in the query, changed the components of all the areas to a single component. Because of this, 13K work items got updated. It is not possible to change each item manually. Is there any way to restore the old components of those work items? I explored the APIs that return the list of all the work items in the query, and also the API that retrieves the old and new component values for a work item, but I am not sure how to change back to the old component values for all 13K work items.

Get work item history API: https://dev.azure.com/astera/Centerprise/_apis/wit/workItems/{id}/updates?api-version=5.1
Get work item IDs from query: https://dev.azure.com/astera/Centerprise/_apis/wit/wiql/{queryid}?api-version=6.0
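A hedged sketch of stitching those two APIs together: pull the IDs from the query, walk each item's updates newest-first to find the bulk change, and PATCH the old value back. The field reference name is an assumption (use whatever field your query actually changed), and it's worth dry-running on a handful of IDs first:

import requests

ORG = "https://dev.azure.com/astera"
PROJECT = "Centerprise"
PAT = "..."                      # personal access token
QUERY_ID = "<query-guid>"        # placeholder
FIELD = "Custom.Component"       # assumption: the field that was bulk-changed
auth = ("", PAT)

ids = [wi["id"] for wi in requests.get(
    f"{ORG}/{PROJECT}/_apis/wit/wiql/{QUERY_ID}?api-version=6.0", auth=auth
).json()["workItems"]]

for wid in ids:
    updates = requests.get(
        f"{ORG}/{PROJECT}/_apis/wit/workItems/{wid}/updates?api-version=5.1", auth=auth
    ).json()["value"]
    # The newest update that touched the field carries the pre-mistake value in oldValue.
    for upd in reversed(updates):
        change = upd.get("fields", {}).get(FIELD)
        if change and "oldValue" in change:
            requests.patch(
                f"{ORG}/_apis/wit/workitems/{wid}?api-version=6.0",
                auth=auth,
                headers={"Content-Type": "application/json-patch+json"},
                json=[{"op": "replace", "path": f"/fields/{FIELD}", "value": change["oldValue"]}],
            ).raise_for_status()
            break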
Integration between Asana and Microsoft SQL Servers using Microsoft Azure DevOps?

Hi, I work for a bespoke kitchen company. We currently use SQL Server (on-premises). A lot of our software is integrated, so that any data input into one system updates our other systems. We have recently started using Asana, which is not integrated with our SQL Servers. According to the Asana website, it can integrate with Microsoft Azure, and we have been advised by an external party to use Microsoft Azure DevOps to achieve this. Is this possible to do, and if so, what are the steps required? Kind regards, Will Ko
Recent Blogs
- 2 MIN READ · We're back with the July update for the AI Toolkit for Visual Studio Code! This month marks a major release—version 0.16.0—bringing features we teased in the June prerelease into full availability, a... Jul 11, 2025
- This blog is co-authored with Pravin Jha, Principal Product Manager, Exadata and Database Cloud Product Management. Many enterprises run mission-critical applications on Oracle Exadata Database Serv... Jul 10, 2025