pipeline
13 Topics

Azure Pipelines extension - doesn't give any error, nor does it show dynamic dropdown values
I created an extension, pushed it to the Marketplace, and then used it in my org - all of that went smoothly. But when I started building a pipeline, at the task step, choosing my extension populates the fields that have pre-defined options; when it comes to the dynamic field it says "Not Found", i.e. it stays empty.

Details: the custom step has 3 fields.
* Category - Cars [pre-defined option list]
* Color - Blue [pre-defined option list]
* Car List - this one uses the endpoint https://gist.githubusercontent.com/Satyabsai/b3970e2c3d229de2c70f1def3007ccfc/raw/02dc6f7979a83287adcb6eeecddb5575bef3516e/data.json

task.json:

{
  "id": "d9bafed3-2b89-4a4e-89b8-21a3d8a4f1d3",
  "name": "TestExecutor",
  "friendlyName": "Execute",
  "description": "Executes",
  "helpMarkDown": "",
  "category": "Test",
  "author": "s4legen",
  "version": { "Major": 5, "Minor": 4, "Patch": 0 },
  "instanceNameFormat": "Execute Test Suite $(carlist)",
  "inputs": [
    {
      "name": "category",
      "type": "pickList",
      "defaultValue": "Web",
      "label": "Category",
      "required": true,
      "helpMarkDown": "Select the ",
      "options": { "mobile": "car", "web": "truck", "api": "Plan" }
    },
    {
      "name": "Color",
      "type": "pickList",
      "defaultValue": "Blue",
      "label": "color",
      "required": true,
      "helpMarkDown": "Select the ",
      "options": { "nonProd": "Blue", "prod": "Red" }
    },
    {
      "name": "Carlist",
      "type": "pickList",
      "defaultValue": "BMWX7",
      "label": "Carlist",
      "required": true,
      "helpMarkDown": "Select the list to execute",
      "properties": { "EditableOptions": "true", "Selectable": "true", "Id": "CarInput" },
      "loadOptions": {
        "endpointUrl": "https://gist.githubusercontent.com/Satyabsai/b3970e2c3d229de2c70f1def3007ccfc/raw/02dc6f7979a83287adcb6eeecddb5575bef3516e/data.json",
        "resultSelector": "jsonpath:$[*]",
        "itemPattern": "{ \"value\": \"{value}\", \"displayValue\": \"{displayValue}\" }"
      }
    }
  ],
  "execution": { "Node16": { "target": "index.js" } },
  "messages": { "TestSuiteLoadFailed": "Failed to load test from endpoint. Using default options." }
}

index.js:

const tl = require('azure-pipelines-task-lib/task');
const axios = require('axios');

const TEST_ENDPOINT = 'https://gist.githubusercontent.com/Satyabsai/b3970e2c3d229de2c70f1def3007ccfc/raw/02dc6f7979a83287adcb6eeecddb5575bef3516e/data.json';

async function getValue(field) {
  if (field === 'Carlist') {
    try {
      const response = await axios.get(TEST_ENDPOINT, { timeout: 5000 });
      return {
        options: response.data.map(item => ({
          value: item.value,
          displayValue: item.displayValue
        })),
        properties: { "id": "CarlistDropdown" }
      };
    } catch (error) {
      tl.warning(tl.loc('TestLoadFailed'));
    }
  }
  return null;
}

async function run() {
  try {
    const color = tl.getInput('color', true);
    const category = tl.getInput('category', true);
    const carlist = tl.getInput('Carlist', true);

    const result = await axios.post(tl.getInput('clicQaEndpoint'), {
      testSuite,
      category,
      environment
    }, { timeout: 10000 });

    tl.setResult(tl.TaskResult.Succeeded, `Execution ID: ${result.data.executionId}`);
  } catch (err) {
    tl.setResult(tl.TaskResult.Failed, err.message);
  }
}

module.exports = { run, getValue };

Can someone tell me what JSON response Azure accepts for populating a dropdown dynamically? The source is an API.
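For reference: as far as I can tell, task.json has no supported loadOptions property, and dynamic picklists in custom tasks are normally fed through dataSourceBindings, which call a service connection (a connectedService input) rather than an arbitrary public URL. Below is a minimal, hedged sketch of how that could be wired for this task - the connectedService:Generic type, the endpoint path, and the field names are assumptions, not something verified against this extension:

{
  "inputs": [
    {
      "name": "connectedService",
      "type": "connectedService:Generic",
      "label": "Car data service connection",
      "required": true
    },
    {
      "name": "Carlist",
      "type": "pickList",
      "label": "Carlist",
      "required": true,
      "properties": { "EditableOptions": "True" }
    }
  ],
  "dataSourceBindings": [
    {
      "target": "Carlist",
      "endpointId": "$(connectedService)",
      "endpointUrl": "{{endpoint.url}}/data.json",
      "resultSelector": "jsonpath:$[*]",
      "resultTemplate": "{ \"Value\": \"{{{value}}}\", \"DisplayValue\": \"{{{displayValue}}}\" }"
    }
  ]
}

In this pattern the resultTemplate produces items shaped as Value/DisplayValue pairs, which is, to my understanding, what the picklist UI expects for each option.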
Dynamic pipeline and deployments with approvals

I am looking to create a multistage YAML pipeline in Azure DevOps that:
* Dynamically generates jobs based on the value(s) of a variable from a previous step (e.g., host names).
* Ensures that each dynamically created job includes its own approval step, either through ManualApproval@0 or via an environment.

The challenge I am facing is that while "strategy: matrix" allows for the dynamic creation of build jobs, this strategy is not permitted for deployment jobs, which are what I need for implementing approval steps. Do you have any suggestions on how to resolve this issue?
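One pattern that gets close, with the caveat that template expressions are expanded at compile time (so the host list has to arrive as a pipeline parameter or via a second pipeline queued through the REST API, not as an output variable of an earlier job): loop over a parameter with ${{ each }} and emit one deployment job per host, each bound to an environment that carries the approval. A hedged sketch - the parameter, environment, and host names are placeholders:

parameters:
- name: hosts
  type: object
  default: [ 'host1', 'host2' ]

stages:
- stage: Deploy
  jobs:
  - ${{ each host in parameters.hosts }}:
    - deployment: deploy_${{ host }}       # job names must stay alphanumeric/underscore
      displayName: Deploy to ${{ host }}
      environment: myapp-${{ host }}       # approvals and checks are configured on the environment
      strategy:
        runOnce:
          deploy:
            steps:
            - script: echo "Deploying to ${{ host }}"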
ADF Data Flow Fails with "Path does not resolve to any file" - Dynamic Parameters via Trigger

Hi guys, I'm running into an issue with my Azure Data Factory pipeline triggered by a blob event. The trigger passes dynamic folderPath and fileName values into a parameterized dataset and mapping data flow. Everything works perfectly when I debug the pipeline manually, or when I run the trigger manually and pass in the values for folderPath and fileName directly. However, when the pipeline is triggered automatically by the blob event, the data flow fails with the following error:

Error message:
Job failed due to reason: at Source 'CSVsource': Path /financials/V02/Forecast/ForecastSampleV02.csv does not resolve to any file(s). Please make sure the file/folder exists and is not hidden. At the same time, please ensure special character is not included in file/folder name, for example, name starting with _

What I have verified:
* The blob file exists.
* The trigger fires correctly and passes parameters.
* The path looks valid.
* The dataset is parameterized correctly with @dataset().folderPath and @dataset().fileName.

I've attached screenshots of:
🔵 00 - Pipeline trigger configuration (on blob creation)
🔵 01 - Trigger parameters
🔵 02 - Pipeline parameters
🔵 03 - Data flow parameters
🔵 04 - Data flow parameters without default value
🔵 05 - Data flow CSVsource parameters
🔵 06 - Data flow source dataset
🔵 07 - Data flow source dataset parameters
🔵 08 - Data flow source parameters
🔵 09 - Parameters passed to the pipeline from the trigger
🔵 10 - Data flow error message
https://primeinnovativetechnologies-my.sharepoint.com/:b:/g/personal/john_primeinntech_com/EYoH5Sm_GaFGgvGAOEpbdXQB7QJFeXvbFmCbZiW85PwrNA?e=0yjeJR

What could be causing the data flow to fail on file path resolution only when triggered, even though the exact same parameters succeed during manual debug runs? Could this be related to:
* Extra slashes or encoding in the trigger output?
* Misuse of @dataset().folderPath and @dataset().fileName in the dataset?
* Limitations in how blob trigger outputs are parsed?

Any insights would be appreciated! Thank you
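For anyone comparing setups: the usual wiring for a storage event trigger is to map the trigger's system variables to pipeline parameters in the trigger definition, roughly as in the hedged sketch below (trigger, pipeline, and container names are placeholders). One thing worth checking is that, as far as I can tell, @triggerBody().folderPath generally includes the container segment, so if the dataset also hardcodes the container the resolved path can end up with a doubled segment only on triggered runs.

{
  "name": "OnBlobCreated",
  "properties": {
    "type": "BlobEventsTrigger",
    "typeProperties": {
      "blobPathBeginsWith": "/financials/blobs/V02/Forecast/",
      "events": [ "Microsoft.Storage.BlobCreated" ]
    },
    "pipelines": [
      {
        "pipelineReference": { "referenceName": "PL_LoadForecast", "type": "PipelineReference" },
        "parameters": {
          "folderPath": "@triggerBody().folderPath",
          "fileName": "@triggerBody().fileName"
        }
      }
    ]
  }
}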
What Are the Ways to Dynamically Invoke Pipelines in ADF from Another Pipeline?

I am exploring different approaches to dynamically invoke ADF pipelines from within another pipeline as part of a modular and scalable orchestration strategy. My use case involves having multiple reusable pipelines that can be called conditionally or in sequence, based on configuration stored externally (such as in a SQL Managed Instance or another Azure-native source). I am aware of a few patterns, like using the Execute Pipeline activity within a ForEach loop, but I would like to understand the full range of available and supported options for dynamically invoking pipelines from within ADF.

Could you please clarify the possible approaches for achieving this? Specifically, I am interested in:

* Using ForEach with the Execute Pipeline activity - how to structure the control flow for calling multiple pipelines in sequence or parallel, and how to pass pipeline names dynamically.
* Dynamic pipeline name resolution - is it possible to pass the pipeline name as a parameter to the Execute Pipeline activity, and how to handle validation when the pipeline name is dynamic?
* Parameterized execution - best practices for passing dynamic parameters to each pipeline when calling them in a loop or based on external config.
* Calling ADF pipelines via the REST API or a Web activity - when would this be preferred over the native Execute Pipeline activity, and how to handle authentication and response handling?

If there are any recommendations, gotchas, or best practices related to dynamic pipeline orchestration in ADF, I would greatly appreciate your insights. Thanks!
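On the "truly dynamic pipeline name" part of the question: my understanding is that the Execute Pipeline activity needs its pipeline reference fixed at authoring time, so the usual escape hatch is a Web activity calling the Data Factory REST API createRun operation with a name built from an expression. A hedged sketch follows - the subscription, resource group, factory, and parameter names are placeholders, and it assumes the factory's managed identity is used for auth:

{
  "name": "RunChildPipeline",
  "type": "WebActivity",
  "typeProperties": {
    "method": "POST",
    "url": {
      "value": "https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factoryName>/pipelines/@{pipeline().parameters.childPipelineName}/createRun?api-version=2018-06-01",
      "type": "Expression"
    },
    "body": "{}",
    "authentication": { "type": "MSI", "resource": "https://management.azure.com/" }
  }
}

Run parameters for the child pipeline go in the request body as JSON, and the factory's managed identity needs a role that allows starting runs (e.g., Data Factory Contributor on the factory) - worth verifying in your tenant.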
How to Orchestrate ADF Pipelines as Selectable Steps in a Configurable Job

I am working on building a dynamic job orchestration mechanism using Azure Data Factory (ADF). I have multiple pipelines in ADF, and each pipeline represents a distinct step in a larger job. I would like to implement a solution where I can dynamically select or deselect individual pipeline steps (i.e., ADF pipelines) as part of a job. The idea is to configure a job by checking/unchecking steps, and then execute only the selected ones in sequence or based on dependencies.

Available resources for this solution:
* Azure Data Factory (ADF)
* Azure SQL Managed Instance (SQL MI)
* Any other relevant Azure-native service (if needed)

Could you please suggest a solution that meets the following requirements?
* Dynamically configure which pipelines (steps) to include in a job.
* Add or remove steps without changing hardcoded logic in ADF.
* Ensure scalability and maintainability of the orchestration logic.
* Keep the solution within the scope of ADF, SQL MI, and potentially other Azure-native services (no external apps or third-party orchestrators).

Any design patterns, architecture recommendations, or examples would be greatly appreciated. Thanks!
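One common shape for this, sketched below with assumed table, column, dataset, and pipeline names: keep the job definition in a SQL MI table (one row per step, with an IsEnabled flag and a sequence number), read it with a Lookup activity, and drive a ForEach over the enabled rows. If every step is wrapped by one generic child pipeline the ForEach can use Execute Pipeline as shown; if the child pipeline name itself must be dynamic, the REST createRun pattern from the previous thread applies instead.

{
  "activities": [
    {
      "name": "GetJobSteps",
      "type": "Lookup",
      "typeProperties": {
        "source": {
          "type": "SqlMISource",
          "sqlReaderQuery": "SELECT StepName, StepOrder, ParamJson FROM dbo.JobSteps WHERE JobName = 'NightlyLoad' AND IsEnabled = 1 ORDER BY StepOrder"
        },
        "dataset": { "referenceName": "DS_SqlMi_Config", "type": "DatasetReference" },
        "firstRowOnly": false
      }
    },
    {
      "name": "ForEachStep",
      "type": "ForEach",
      "dependsOn": [ { "activity": "GetJobSteps", "dependencyConditions": [ "Succeeded" ] } ],
      "typeProperties": {
        "isSequential": true,
        "items": { "value": "@activity('GetJobSteps').output.value", "type": "Expression" },
        "activities": [
          {
            "name": "RunStep",
            "type": "ExecutePipeline",
            "typeProperties": {
              "pipeline": { "referenceName": "PL_GenericStepRunner", "type": "PipelineReference" },
              "waitOnCompletion": true,
              "parameters": { "stepName": "@item().StepName", "paramJson": "@item().ParamJson" }
            }
          }
        ]
      }
    }
  ]
}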
Enhancing Azure DevOps Traceability: Tracking Pipeline-Driven Resource Changes

Introduction

One of the key challenges in managing cloud infrastructure with Azure DevOps is maintaining clear traceability between pipeline executions and modifications to Azure resources. Unlike some integrated CI/CD platforms, Azure does not natively link resource changes to specific DevOps pipelines, making it difficult to track which deployment introduced a particular modification. To address this gap, two approaches are presented that improve traceability and accountability:

* Proactive tagging of resources with pipeline metadata during deployment.
* Analyzing Azure Activity Logs to trace resource changes back to specific DevOps pipeline runs.

By implementing these solutions, organizations can enhance governance, streamline debugging, and ensure better auditability of Azure DevOps deployments.

Challenges in Traceability

Without a built-in integration between Azure DevOps pipelines and Azure resource modifications, teams face difficulties such as:

* Lack of direct ownership tracking for resource updates.
* Manual investigation of deployment logs, which is time-consuming.
* Difficulty in debugging incidents caused by misconfigured or unauthorized changes.

Approach 1: Proactive Tagging of Azure Resources in Pipelines

This approach embeds metadata directly into Azure resources by adding tags at deployment time. The tags include:

* Pipeline Name ('PipelineName')
* Pipeline ID ('PipelineId')
* Run ID ('RunId')

These tags serve as persistent markers, allowing teams to quickly identify which pipeline and build triggered a specific resource change. A link to the GitHub repository can be found here.

Implementation

A YAML pipeline file ('PipelineTaggingResources.yml') is used to automate tagging as part of the deployment process.

Key Steps

1. The pipeline triggers on updates to the main branch.
2. Tags are dynamically generated using environment variables.
3. Tags are applied to resources during deployment.
4. Pipeline logs and artifacts store metadata for verification.

Code Snippet

Below is an example of applying tags in an Azure DevOps pipeline:

- task: AzureCLI@2
  displayName: "Tag Resources with Pipeline Metadata"
  inputs:
    azureSubscription: "Your-Service-Connection"
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az tag create --resource-id "/subscriptions/xxxxx/resourceGroups/xxxx/providers/Microsoft.Web/sites/MyAppService" \
        --tags PipelineName=$(Build.DefinitionName) PipelineId=$(Build.DefinitionId) RunId=$(Build.BuildId)

Approach 2: Using Azure Activity Logs to Trace Resource Changes

This approach is useful after changes have already occurred. It analyzes Azure Activity Logs to track modifications and correlates them with Azure DevOps build logs.

Implementation

A PowerShell script ('ActivityLogsTrace.ps1') extracts relevant details from Azure Activity Logs and compares them with DevOps pipeline execution records.

Key Steps

1. Retrieve activity logs for a given resource group and timeframe.
2. Filter logs by operation type, e.g., 'Microsoft.Web/serverfarms/write'.
3. List Azure DevOps pipelines and their recent runs.
4. Compare pipeline logs with activity log metadata to find a match.
5. Output the responsible pipeline name and run ID.
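The article does not reproduce ActivityLogsTrace.ps1 here, but a minimal sketch of the first two Key Steps of Approach 2 could look like the following - the resource group name, timeframe, and operation filter are placeholder assumptions, and property shapes can vary between Az.Monitor versions:

# Pull recent write operations from the activity log for one resource group,
# keeping the fields needed to correlate against Azure DevOps run logs.
$logs = Get-AzActivityLog -ResourceGroupName 'my-rg' -StartTime (Get-Date).AddDays(-1)

$logs |
    Where-Object { "$($_.OperationName)" -like '*Microsoft.Web/serverfarms/write*' } |
    ForEach-Object {
        [pscustomobject]@{
            Timestamp = $_.EventTimestamp
            Caller    = $_.Caller          # typically the service connection principal used by the pipeline
            Operation = "$($_.OperationName)"
            Resource  = $_.ResourceId
        }
    }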
Comparison of Approaches

* Tagging Azure Resources in Pipelines - When to use: before deployment (proactive). Advantages: immediate traceability; no need to query logs. Limitations: requires modifying deployment scripts.
* Using Azure Activity Logs - When to use: after changes have occurred (reactive). Advantages: works for any past modification; no need to modify pipelines. Limitations: requires Azure Monitor logs; manual correlation of logs.

Which Approach Should You Use?

* For new deployments? Tagging resources is the best approach.
* For investigating past changes? Use Azure Activity Logs.
* For a robust solution? Combine both approaches for full traceability.

Conclusion

Tagging resources in Azure DevOps pipelines provides immediate traceability, while analyzing Azure Activity Logs enables retrospective identification of resource modifications. Tagging requires modifying deployment scripts, whereas log analysis works post-change but needs manual correlation. Combining both ensures robust traceability. For details, check the GitHub repository.
Azure DevOps Pipeline for Power Platform solution: Tenant to Tenant

I have a query related to Azure DevOps pipelines. Is it possible to move a solution from one tenant to another tenant? Is it possible to move a canvas app solution using Azure DevOps pipelines? If yes: I am using SharePoint lists - can these be moved, or, since they won't be part of the solution, should that be done manually? I am also using environment variables - how will these be mapped in the receiving tenant using pipelines? I have a few solutions that I need to move from one tenant to another every sprint, so what would be a suitable plan/license for this? Can you please share any relevant documents, steps, or related videos regarding this?
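As a rough reference, a tenant-to-tenant move with the Power Platform Build Tools extension usually boils down to exporting from a service connection in the source tenant and importing through a separate service connection in the target tenant. A hedged sketch - the task versions, service connection names, and solution name below are assumptions to adapt to your setup:

# Export from the source tenant and import into the target tenant.
steps:
- task: PowerPlatformToolInstaller@2

- task: PowerPlatformExportSolution@2
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: 'source-tenant-connection'     # service principal registered in the source tenant
    SolutionName: 'MyCanvasAppSolution'
    SolutionOutputFile: '$(Build.ArtifactStagingDirectory)/MyCanvasAppSolution.zip'

- task: PowerPlatformImportSolution@2
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: 'target-tenant-connection'     # separate service principal in the target tenant
    SolutionInputFile: '$(Build.ArtifactStagingDirectory)/MyCanvasAppSolution.zip'

Environment variable values are typically supplied at import time via a deployment settings file rather than carried across, and SharePoint list data is not part of a solution, so it would need a separate migration step.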
Need help to set up a pipeline using tasks and YAML to deploy a BICEP template and use a param file

I am trying to set up a pipeline that will deploy a Bicep template written by our cloud team and then use a parameter file to pass in the application specifics, to create the Azure resource group and app services for an application. Ideally I do not want to do this in PowerShell, but in YAML, or even better using the tasks in the pipeline builder. Currently everything I have tried errors out. Has anyone done this before and have a proven method or any tips?
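A minimal sketch of one task-based approach, assuming the template should also create the resource group (hence subscription scope) - the service connection, file paths, and location are placeholders, and if the hosted agent's task cannot transpile the .bicep file directly, an AzureCLI@2 step running "az deployment sub create" is the usual fallback:

steps:
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Subscription'                    # lets the template create the resource group itself
    azureResourceManagerConnection: 'my-service-connection'
    subscriptionId: '$(subscriptionId)'
    location: 'westeurope'
    templateLocation: 'Linked artifact'
    csmFile: 'infra/main.bicep'
    csmParametersFile: 'infra/main.parameters.json'    # application-specific values live here
    deploymentMode: 'Incremental'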
How to disable Azure DevOps Pipeline Parallelism

I have just created an organization in Azure DevOps on the free tier, and every time I try to create a pipeline (even the starter/empty one) I get this error:

##[error]No hosted parallelism has been purchased or granted. To request a free parallelism grant, please fill out the following form https://aka.ms/azpipelines-parallelism-request

I tried to google around to find how I can change the parallelism settings and disable or deactivate it, but I couldn't find anything useful. I tried:

demands: Limit -equals DisAbleParallel

but it didn't do anything. Probably this is only for self-hosted agents. I am not sure what I should try next. I appreciate any help.
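For what it's worth, this error is about the Microsoft-hosted agent grant rather than a setting that can be switched off; until the free grant from the form is approved, one hedged workaround is to point the pipeline at a self-hosted agent pool (the pool name below is a placeholder for a pool with at least one registered agent):

# Run on a self-hosted pool instead of the Microsoft-hosted pool.
pool:
  name: 'Default'

steps:
- script: echo "Running on a self-hosted agent"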
Validate changed files in last commit

Hi, I created a pipeline using https://marketplace.visualstudio.com/items?itemName=touchify.vsts-changed-files and also tested manually. The problem is that when the pipeline executes, I need to validate only the new files, but Azure Pipelines performs a hard reset, so I don't have multiple commits available in the pipeline and I can't validate the files that differ in the new commit. How can I validate only the new files? My company has .NET repositories in Azure DevOps, and each repository has an SRC directory with multiple functions. Each function contains its respective pipeline, so I need to check which functions were changed in order to run the corresponding pipeline.
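A hedged sketch of one way to do this without the marketplace task: fetch enough history so the previous commit exists, diff the last commit against its parent, and set a variable per function folder. The folder paths and variable name below are placeholders for your repository layout:

steps:
- checkout: self
  fetchDepth: 0            # full history, so HEAD^ exists instead of a shallow single-commit clone

- bash: |
    changed=$(git diff --name-only HEAD^ HEAD)
    echo "Changed files in the last commit:"
    echo "$changed"
    if echo "$changed" | grep -q '^src/FunctionA/'; then
      echo "##vso[task.setvariable variable=runFunctionA]true"
    fi
  displayName: 'Detect changed files in the last commit'

Later steps (or a template per function) can then use a condition such as eq(variables['runFunctionA'], 'true') to run only the pipelines for the functions that actually changed.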