Forum Discussion
Best practice to integrate with Azure DevOps?
Different sources give different recommendations regarding ADF and ADO integration. Some say to use the 'adf_publish' branch, while others suggest using the 'main' branch as the source for triggering YAML pipelines and disabling the 'Publish' function in ADF. I guess practices keep changing and setups can differ. The problem is that finding all this information on the Internet makes it so confusing.
So, the question is: what is the best practice now (taking into account all the latest changes in ADO) regarding branches? How do you set up your ADF and ADO integrations?
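For context, this is roughly the 'main' branch setup I mean. It's only a sketch: the subscription ID, resource group (my-rg), factory name (my-adf), and the 'build' folder are placeholders, and it assumes a build/package.json that references the @microsoft/azure-data-factory-utilities npm package.

```yaml
# Sketch: build pipeline triggered from the 'main' collaboration branch,
# generating ARM templates without using adf_publish or the Publish button.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Install Node.js so the ADF utilities package can run
  - task: NodeTool@0
    inputs:
      versionSpec: '18.x'
    displayName: 'Install Node.js'

  # Install the packages declared in build/package.json
  # (expected to include @microsoft/azure-data-factory-utilities)
  - task: Npm@1
    inputs:
      command: 'install'
      workingDir: '$(Build.Repository.LocalPath)/build'
    displayName: 'Install npm packages'

  # Validate all Data Factory resources stored in the repo
  - task: Npm@1
    inputs:
      command: 'custom'
      workingDir: '$(Build.Repository.LocalPath)/build'
      customCommand: 'run build validate $(Build.Repository.LocalPath) /subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.DataFactory/factories/my-adf'
    displayName: 'Validate factory resources'

  # Generate the ARM template that the Publish button would normally
  # write to adf_publish, into an ArmTemplate output folder instead
  - task: Npm@1
    inputs:
      command: 'custom'
      workingDir: '$(Build.Repository.LocalPath)/build'
      customCommand: 'run build export $(Build.Repository.LocalPath) /subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.DataFactory/factories/my-adf "ArmTemplate"'
    displayName: 'Generate ARM template'

  # Expose the generated templates as a pipeline artifact for a release stage
  - task: PublishPipelineArtifact@1
    inputs:
      targetPath: '$(Build.Repository.LocalPath)/build/ArmTemplate'
      artifact: 'ArmTemplates'
      publishLocation: 'pipeline'
    displayName: 'Publish ARM template artifact'
```

With something like this, the templates come out of the pipeline as an artifact and a separate release stage deploys them to test/prod factories, so 'Publish' and 'adf_publish' aren't needed at all. Is that what people are actually doing now, or is the adf_publish approach still preferred?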
3 Replies
- Joe_Smith_OpsHub (Copper Contributor)
Hi alwaysLearner,
What data are you looking to integrate? ETL pipelines may not be the best way to integrate with ADO because of poor performance, weak traceability, and long-term maintainability problems.
Why ETL Isn’t Ideal for Azure DevOps Integration:
ETL tools are built for data warehousing, not collaborative, high-change systems like ADO. Here's why they fall short:
- No support for incremental updates – ETL often reloads entire datasets instead of just syncing what's changed. This leads to:
- Heavy API usage, often hitting ADO rate limits
- Performance issues on both source and target systems
- Increased risk of data collisions and sync failures
- Loss of context – ETL typically ignores:
- Comments and threaded discussions
- Status transitions or approvals
- Attachments, images, and test artifacts
- Cross-links (e.g., test case → bug → user story relationships)
- No real-time collaboration – ETL runs on a schedule, so teams work with stale data—breaking agility and traceability.
When to Use a Proper Application Integration Platform
For active systems like ADO, where teams are continuously working across tools (e.g., Jira, ServiceNow, GitHub, Tosca), you need:
- Real-time, event-based sync
- Full fidelity of data—including workflows, transitions, and artifacts
- Incremental updates that respect system limits and reduce load
- End-to-end traceability for compliance and audits
That’s where application integration platforms (like OpsHub Integration Manager) come in—they're designed to preserve context, minimize impact, and scale with your teams.
Hope it helps!
- jikuja (Brass Contributor)
umm, I think the OP asked how to deploy Data Factory with Azure DevOps, not how to run ETL processes on ADO.