# Git-Driven Deployments for Microsoft Fabric Using GitHub Actions
## Introduction

If you've been working with Microsoft Fabric, you've likely faced this question: "How do we promote Fabric items from DEV → QA → PROD reliably, consistently, and with proper governance?"

Many teams default to the built-in Fabric Deployment Pipelines, and they work great for simpler scenarios. But what happens when your enterprise demands:

- Centralized governance across all platforms (infra, app, and data)
- A full audit trail of every change tied to a Git commit
- Approval gates with reviewer-based promotion
- Per-environment service principal isolation
- Alignment with your existing DevOps standards

That's exactly the problem we set out to solve. In this post, I'll walk you through a production-ready, enterprise-grade CI/CD solution for Microsoft Fabric using the fabric-cicd Python library and GitHub Actions, with zero dependency on Fabric Deployment Pipelines.

## What Problem Are We Solving?

Traditional Fabric promotion workflows often look like this:

| Step | Method | Problem |
| --- | --- | --- |
| Build in DEV workspace | Fabric Portal UI | Works fine |
| Promote to QA | Fabric Deployment Pipeline or manual copy | No Git traceability |
| Promote to PROD | Fabric Deployment Pipeline with approval | Separate governance model from app/infra CI/CD |
| Rollback | Manual recreation | No deterministic rollback path |
| Audit | "Who clicked what, when?" | Limited trail |

### The Core Issue

Fabric Deployment Pipelines introduce a parallel governance model that's disconnected from how your platform and application teams already work. You end up with:

- Two different promotion systems (GitHub Actions for apps, Fabric Pipelines for data)
- Governance blind spots between the two
- Cultural friction ("Why do data teams have a different process?")

## Our Approach: Git as the Single Source of Truth

```
Developer commits to Git repo
        |  push to main
        v
GitHub Actions workflow
        |
        v
DEV (auto deploy) --> QA (approval required) --> PROD (approval required)
```

Every deployment originates from Git. Every promotion is traceable to a commit SHA. Every environment has its own approval gate. One pipeline model across everything.

## Solution Architecture

### Repository Structure

```
fabric-cicd-project/
├── .github/
│   ├── workflows/
│   │   └── fabric-cicd.yml      # GitHub Actions pipeline
│   ├── CODEOWNERS               # Review enforcement
│   └── dependabot.yml           # Automated dependency updates
├── config/
│   └── parameter.yml            # Environment-specific parameterization
├── deploy/
│   ├── deploy_workspace.py      # Main deployment entrypoint
│   └── validate_repo.py         # Pre-deployment validation
├── workspace/                   # Fabric items (Git-integrated / PBIP)
├── .env.example                 # Environment variable template
├── .gitignore
├── ruff.toml                    # Python linting config
├── requirements.txt             # Pinned dependencies
├── SECURITY.md                  # Vulnerability disclosure policy
└── README.md
```

### Key Components

| Component | Purpose |
| --- | --- |
| fabric-cicd Python library | Deploys Fabric items from Git to workspaces (handles all Fabric API calls internally) |
| deploy_workspace.py | CLI entrypoint: authenticates, configures, deploys, logs |
| parameter.yml | Find-and-replace rules for environment-specific values (connections, lakehouse IDs, etc.) |
| validate_repo.py | Pre-flight checks: validates repo structure, parameter.yml presence, .platform files |
| fabric-cicd.yml | GitHub Actions workflow: orchestrates validate → DEV → QA → PROD |
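To make the components above concrete, here is a minimal sketch of what a deployment entrypoint like `deploy_workspace.py` can look like. `FabricWorkspace`, `publish_all_items`, and `unpublish_all_orphan_items` are the fabric-cicd library's public entry points; the exact constructor parameters (notably `token_credential`), the environment-variable names, and the item types in scope are assumptions drawn from the conventions described in this post, so verify them against the fabric-cicd documentation and the repo before relying on them.

```python
"""Minimal sketch of a Fabric deployment entrypoint (illustrative, not the repo's exact script)."""
import os

from azure.identity import ClientSecretCredential
from fabric_cicd import FabricWorkspace, publish_all_items, unpublish_all_orphan_items


def resolve_secret(name: str, env: str) -> str:
    # Prefer per-environment secrets (DEV_/QA_/PROD_ prefixes), fall back to shared FABRIC_* values.
    return os.environ.get(f"{env}_{name}") or os.environ[f"FABRIC_{name}"]


def main() -> None:
    env = os.environ.get("TARGET_ENVIRONMENT", "DEV").upper()   # DEV / QA / PROD
    workspace_id = os.environ[f"{env}_WORKSPACE_ID"]

    credential = ClientSecretCredential(
        tenant_id=resolve_secret("TENANT_ID", env),
        client_id=resolve_secret("CLIENT_ID", env),
        client_secret=resolve_secret("CLIENT_SECRET", env),
    )

    workspace = FabricWorkspace(
        workspace_id=workspace_id,
        environment=env,                      # selects the matching values in parameter.yml
        repository_directory="workspace",     # folder holding the Git-integrated Fabric items
        item_type_in_scope=["Notebook", "DataPipeline", "Lakehouse"],  # adjust to your items
        token_credential=credential,          # assumption: check the fabric-cicd docs for the exact kwarg
    )

    publish_all_items(workspace)              # deploy everything under workspace/
    if os.environ.get("CLEAN_ORPHANS", "false").lower() == "true":
        unpublish_all_orphan_items(workspace)  # remove items that exist in Fabric but not in Git


if __name__ == "__main__":
    main()
```

In CI, `TARGET_ENVIRONMENT` is set per job, so the same script serves DEV, QA, and PROD without modification.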
## Feature Deep Dive

### 1. Per-Environment Service Principal Isolation

Instead of a single shared service principal, each environment gets its own:

- DEV_TENANT_ID / DEV_CLIENT_ID / DEV_CLIENT_SECRET
- QA_TENANT_ID / QA_CLIENT_ID / QA_CLIENT_SECRET
- PROD_TENANT_ID / PROD_CLIENT_ID / PROD_CLIENT_SECRET

Why this matters:

- Least-privilege access: the DEV SP can't touch PROD
- Audit clarity: you know which identity deployed where
- Blast radius reduction: a compromised DEV secret doesn't affect PROD

The deploy script automatically resolves the correct credentials based on TARGET_ENVIRONMENT, with a fallback to shared FABRIC_* variables for simpler setups.

### 2. Environment-Specific Parameterization

A single parameter.yml drives all environment differences:

```yaml
find_replace:
  - find: "DEV_Lakehouse"
    replace_with:
      DEV: "DEV_Lakehouse"
      QA: "QA_Lakehouse"
      PROD: "PROD_Lakehouse"
  - find: "dev-sql-server.database.windows.net"
    replace_with:
      DEV: "dev-sql-server.database.windows.net"
      QA: "qa-sql-server.database.windows.net"
      PROD: "prod-sql-server.database.windows.net"
```

- Same Git artifacts, different runtime bindings per environment
- No manual edits between promotions
- Easy to review in pull requests

### 3. Approval-Gated Promotions

The GitHub Actions workflow uses GitHub Environments with reviewer requirements:

| Environment | Trigger | Approval |
| --- | --- | --- |
| DEV | Automatic on push to main | None (deploys immediately) |
| QA | After successful DEV deploy | Requires reviewer approval |
| PROD | After successful QA deploy | Requires reviewer approval |

Reviewers see a rich job summary in GitHub showing:

- Git commit SHA being deployed
- Target workspace and environment
- Item types in scope
- Deployment duration
- Final status

### 4. Pre-Deployment Validation

Before any deployment runs, a dedicated validate job checks:

| Check | What It Does |
| --- | --- |
| workspace/ exists | Ensures Fabric items are present |
| parameter.yml exists | Ensures parameterization is configured |
| .platform files present | Validates Fabric Git integration metadata |
| ruff check deploy/ | Lints Python code for syntax errors and bad imports |

If validation fails, no deployment runs in any environment.

### 5. Full Git SHA Traceability

Every deployment logs and surfaces the exact Git commit being deployed. Why this matters:

- Rollback = `git revert <sha>` + push; the pipeline redeploys the previous state
- Audit = every PROD deployment is tied to a specific commit, reviewer, and timestamp
- Diff = `git diff v1..v2` shows exactly what changed between deployments

### 6. Concurrency Control

```yaml
concurrency:
  group: fabric-deploy-${{ github.ref }}
  cancel-in-progress: false
```

Two rapid pushes to main won't cause parallel deployments fighting over the same workspace. The second run queues until the first completes.

### 7. Smart Path Filtering

```yaml
paths-ignore:
  - "**.md"
  - "docs/**"
  - ".vscode/**"
```

A README-only commit? A docs update? No deployment is triggered. This saves runner minutes and avoids unnecessary approval requests for QA/PROD.

### 8. Retry Logic with Exponential Backoff

The deploy script wraps fabric-cicd calls with retry logic:

```
Attempt 1 → fails (HTTP 429 rate limit)
  wait 5 seconds
Attempt 2 → fails (HTTP 503 transient)
  wait 15 seconds
Attempt 3 → succeeds
```

Transient Fabric service issues don't break your pipeline; the deployment retries automatically. A sketch of this pattern follows.
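The following is a minimal sketch of what such a retry wrapper might look like. It is illustrative rather than the repo's exact implementation: the helper name `with_retries`, the delays, and the broad exception handling are assumptions; real code would typically retry only on transient HTTP status codes such as 429 and 5xx.

```python
import time


def with_retries(operation, attempts: int = 3, base_delay: float = 5.0):
    """Run `operation`, retrying on failure with exponentially increasing waits.

    Illustrative helper: the real deploy script may inspect specific HTTP
    status codes (429 / 5xx) instead of catching every exception.
    """
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception as exc:  # narrow this to transient errors in real code
            if attempt == attempts:
                raise  # out of attempts: surface the failure to the workflow
            delay = base_delay * (3 ** (attempt - 1))  # 5s, 15s, 45s, ...
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)


# Usage, following the deployment sketch earlier in this post:
# with_retries(lambda: publish_all_items(workspace))
```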
### 9. Orphan Cleanup

Set CLEAN_ORPHANS=true and items that exist in the workspace but not in Git get removed:

```
Workspace has: Notebook_A, Notebook_B, Notebook_C
Git repo has:  Notebook_A, Notebook_B
→ Notebook_C gets removed (orphan)
```

This ensures your workspace exactly matches your Git state: no drift, no surprises.

### 10. Dependency Management with Dependabot

```yaml
# .github/dependabot.yml
updates:
  - package-ecosystem: "pip"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    schedule:
      interval: "weekly"
```

fabric-cicd, azure-identity, and GitHub Actions versions are automatically monitored. When updates are available, Dependabot opens a PR, keeping your pipeline secure and current.

### 11. CODEOWNERS Enforcement

```
# .github/CODEOWNERS
/deploy/             @platform-team
/config/             @platform-team
/.github/workflows/  @platform-team
```

Changes to deployment scripts, parameterization, or the workflow require review from the platform team. No one accidentally modifies the pipeline without oversight.

### 12. Job Timeouts

| Job | Timeout |
| --- | --- |
| Validate | 10 minutes |
| Deploy (DEV/QA/PROD) | 30 minutes |

A hung process won't burn 6 hours of runner time. It fails fast, alerts the team, and frees the runner.

### 13. Security Policy

A dedicated SECURITY.md provides:

- A responsible vulnerability disclosure process
- A 48-hour acknowledgement SLA
- Best practices for contributors (no secrets in code, least-privilege SPs, 90-day rotation)

## The Complete Workflow

Here's what happens end-to-end when a developer merges a PR:

1. Developer merges PR to main
2. VALIDATE job runs
   - Repo structure checks
   - Python linting (ruff)
   - parameter.yml validation
3. DEPLOY-DEV job runs (automatic)
   - Authenticates with the DEV SP
   - Deploys all items to the DEV workspace
   - Logs commit SHA + summary
4. DEPLOY-QA job waits for approval
   - Reviewer checks the job summary and approves
   - Authenticates with the QA SP
   - Deploys all items to the QA workspace
5. DEPLOY-PROD job waits for approval
   - Reviewer checks the job summary and approves
   - Authenticates with the PROD SP
   - Deploys all items to the PROD workspace
6. Done: all environments in sync with Git

## Comparison: This Approach vs. Fabric Deployment Pipelines

| Capability | Fabric Deployment Pipelines | This Solution (fabric-cicd + GitHub Actions) |
| --- | --- | --- |
| Source of truth | Workspace | Git |
| Promotion trigger | UI click / API call | Git push + approval |
| Approval gates | Fabric-native | GitHub Environments (same as app teams) |
| Audit trail | Fabric activity log | Git commits + GitHub Actions history |
| Rollback | Manual | git revert + auto-redeploy |
| Cross-platform governance | Separate model | Unified with infra/app CI/CD |
| Parameterization | Deployment rules | parameter.yml (reviewable in PR) |
| Secret management | Fabric-managed | GitHub Secrets + per-env SP isolation |
| Drift detection | Limited | Orphan cleanup (CLEAN_ORPHANS=true) |

## Getting Started

### Prerequisites

- 3 Fabric workspaces (DEV, QA, PROD)
- Service principal(s) with the Contributor role on each workspace
- GitHub repository with Actions enabled
- GitHub Environments configured (dev, qa, prod)

### Quick Setup

```bash
# 1. Clone the repo
git clone https://github.com/<your-org>/fabric-cicd-project.git

# 2. Install dependencies
pip install -r requirements.txt

# 3. Copy and fill environment variables
cp .env.example .env

# 4. Run locally against DEV
python deploy/deploy_workspace.py
```
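Before that local run (or a push), you can also perform the same pre-flight checks the validate job runs in CI. Here is a minimal sketch of what the checks in `validate_repo.py` might look like; the actual script in the repo may check more and report differently, so treat the paths and messages here as assumptions.

```python
"""Sketch of pre-flight repo validation (illustrative, not the repo's exact script)."""
import sys
from pathlib import Path


def validate(repo_root: str = ".") -> list[str]:
    root = Path(repo_root)
    errors: list[str] = []

    workspace = root / "workspace"
    if not workspace.is_dir():
        errors.append("workspace/ directory is missing (no Fabric items to deploy)")
    elif not list(workspace.rglob(".platform")):
        errors.append("no .platform files found under workspace/ (Fabric Git metadata missing)")

    if not (root / "config" / "parameter.yml").is_file():
        errors.append("config/parameter.yml is missing (environment parameterization not configured)")

    return errors


if __name__ == "__main__":
    problems = validate()
    for problem in problems:
        print(f"ERROR: {problem}")
    sys.exit(1 if problems else 0)
```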
### GitHub Actions Setup

1. Create GitHub Environments: dev, qa (add reviewers), prod (add reviewers)
2. Add secrets to each environment:
   - DEV_TENANT_ID, DEV_CLIENT_ID, DEV_CLIENT_SECRET
   - QA_TENANT_ID, QA_CLIENT_ID, QA_CLIENT_SECRET
   - PROD_TENANT_ID, PROD_CLIENT_ID, PROD_CLIENT_SECRET
   - DEV_WORKSPACE_ID, QA_WORKSPACE_ID, PROD_WORKSPACE_ID
3. Push to main and the pipeline takes over!

## Lessons Learned

After implementing this pattern across several engagements, here are the key takeaways:

### What Works Well

- Teams love the Git traceability once they experience a clean rollback
- Approval gates in GitHub feel natural to platform engineers
- parameter.yml changes in PRs create great review conversations about environment differences
- Job summaries give reviewers confidence to approve without digging into logs

### Watch Out For

- Cultural resistance is the #1 blocker; invest in enablement, not just automation
- Fabric items with runtime state (data in lakehouses, refresh history) aren't captured in Git
- Secret rotation across 3+ environments needs process discipline (consider OIDC federated credentials)
- Run a "portal vs. pipeline" side-by-side demo early; it changes minds fast

## For CSAs: Sharing This With Customers

This solution is ideal for customers who:

- Already use GitHub Actions for application or infrastructure CI/CD
- Have governance requirements that demand Git-based audit trails
- Operate multiple Fabric workspaces across environments
- Want to standardize their promotion model across all workloads
- Are moving from Power BI Premium to Fabric and want to modernize their DevOps practices

### Conversation Starters

- "How are you promoting Fabric items between environments today?"
- "Is your data team using the same CI/CD patterns as your app teams?"
- "If something goes wrong in production, how quickly can you roll back to the previous version?"

## Resources

- fabric-cicd on PyPI
- fabric-cicd Documentation
- GitHub Actions Documentation
- Microsoft Fabric Git Integration
- Git repository: vinod-soni-microsoft/FABRIC-CICD-PROJECT: Enterprise-grade CI/CD solution for Microsoft Fabric using the fabric-cicd Python library and GitHub Actions. Git-driven deployments across DEV → QA → PROD with environment approval gates, per-environment service principal isolation, and parameterized promotion, with no Fabric Deployment Pipelines required.

## Conclusion

The shift from UI-driven promotion to Git-driven CI/CD for Microsoft Fabric isn't just a technical upgrade; it's a governance and cultural alignment decision.

By using fabric-cicd with GitHub Actions, you get:

- One source of truth (Git)
- One promotion model (GitHub Actions)
- One approval process (GitHub Environments)
- One audit trail (Git history + Actions logs)
- One security model (GitHub Secrets + per-env SPs)

No parallel governance. No hidden drift. No "who clicked what in the portal." Just Git, code, and confidence.

Have questions or want to share your experience? Drop a comment below. I'd love to hear how your team is approaching Fabric CI/CD!

# Supercharging NVAs in Azure with Accelerated Connections
Hello folks,

If you run firewalls, routers, or SD-WAN NVAs in Azure and your pain is connection scale rather than raw Mbps, there is a feature you should look at: Accelerated Connections. It shifts connection processing to dedicated hardware in the Azure fleet and lets you size connection capacity per NIC, which translates into higher connections per second and more total active sessions for your virtual appliances and VMs.

This article distills a recent E2E chat I hosted with the Technical Product Manager working on Accelerated Connections and shows you how to enable and operate it safely in production. The demo and guidance below are based on that conversation and the current public documentation.

## What Accelerated Connections is (and what it is not)

Accelerated Connections is configured at the NIC level of your NVAs or VMs. You can choose which NICs participate. That means you might enable it only on your high-throughput ingress and egress NICs and leave the management NIC alone.

It improves two things that matter to infrastructure workloads:

- Connections per second (CPS). New flows are established much faster.
- Total active connections. Each NIC can hold far more simultaneous sessions before you hit limits.

It does not increase your nominal throughput number. The benefit is stability under high connection pressure, which helps reduce drops and flapping during surges. There is a small latency bump because you introduce another "bump in the wire," but in application terms it is typically negligible compared to the stability you gain.

## How it works under the hood

In the traditional path, host CPUs evaluate SDN policies for flows that traverse your virtual network. That becomes a bottleneck for connection scale. Accelerated Connections offloads that policy work onto specialized data processing hardware in the Azure fleet, so your NVAs and VMs are not capped by host CPU and flow-table memory constraints. Industry partners have described this as decoupling the SDN stack from the server and shifting the fast path onto DPUs residing in purpose-built appliances, delivered to you as a capability you attach at the vNIC. The result is much higher CPS and active connection scale for virtual firewalls, load balancers, and switches.

## Sizing the feature per NIC with Auxiliary SKUs

You pick a performance tier per NIC using Auxiliary SKU values. Today the tiers are A1, A2, A4, and A8. These map to increasing capacity for total simultaneous connections and CPS, so you can right-size cost and performance to the NIC's role. As discussed in my chat with Yusef, the mnemonic is simple: A1 maps to 1 million connections, A2 to 2 million, A4 to 4 million, and A8 to 8 million per NIC, along with increasing CPS ceilings. Choose the smallest tier that clears your peak, then monitor and adjust. Pricing is per hour for the auxiliary capability.

Tip: Start with A1 or A2 on the ingress and egress NICs of your NVAs, observe CPS and active session counters during peak events, then scale up only if needed.

## Where to enable it

You can enable Accelerated Connections through the Azure portal, CLI, PowerShell, Terraform, or templates. The setting is applied on the network interface. In the portal, export the NIC's template and you will see the two properties you care about: auxiliaryMode and auxiliarySku. Set auxiliaryMode to AcceleratedConnections and choose an auxiliarySku tier (A1, A2, A4, A8).

Note: Accelerated Connections is currently a limited GA capability. You may need to sign up before you can configure it in your subscription.
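If you prefer code over the portal, here is a sketch using the Azure SDK for Python. It assumes a recent azure-mgmt-network package whose NetworkInterface model exposes auxiliary_mode and auxiliary_sku (the snake_case counterparts of the auxiliaryMode/auxiliarySku template properties mentioned above); the resource names are placeholders, and you should confirm the property names and supported API version for your SDK before running it.

```python
"""Sketch: enable Accelerated Connections on an existing NIC with the Azure SDK for Python."""
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-nva-prod"          # placeholder
NIC_NAME = "nva-data-nic-01"            # placeholder: a data-path NIC, not the management NIC

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Read the current NIC configuration, set the auxiliary properties, and push the update.
nic = client.network_interfaces.get(RESOURCE_GROUP, NIC_NAME)
nic.auxiliary_mode = "AcceleratedConnections"   # maps to auxiliaryMode in the exported template
nic.auxiliary_sku = "A1"                        # start small: A1 / A2 / A4 / A8

poller = client.network_interfaces.begin_create_or_update(RESOURCE_GROUP, NIC_NAME, nic)
updated = poller.result()
print(updated.auxiliary_mode, updated.auxiliary_sku)
```

Remember that the change only takes effect within the stop/start or redeploy windows described in the next section, so schedule it as a maintenance event.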
## Enablement and change windows

- Standalone VMs. You can enable Accelerated Connections with a stop then start of the VM after updating the NIC properties. Plan a short outage.
- Virtual Machine Scale Sets. As of now, moving existing scale sets onto Accelerated Connections requires re-deployment. Parity with the standalone flow is planned, but do not bank on it for current rollouts.
- Changing SKUs later. Moving from A1 to A2 or similar also implies a downtime window. Treat it as an in-place maintenance event.

Operationally, approach this iteratively. Update a lower-traffic region first, validate, then roll out broadly. Use active-active NVAs behind a load balancer so one instance can drain while you update the other.

## Operating guidance for IT Pros

- Pick the right NICs. Do not enable it on the management NIC. Focus on the interfaces carrying high connection volume.
- Baseline and monitor. Before enabling, capture CPS and active session metrics from your NVAs. After enabling, verify reductions in connection drops at peak. The point is stability under pressure.
- Capacity planning. Start at A1 or A2. Move up only if you see sustained saturation at peak. The tiers are designed so you do not pay for headroom you do not need.
- Expect a tiny latency increase. There is another hop in the path. In real application flows, the benefit of fewer drops and higher CPS outweighs the added microseconds. Validate with your own A/B tests.
- Plan change windows. Enabling on existing VMs and resizing the Auxiliary SKU both involve downtime. Use active-active pairs behind a load balancer and drain one side while you flip the other.

## Why this matters

Customers in regulated and high-traffic industries like health care often found that connection scale forced them to horizontally expand NVAs, which inflated both cloud spend and licensing and complicated operations. Offloading the SDN policy work to dedicated hardware allows you to process many more connections on fewer instances, and to do so more predictably.

## Resources

- Azure Accelerated Networking overview: https://learn.microsoft.com/azure/virtual-network/accelerated-networking-overview
- Accelerated connections on NVAs or other VMs (Limited GA): https://learn.microsoft.com/azure/networking/nva-accelerated-connections
- Manage accelerated networking for Azure Virtual Machines: https://learn.microsoft.com/azure/virtual-network/manage-accelerated-networking
- Network optimized virtual machine connection acceleration (Preview): https://learn.microsoft.com/azure/virtual-network/network-optimized-vm-network-connection-acceleration
- Create an Azure Virtual Machine with Accelerated Networking: https://docs.azure.cn/virtual-network/create-virtual-machine-accelerated-networking

## Next steps

1. Validate eligibility. Confirm your subscription is enabled for Accelerated Connections and that your target regions and VM families are supported (see the Learn article above).
2. Select candidate workloads. Prioritize NVAs or VMs that hit CPS or flow-table limits at peak. Use existing telemetry to pick the first region and appliance pair.
3. Pilot on one NIC per appliance. Enable on the data-path NIC, start with A1 or A2, then stop/start the VM during a short maintenance window. Measure before and after.
4. Roll out iteratively. Expand to additional regions and appliances using active-active patterns behind a load balancer to minimize downtime.
5. Right-size the SKU. If you observe sustained headroom, stay put. If you approach limits, step up a tier during a planned window.