# Secure Medallion Architecture Pattern on Azure Databricks (Part II)
Disclaimer: The views in this article are my own and do not represent Microsoft or Databricks.

This article is part of a series focused on deploying a secure Medallion Architecture. The series follows a top-down approach, beginning with a high-level architectural perspective and gradually drilling down into implementation details using repeatable code. In this part we discuss the implementation of the pattern using GitHub Copilot. If you missed it, please read the first part of this blog series first: Secure Medallion Architecture Pattern on Azure Databricks (Part I).

I waited a while before publishing this article, partly due to other priorities, but also because I wanted to experiment with deploying infrastructure and data pipelines using agents. At that point, I was looking to leverage agents with a spec-driven approach, and through using GitHub Copilot, I learned what skills are and how I can use them to achieve my goal. In this blog I'll share what I learned using GitHub Copilot for spec-driven development. I'll use the content from my previous article, Secure Medallion Architecture Pattern on Azure Databricks (Part I), as a technical specification to extract implementation details and generate two outputs:

- Terraform code for infrastructure, platform configuration, and deployment
- Databricks Declarative Automation Bundles for jobs, pipelines, and other deployment-ready workload resources

I've tried not to overfit the prompts within the skills I've developed, so they remain portable to other technical articles, not just the one mentioned in this blog.

## Separate the platform from the workload

When I started the design, I decided to modularise the automation scripts by separating the platform from the actual data platform workloads. I assigned networking, storage, identities, secret scopes, and workspace configuration to Terraform, while Databricks notebook runs, job clusters, pipelines, and environment-specific deployments were developed within Databricks Declarative Automation Bundles (formerly known as Databricks Asset Bundles).

That may sound obvious, but it's exactly where generated code often goes wrong. Without explicit instructions, AI tools tend to blur these boundaries and produce one oversized block of configuration. That's why my Copilot skill needs to enforce a clear contract by:

- Inferring the architecture from the article
- Identifying what is explicit and what is assumed
- Emitting Terraform only for infrastructure concerns
- Emitting bundle files only for workload concerns
- Leaving placeholders for anything the article does not specify

That last point is critical. A blog post or low-level technical specification is not a source of truth for account IDs, hostnames, catalog names, secret values, or subnet IDs. Good automation should never fabricate those values. Instead, I decided to produce a starter implementation with TODO markers wherever environment-specific values are required.

Skills are a great way to get more consistent, repeatable output across runs, so I decided to use them for this project. I could have used one of the tools listed in the table below, but I chose to go my own way and develop a Spec-Driven Development (SDD) framework, which I hope will continue to improve over time.

| Tool | Creator | Type | Link | Description |
| --- | --- | --- | --- | --- |
| GitHub Spec Kit | GitHub | Open source | github/spec-kit | Turns feature ideas into specs, plans, and task lists before any code is written. Works with multiple AI coding agents. Specification first, code as generated output. |
| BMAD Method | BMad Code LLC | Open source | bmad-code-org/BMAD-METHOD | An AI-driven agile framework with specialised agents covering the full lifecycle from ideation to deployment. Scale-adaptive: adjusts planning depth from a bug fix to an enterprise system. |
| OpenSpec | Fission AI | Open source | Fission-AI/OpenSpec | Lightweight spec layer that sits above your existing AI tools. Each change gets a proposal, specs, design, and task list. No rigid phase gates, no IDE lock-in. |

## What are skills, and why are they a good fit?

Skills are essentially reusable prompt modules that aim to force LLMs to produce repeatable answers. Within a skill, I define the behavior and then attach supporting resources or scripts so Copilot can perform the task consistently. That means a skill can do more than just "write some code." A skill can define a repeatable workflow like this:

1. Fetch the blog URL
2. Extract headings, paragraphs, and code snippets
3. Normalize the article into a lightweight implementation spec
4. Decide what belongs in Terraform
5. Decide what belongs in the Databricks bundle
6. Generate files in a predictable project structure
7. Produce a TODO.md file for unresolved values

This approach turns Copilot from a generic assistant into a specialized code-conversion tool. However, there are some constraints I had to be mindful of when developing skills:

- Context window limits. The model has limited space to read instructions, process input, and generate output. Long prompts can cause files to be cut off or steps to be skipped.
- Non-determinism. Output may vary between runs, even with strict instructions. I always lint, validate, and review the diff before committing.
- Boundary leakage. Models may invent plausible but incorrect values. The TODO.md pattern must be enforced as a rule, not a suggestion.
- Model and tool drift. Copilot's model and tool surface change over time. I use example inputs and outputs as repeatable sanity checks.
- Maintainability. A skill is code-as-prompt and will age with the platforms it targets. I keep skills narrowly scoped so they stay easy to update.

I'll explain the TODO.md file in more detail later in this post.

## The GitHub repo

The repository can be found at MarcoScagliola/CopilotBlogToCode.

Below you will find a script I have added that, when invoked, deletes all the files produced by the skills, so you can test the repo from a clean state.

```bash
python .github/skills/blog-to-databricks-iac/scripts/reset_generated.py --force
```

If you want to try it out, please clone the repo and run it on your copy. In GitHub Copilot, I usually keep:

- Model set to Auto
- For the configured tools, just the built-in tools selected

Below you can find the prompt that I use to run the skill and have the blog analysed:

```
Use the blog-to-databricks-iac skill on this article:
https://techcommunity.microsoft.com/blog/analyticsonazure/secure-medallion-architecture-pattern-on-azure-databricks-part-i/4459268

Inputs:
workload: blg
environment: dev
azure_region: uksouth
github_environment:
```

To make this more repeatable and less manual, I've added a prompt file at run-blogToDatabricksIac-selected-tools.prompt.md, which can be run directly from VS Code by opening the file and clicking the run button at the top. Feel free to experiment with it and let me know what you think. Further instructions on how to use the repo are available in READ_FIRST.md.

Below you will find the exact repository setup I used for this workflow, starting with my initial configuration and ending with the final directory structure and files.
## 1. Create a new GitHub repository and clone it locally

I started by creating a new repository on GitHub, then cloned it to my local machine so I could add the Copilot skill, Terraform scaffolding, and Databricks bundle files in a centralized location.

```bash
git clone https://github.com/YOUR-ORG/blog-to-databricks-iac.git
cd blog-to-databricks-iac
```

This approach keeps the workflow organised from the start: the repository exists on GitHub first, and the local clone becomes the working directory for all subsequent setup steps.

## 2. Create the GitHub skill folder structure (first iteration)

GitHub Copilot skills are file-based and centered on a SKILL.md file inside a skill folder. GitHub's current pattern places these under .github/skills/. I used the script below to create the folder hierarchy for my initial integration.

```bash
mkdir -p .github/skills/blog-to-databricks-iac/scripts
mkdir -p .github/skills/blog-to-databricks-iac/templates
mkdir -p infra/terraform
mkdir -p databricks-bundle/resources
mkdir -p databricks-bundle/src
```

This script generates the structure depicted below.

## 3. Add the main skill definition

Next, I created the SKILL.md file at .github/skills/blog-to-databricks-iac/. The orchestrator decides what happens and in what order, while each specialist decides what its own file should contain (for example, the Terraform specialist owns the Terraform, the bundle specialist owns the bundle, and so on). In practice, SKILL.md turns Copilot from a general assistant into a domain-specific generator for this repo. GitHub documents this SKILL.md-based structure as the foundation of agent skills. My first iteration of .github/skills/blog-to-databricks-iac/SKILL.md was very simple and can be found here.

## 4. Add a script to fetch and normalize the blog article

Next, I created a Python script that the main orchestrator SKILL.md invokes to read the blog article. This script is stored at .github/skills/blog-to-databricks-iac/scripts/ and named fetch_blog.py. Within SKILL.md, the script is invoked as shown below.

### 1. Fetch article

```bash
python .github/skills/blog-to-databricks-iac/scripts/fetch_blog.py "<url>"
```

If fetch fails, stop and return the fetch error output. Do not retry; surface the error to the user and wait for guidance.

The script validates the URL, fetches the HTML with a 30-second timeout, and uses a spoofed Mozilla User-Agent to avoid being blocked by CDNs (Content Delivery Networks). It reads through the HTML one tag at a time, flagging when it enters relevant sections like paragraphs, headings, or code blocks, and buffering text until the tag closes. Before storing anything, it cleans the text by decoding HTML entities, collapsing whitespace, and trimming edges. As it parses, the script also scans for cloud platform keywords: AWS, S3, Azure, ADLS, GCP, Google Cloud. The first match wins; if none are found, it returns unknown. This is a quick heuristic, not authoritative. Finally, it outputs clean JSON with the extracted data: title, headings, paragraphs, code blocks, and cloud hint, capped at reasonable sizes to keep the output manageable. If anything goes wrong, such as a network error, timeout, bad HTML, or empty content, the script exits cleanly with a structured error message, making it easy to integrate into larger workflows without surprises. The Python script can be found here.
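To make the parsing approach concrete, here is a minimal, simplified sketch of the same ideas using only Python's standard library. It is an illustration of the behaviour described above, not the repository's actual fetch_blog.py, which also handles URL validation, timeouts, and size caps.

```python
# Simplified sketch of the fetch/normalize approach described above.
import json
import re
from html.parser import HTMLParser

CLOUD_KEYWORDS = ["AWS", "S3", "Azure", "ADLS", "GCP", "Google Cloud"]

class ArticleParser(HTMLParser):
    RELEVANT = {"h1", "h2", "h3", "p", "pre", "code"}

    def __init__(self):
        # convert_charrefs=True decodes HTML entities for us
        super().__init__(convert_charrefs=True)
        self.current = None   # tag we are currently buffering
        self.buffer = []
        self.sections = []    # list of (tag, cleaned_text)

    def handle_starttag(self, tag, attrs):
        if tag in self.RELEVANT and self.current is None:
            self.current, self.buffer = tag, []

    def handle_data(self, data):
        if self.current is not None:
            self.buffer.append(data)

    def handle_endtag(self, tag):
        if tag == self.current:
            # collapse whitespace and trim edges before storing
            text = re.sub(r"\s+", " ", "".join(self.buffer)).strip()
            if text:
                self.sections.append((tag, text))
            self.current = None

def cloud_hint(sections):
    text = " ".join(t for _, t in sections)
    # quick heuristic: first keyword match wins, otherwise "unknown"
    return next((k for k in CLOUD_KEYWORDS if k in text), "unknown")

parser = ArticleParser()
parser.feed("<h1>Title</h1><p>Deployed on Azure with ADLS Gen2.</p>")
print(json.dumps({"cloud_hint": cloud_hint(parser.sections),
                  "sections": parser.sections}))
```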
## 5. The output and output contract

Now I needed to think about the output I wanted GitHub Copilot to deliver through the skills. To reiterate, I needed the following:

| File | Description |
| --- | --- |
| README.md | This is the operator-facing runbook that turns the generated artifacts into a working deployment. It contains no unresolved placeholders and no embedded credentials. The header summarizes the architecture and links back to the source blog. A prerequisites section lists required Azure access, Entra permissions, GitHub Environment setup, and local CLI versions. It includes tables of always-required GitHub secrets and variables, plus conditional ones based on deployment mode. Step-by-step numbered sections walk through bootstrapping the deployment principal and populating the GitHub Environment. Workflow blocks describe each Terraform validation, infrastructure deployment, and DAB deployment step, including file paths, triggers, and outputs. A commands section lists the exact Terraform and Databricks bundle sequences to run. Finally, assumption notes point the operator to TODO.md and SPEC.md for context. |
| TODO.md | The operator's checklist of remaining tasks. It uses a strict five-section format (Heading, What this is, Why deferred, Source, Resolution, Done looks like) with no commands or code, only concepts and decisions. Each section captures a different layer of post-deployment work: pre-deployment tasks like RBAC roles and GitHub secrets, deployment-time inputs like region and environment, post-infrastructure setup like Key Vault secrets and external locations, post-DAB work like Unity Catalog grants and job schedules, and architectural choices the orchestrator couldn't make (network posture, schemas, partitioning). Every entry comes from something the article left unstated, plus the universal post-deploy work for any Databricks deployment. The operator works through TODO.md sequentially, resolving each item before the system is production-ready. |
| SPEC.md | The structured, source-faithful read of the blog article, organized by checklist. Every item is marked as a stated value, inferred from code or diagrams, or "not stated in article." It includes architecture details, Azure services configuration, Databricks setup, data model, security and identity requirements, and observations. SPEC.md is the single source of truth that the Terraform and DAB generators read from, TODO.md is populated from every "not stated" entry, and README.md references it for assumptions. This ensures the deployment is built on documented decisions, not hidden assumptions. |

Together, these files create a clear boundary: SPEC.md answers what the blog says, TODO.md captures what's missing or must be decided, and README.md tells you exactly how to deploy. This split is enforced by validation rules that fail if any content duplicates across the three files.

To make these files as repeatable as possible, I needed two things:

- Two templates, one for README.md and one for TODO.md, that the orchestrator fills in from SPEC.md at generation time.
- A broader delivery contract, output-contract.md, which lists the five files the orchestrator must produce. README.md and TODO.md are two of those five, and the templates are how they get produced.

The output-contract.md file defines a strict, ordered format that the agent must follow when transforming a blog article about Databricks-on-Azure architecture into a runnable repository. The first commit was deliberately minimal, as you can see from the file available here. No leaf-skill routing, no repo-context.md, no GitHub Actions workflows, no validation rules, no entry-field templates for TODO.md.
That commit's single job was to lock down the shape of the output: what gets produced and in what order. Every commit since has refined how to produce that shape without changing what gets produced. Putting the contract in the very first commit gave every later change a fixed reference point. Every leaf skill, generator script, and validation rule I've added since has fit into one of its five sections. The pipeline has changed; the deliverables haven't. The structure of the GitHub repo at commit 17ab443 can be seen in the diagram below.

## 6. The README.md and TODO.md templates

After iteratively working on the orchestrator, a clear pattern emerged: the code-generation paths were fairly stable, but the documentation outputs weren't. Every run produced README.md and TODO.md from scratch in free-form Markdown. Across runs, the same content kept drifting. Section ordering changed between runs and the explanation of GitHub Environments was rewritten with subtle wording differences. RBAC roles appeared sometimes as lists, sometimes in prose, sometimes split across sections. Universal post-deploy actions (create the secret scope, populate the vault, set up Unity Catalog grants) were re-derived every time, occasionally with steps missing. The root cause was that the orchestrator was treating durable, universal content as if it were per-run content.

So I decided to add two templates: README.md.template and TODO.md.template. Templates separate universal content (RBAC, TODO sections, GitHub setup), which lives in the template, from per-workload content (catalog names, credentials), which is substituted from SPEC.md. This delivers consistency across runs. The README and TODO are structurally identical between runs, so readers can navigate them intuitively. Universal content is correct by construction; I write it once, review carefully, and every run inherits that quality. Validation also becomes more precise, and the agent's job shrinks from open-ended writing to mechanical substitution, which is easier to validate and maintain.

Templates introduce clear vocabulary: a {placeholder} is filled either by the orchestrator at generation time or by the deployer at run time. Finally, templates enforce traceability: every "not stated in article" entry in SPEC.md automatically becomes a TODO entry via the "from SPEC.md" slot, making this an automatically-enforced rule.

I'm invoking the templates in the orchestrator as shown below. The Git commit with this code can be found at this link.

### 3.1 Generate README from template

Load the template: `.github/skills/blog-to-databricks-iac/templates/README.md.template`

### 3.2 Generate TODO from template

Load the template: `.github/skills/blog-to-databricks-iac/templates/TODO.md.template`
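Mechanically, that substitution step is simple. The sketch below is a hypothetical illustration of how {placeholder} slots could be filled from SPEC.md-derived values; the field names and the rendering helper are mine, not the repo's exact implementation.

```python
# Hypothetical sketch of the template-substitution step: universal content
# lives in the template files, per-workload values come from the parsed
# SPEC.md. Field names and target paths are illustrative.
import re
from pathlib import Path

TEMPLATE_DIR = Path(".github/skills/blog-to-databricks-iac/templates")

# Values the orchestrator extracted from SPEC.md (illustrative)
spec_values = {
    "workload": "blg",
    "environment": "dev",
    "azure_region": "uksouth",
    "catalog_name": "TODO: not stated in article",
}

def render(template_name: str, target: str) -> None:
    raw = (TEMPLATE_DIR / template_name).read_text()
    # Replace {name} slots; unknown slots are left untouched so a later
    # validation pass can flag anything that was never resolved.
    body = re.sub(
        r"\{(\w+)\}",
        lambda m: str(spec_values.get(m.group(1), m.group(0))),
        raw,
    )
    Path(target).write_text(body)

render("README.md.template", "README.md")
render("TODO.md.template", "TODO.md")
```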
## 7. The output of fetch_blog.py and the interaction with the orchestrator

When the orchestrator invokes fetch_blog.py, the script produces a JSON output and passes it back to the orchestrator. The orchestrator then reads the JSON document into its working context and maps each field onto an analysis checklist. The title and meta description establish the article identity and scope. Headings with their levels reveal the structure, helping the agent locate sections about architecture, security, data flow, and naming. Paragraphs provide evidence for stated values like regions, resource types, and RBAC models. Code blocks become the source of inferred values. As an example, a Terraform snippet might reveal SKU choices or naming patterns not mentioned in the text. These inferred values get tagged "inferred from code snippet" when recorded. The cloud hint acts as a sanity check that the article actually describes an Azure architecture.

For every checklist item, the agent records either an extracted value or the literal string "not stated in article". This becomes SPEC.md, the single source of truth for everything downstream. SPEC.md drives every subsequent step. Steps 3 through 7 (the Terraform module, workflows, and Databricks bundle generators) read architectural decisions from it. Step 8 then produces TODO.md by converting every "not stated in article" entry into a TODO item the operator must resolve before deployment.

What I find worth pointing out is how little the output contract has actually moved since that very first commit. The implementation underneath has changed completely. Leaf skills emerged, generator scripts came in, validation rules got added, a soft-delete state machine showed up to handle Key Vault recovery. None of those existed at the start. But what the orchestrator delivers, the list of files it puts on disk, has stayed exactly the same. We have a much larger SKILL.md today that still mirrors the initial five-item output list. The contract itself has changed by exactly one line: the addition of "Design of the architecture" to section 5.

- SPEC.md: the structured, source-faithful read of the article, organised by the analysis checklist (link)
- TODO.md: the operator's checklist of everything the article didn't specify, plus the universal post-deploy actions (link)
- Terraform code under infra/terraform/: the platform layer with networking, storage, identities, Key Vault, workspace (link)
- Databricks Asset Bundle under databricks-bundle/: the workload layer with jobs, entry points, environment configuration (link)
- README.md: the operator runbook, with the architecture design diagram embedded (link)

If the JSON contains an error, the orchestrator stops immediately. Per the skill rule "If fetch fails, stop and return the fetch error output. Do not retry," the error surfaces to the user rather than propagating downstream. So the script's output is the raw evidence pack: title, structure, prose, code, cloud hint. The agent uses it to fill the architecture spec, which parameterises every generated artifact. At this point the fetch_blog.py output is sent to Step 2 of the orchestrator, as shown in the snippet below.

### 2. Analyse article

Analyse the fetched article against the structured checklist in `.github/skills/blog-to-databricks-iac/references/blog-analysis-checklist.md`. The analysis covers the article text, diagrams, screenshots, and code snippets.

And, much later in the orchestrator, Step 8 closes the loop by turning everything that's been recorded into the two operator-facing documents:

### 8. Generate README and TODO from templates

Use the templates in `.github/skills/blog-to-databricks-iac/templates/`:

- `README.md.template` -> `README.md`
- `TODO.md.template` -> `TODO.md`

## 8. How this actually came together

What I've described so far is how the orchestrator works today. The reality of building it was much more cumbersome, but also fun. I got from the first version to the current one by iterating: rerun the orchestrator, find the defect, identify the rule that would have caught it, add the rule to the skill that owns the artifact, rerun.
The reason I'm calling this out now, before walking through the rest of the pipeline, is that everything from this point on is a story about a specific lesson learned that way. The leaf skills exist because a single SKILL.md got too dense. The restricted-tenant guardrails exist because the deployment failed against a tenant that couldn't read Microsoft Graph. The validation harness exists because prose rules weren't catching the regressions that mattered. The soft-delete state machine exists because the same vault name kept colliding with a previous deploy. None of these rules were present from day one. So in the next sections I'll walk through how the pipeline actually matured: how the single skill split into a graph, what the inner regenerate-fix loop felt like in practice, the day the project pivoted to support restricted tenants, the bugs that became rules, and the Key Vault soft-delete state machine that closed the project out.

## 9. From a single skill to a skill graph

When I started, everything lived inside a single SKILL.md. It was simpler that way, and to be honest, at that point I didn't yet know which rules would actually matter. But as I kept rerunning the orchestrator on the article, a pattern emerged. Each rerun produced something that broke in a slightly different way, and the fix always belonged to a very specific concern: Terraform authoring, bundle structure, workflow generation, or the orchestration logic itself. Stuffing the rules for all of them into one file was making the orchestrator unreadable and, worse, was silently dropping rules when the context window got tight.

So I split it. The orchestrator stayed at the top, kept routing the work and validating the result, and each concern got promoted to its own leaf skill. The Databricks bundle skill itself ended up needing one more split a few days later; it had become too dense, so I broke it into two leaves:

- databricks-yml-authoring (link)
- python-entrypoints (link)

The diagram below shows the shape the repo has today. The orchestrator now does almost no authoring. It owns the sequence of steps, the contract, and the validation gates, while everything else is delegated. This was the single biggest readability win. I wish I'd done it earlier. REPO_CONTEXT.md is one extra node in that diagram that I want to call out, but I'll come back to it in section 13.

## 10. The inner loop: rerun, fail, fix the skill

If I had to describe the middle of this project in one sentence, it would be: every commit was a regeneration. I'd run the orchestrator end-to-end against the article and inspect the generated Terraform, the bundle, and the workflows. I'd find a defect, identify the rule that would have prevented it, add that rule to the skill that owns the artifact, then rerun, as shown in the image below.

This loop is what I think people miss when they treat AI-generated infrastructure code as a one-shot. The first run is never the deliverable. The deliverable is the skill that produces good runs. The generated files are disposable and can always be reproduced. The skill is what carries the knowledge forward. I had to actively resist the temptation to fix bugs in the generated code directly. Patching infra/terraform/main.tf by hand fixes today's run but not tomorrow's, because the rule that would prevent the bug doesn't exist anywhere. So I made it a discipline: never edit the output, always edit the skill, then regenerate.
## 11. Restricted-tenant compatibility

The bug was simple to describe and brutal to fix: the deployment principal in the target tenant couldn't read Microsoft Graph. Any Terraform data source that resolved an Entra name to an object ID at plan time (e.g., azuread_user, azuread_group, azuread_service_principal) blew up at terraform plan. My first instinct was to think "I'll just give the principal Graph permissions". But in a lot of real environments this is not possible. The principal that runs your IaC is governed by a security team, the team has a policy, and the policy says no Graph reads.

The pivot was getting the skill to produce Terraform that never reads Graph. Object IDs are inputs, not lookups. They come in as trusted secrets, the workflow exports them as TF_VAR_*, and Terraform consumes them as variables. No `data "azuread_*"` block is allowed in the generated code, ever.

I thought this was a simple fix. It wasn't. It cascaded into about six other things:

- App Registration vs Service Principal object IDs. The workflow was being given the wrong one. Role assignments need the Enterprise Application (Service Principal) object ID, not the App Registration object ID. The two are different objects in Entra with different IDs. I encoded the distinction in the skill as *_SP_OBJECT_ID (the Service Principal) versus *_CLIENT_ID (the App Registration's application ID). Naming carries the meaning now, so the wrong value is hard to pass.
- Single-principal mapping. In some tenants you only have one principal and it has to play both the deployment and runtime roles. The skill grew a layer_sp_mode = existing input so the generator stops trying to create a new Service Principal and reuses the deployment one instead.
- Key Vault access policies, gone. Access policies were Graph-touching, and not all tenants support them anyway. The skill switched fully to RBAC role assignments (Key Vault Secrets User, and so on). A few cascading bugs followed, but this was the right call.

It took some time to harden the Terraform skill against everything the restricted tenant was throwing back. Each iteration had the same shape: the orchestrator runs, hits a fresh provider error, I add the rule, run again, and hit the next one. The commit subjects from that run are basically a transcript of the conversation I was having with the platform.
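The "no Graph reads" rule is exactly the kind of rule that can also be enforced mechanically. Below is a minimal, hypothetical guardrail sketch, assuming the generated Terraform lives under infra/terraform/; it is not one of the repo's actual checks.

```python
# Hypothetical executable guardrail for the restricted-tenant rule:
# no `data "azuread_*"` block may appear in generated Terraform.
import re
import sys
from pathlib import Path

FORBIDDEN = re.compile(r'data\s+"azuread_\w+"')

violations = []
for tf_file in Path("infra/terraform").rglob("*.tf"):
    for lineno, line in enumerate(tf_file.read_text().splitlines(), start=1):
        if FORBIDDEN.search(line):
            violations.append(f"{tf_file}:{lineno}: {line.strip()}")

if violations:
    print("Graph-reading data sources found; object IDs must be inputs:")
    print("\n".join(violations))
    sys.exit(1)  # fail the generation run

print("OK: no azuread data sources in generated Terraform.")
```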
## 12. The bugs that became rules

There are three bugs that I believe are worth telling the story of, because they each illustrate a slightly different lesson.

The HCL trim() arity bug. The generator emitted trim(var.something) in a validation block. HCL's trim() takes two arguments, not one. The function I actually wanted was trimspace(). This is the kind of bug that any human would catch in a code review in two seconds, and which the model produced confidently because the shape of the call looked right. I added the rule to the Terraform skill ("for whitespace trimming use trimspace, never trim") and the bug never came back. Lesson: even for trivial syntactic mistakes, the fix belongs in the skill.

The variable shadowing bug. The deploy workflow had a job-level env: block that set TF_VAR_key_vault_recover_soft_deleted to a static value. A detection step earlier in the workflow was supposed to compute the right value at runtime and write it via $GITHUB_ENV. The problem is that GitHub Actions resolves job-level environment variables before $GITHUB_ENV writes take effect, so the static value always won and the dynamic one was silently ignored. The fix was to never set the recovery flag at job level. It must be written in the detection step, on every code path, including the trivial "no recovery needed" path. Lesson: state must be explicit, not inherited. If a flag has three possible meanings, three code paths must each write it.

The hardcoded -platform suffix. The workflow had a shell-side suffix that someone (let's be honest, the model) had invented to make the resource group name "look right". When recovery logic started running and the workflow looked for the canonical resource group, it looked for -platform instead of whatever the Terraform locals.tf actually emitted. The result was that the recovery handler was happily reaching past the real resource group and into a different one. I made it a rule in the orchestrator: workflow-invented suffixes are not permitted. Naming is owned by Terraform's locals.tf.

There are seventeen more defects in the catalogue, and the pattern is the same in every case. The bug surfaces, the rule gets written, the rule lives in the skill that owns the affected artifact. There is no implementation-learnings.md in the repo. There used to be, but I've deleted it because a tracked log of past bugs, sitting next to a skill that's already supposed to encode the lessons from those bugs, is a duplication waiting to drift. I believe that if the rule is in the skill, the log is redundant. If the rule isn't in the skill, the log is evidence that I haven't finished the work. Either way, the right place for bug history is git log.

## 13. Splitting "the skill" from "this repo's defaults"

I then wanted the orchestrator to be portable, but every run kept needing the same handful of decisions. Which Azure region by default? Which environment names? Which catalog naming convention? These weren't part of the article. They weren't part of the Terraform skill either. They were specific to this repository's opinion about how things should be deployed. If I baked them into the orchestrator, the orchestrator stopped being portable. If I left them out, every run produced unhelpful "not stated in article" entries for the same five universal decisions.

The answer was a new file called REPO_CONTEXT.md stored in the repo root. It's read by the orchestrator before generation and it carries the defaults that are owned by the repo, not by the skill. The split looks like this in practice:

- SKILL.md answers the question "how do I turn an article into a runnable repo?" It is portable.
- REPO_CONTEXT.md answers the question "what does this repo default to when the article doesn't say?" It is local.

Cloning the orchestrator into another GitHub project is now a clean operation. You take the skill, you write your own REPO_CONTEXT.md, and the same generator produces output appropriate to your environment.

## 14. The validations

Most of the rules I'd written into the skills were prose. "Don't invent suffixes." "Object IDs are inputs, not lookups." "Every required Terraform variable must have a matching TF_VAR_* in the workflow." The model is good at following prose rules most of the time. So a few of the most regression-prone rules became executable. The most important one is scripts/validate_workflow_parity.sh. Every variable declared in infra/terraform/variables.tf must appear as a TF_VAR_* export in the deploy workflow. The script greps both files, diffs the sets, and exits non-zero if they don't match. It is run at the end of generation. If it fails, the run failed, even if everything else looks fine.
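The repo's check is a shell script; the set-difference logic it performs looks roughly like this Python sketch. The workflow path is assumed for illustration.

```python
# Sketch of the variables/workflow parity check. The repo implements this
# as scripts/validate_workflow_parity.sh in bash; this version only
# illustrates the same set-difference logic.
import re
import sys
from pathlib import Path

variables_tf = Path("infra/terraform/variables.tf").read_text()
workflow = Path(".github/workflows/deploy.yml").read_text()  # path assumed

declared = set(re.findall(r'variable\s+"(\w+)"', variables_tf))
exported = set(re.findall(r"TF_VAR_(\w+)", workflow))

missing = declared - exported
if missing:
    print("Variables declared in Terraform but never exported as TF_VAR_*:")
    print("\n".join(sorted(missing)))
    sys.exit(1)  # a failed check fails the whole generation run

print("OK: every Terraform variable has a matching TF_VAR_* export.")
```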
This caught real bugs. The most embarrassing was a variable I'd added to variables.tf and forgot to wire through the workflow. Terraform plan would prompt interactively for it on a non-interactive runner, and the run would hang. The rule of thumb I've ended up with is: prose rules are the default, but if a rule has been violated more than twice, it gets promoted to an executable check. There's a short list of those checks now, and it's the load-bearing one.

## 15. Key Vault soft-delete state machine

Key Vaults in Azure have soft delete on by default. When you delete a vault, it sticks around for ninety days in a "soft-deleted" state. If you try to create a vault with the same name in the same subscription during that window, the deploy fails. The right behaviour is to recover the soft-deleted vault, not create a new one.

The first version of my recovery handler covered exactly one case: if the vault is soft-deleted, recover it. This worked the first time I ran it. The second time, the recovered vault came back into the previous resource group, not the new one I had just created. Terraform then tried to create a new vault in the correct resource group and failed because the name was already taken globally. The handler had no concept of "the recovered vault is in the wrong resource group." So I added that case. The third time, the previous resource group itself was gone, and the handler was looking for it to verify the move. So I added that case too. By the end, the state machine had three distinct cases and two preconditions, as shown in the diagram below.

The reason I keep coming back to this state machine is that it captures something that I think is generally true about agent-generated infrastructure code. The happy path is easy and meaningless, while the value is in the failure modes. The first version that worked on a clean tenant was about ten lines of bash. The version that works on a tenant that has been deployed-into and partially-torn-down five times is six times longer, and every additional line of it corresponds to a real environmental condition that I had to learn the hard way.
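The repo's handler is bash built around the Azure CLI; condensed, the decision logic looks roughly like the sketch below. The CLI subcommands (az keyvault list-deleted, recover, show) are real, but the control flow is a simplified illustration of the three cases, not the actual handler, and all names are placeholders.

```python
# Simplified sketch of the soft-delete decision logic. The real handler
# is bash around the Azure CLI and writes its result (the recovery flag)
# to $GITHUB_ENV on every code path.
import json
import subprocess

def az(*args):
    result = subprocess.run(["az", *args, "--output", "json"],
                            capture_output=True, text=True, check=True)
    return json.loads(result.stdout) if result.stdout.strip() else None

def ensure_vault(name: str, target_rg: str) -> str:
    deleted = az("keyvault", "list-deleted",
                 "--query", f"[?name=='{name}']") or []
    if not deleted:
        return "create"  # case 1: no soft-deleted vault, plain create

    try:
        # The name is reserved globally during the retention window,
        # so the soft-deleted vault must be recovered, not recreated.
        az("keyvault", "recover", "--name", name)
    except subprocess.CalledProcessError:
        # case 3: recovery blocked, e.g. the previous resource group
        # no longer exists and must be handled first
        return "recover-blocked"

    vault = az("keyvault", "show", "--name", name)
    if vault.get("resourceGroup") != target_rg:
        # case 2: recovered into the *previous* resource group
        return "recovered-wrong-rg"
    return "recovered"
```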
## 16. What I've learned so far

I'm not going to pretend the full list of principles below was clear to me on day one. Every single one of these was learned by getting it wrong first. Looking back at the history, though, they are the ones that survived contact with reality.

- The contract precedes the implementation. output-contract.md was committed before any generator existed. Locking the shape of the deliverable first meant every later change had a fixed reference point.
- Generators, not stencils. Workflows are produced by Python scripts that take parameters and emit YAML. When restricted-tenant logic and the soft-delete state machine arrived, they needed conditional structure that a static template can't express.
- Every bug becomes a rule. Patching the generated code is a tax on tomorrow's run, while patching the skill is an investment.
- Each concern has a clear owner. The orchestrator routes, the leaves author, and the repo context holds the local defaults.
- Restricted-tenant compatibility is non-negotiable. No Microsoft Graph reads from generated Terraform. Object IDs are trusted inputs. Single-principal mapping is supported. Naming is owned by Terraform. No suffixes invented in shell. The validation harness enforces this.
- State must be explicit, not inherited. Every workflow run writes its own flags. No reliance on env defaults from a previous step or a previous run.
- Validation is executable when a rule has been violated more than twice. Prose rules are the default. Promotion to a script is earned.
- Operator docs describe concepts, not commands. Command syntax ages out, while conceptual descriptions don't. The TODO template enforces this rule.
- Add strong testing at the end of the process, once all the files are generated. Each run may produce slightly different output and introduce bugs, even if the previous run was successful.
- End-to-end runs against dirty tenants are the truth. The acceptance test isn't a clean-room deploy. It's a deploy into a tenant that has soft-deleted vaults, lingering RGs, and existing role assignments. Until that works, the project isn't done.
- From time to time, skills need to be reviewed and consolidated.

The summary above of the journey is the one I find most useful to share when people ask whether this approach actually goes anywhere. From an empty repo to a generator that produces a deployable, restricted-tenant-compatible infrastructure-as-code repository from a blog URL, with executable validation and a recovery state machine that survives a previously-deployed environment. The first commit was an empty workspace. The last commit was the one where the same orchestrator, run against the same blog, against a tenant carrying state from five previous runs, deployed cleanly with no manual intervention. That is what I was aiming to achieve when I started! Thanks for reading.

# Legacy SSRS reports after upgrading Azure DevOps Server 2020 to 2022 or 25H2
We are currently planning an upgrade from Azure DevOps Server 2020 to Azure DevOps Server 2022 or 25H2, and one of our biggest concerns is reporting. We understand that Microsoft's recommended direction is to move to Power BI based on Analytics / OData. However, for on-prem environments with a large number of existing SSRS reports, rebuilding everything from scratch would require significant time and effort.

Since the Warehouse and Analysis Services are no longer available in newer versions, we would like to understand how other on-prem teams are handling legacy SSRS reporting during and after the upgrade. Have you rebuilt your reports in Power BI, moved to another reporting approach, or found a practical way to keep existing SSRS reports available during the transition? Any real-world experience, lessons learned, or recommended approaches would be greatly appreciated.

# Resilient by Design: Azure Databricks Disaster Recovery Strategy
## Introduction: From Recovery Plans to Resilience Strategy

As organizations increasingly rely on Azure Databricks for mission-critical analytics and data engineering workloads, the need for robust disaster recovery (DR) strategies becomes paramount. These platforms are no longer just analytics engines; they power real-time decisions, AI models, and core business operations. Yet many organizations still approach DR as a reactive safeguard rather than a strategic capability. Resilience today is not about "if something fails," but about ensuring continuity, trust, and performance under any condition. A modern DR strategy must therefore evolve beyond backup configurations and failover scripts. It must align with business priorities, regulatory requirements, risk tolerance, and operational maturity to become a core pillar of the enterprise data platform.

In this context, organizations are increasingly adopting architecture patterns that enable cross-region resilience for the Azure Databricks Lakehouse. This pattern includes synchronizing Unity Catalog objects (catalogs, schemas, tables, views, functions, models, and volumes) across regions, combined with scalable data movement mechanisms and secure data access approaches such as Delta Sharing and high-performance transfer tools. To help organizations operationalize this approach today, we have defined a structured strategy for synchronizing Unity Catalog objects and associated data across regions, enabling a resilient-by-design Azure Databricks architecture. This post focuses on that approach, outlining the key architectural patterns, strategic considerations, and practical implementation steps required to design and enable cross-region resilience.

In October 2025, Databricks announced a Managed Disaster Recovery solution, developed in collaboration with Capital One, which includes managed replication, customer-specified failover, and read-only secondary capabilities. The approach outlined in this post serves as a complementary, customer-managed pattern, providing a practical and production-ready path for organizations to achieve robust disaster recovery and business continuity while Databricks continues to expand its native DR capabilities.

## Why Disaster Recovery for Azure Databricks is Different

Traditional disaster recovery approaches do not fully apply to modern Lakehouse platforms. In Azure Databricks, resilience must account for:

- Tight coupling between data, compute, and metadata (Unity Catalog)
- Distributed pipelines (batch, streaming, ML)
- Decentralized workspace ownership and rapid platform growth

This makes disaster recovery not just an infrastructure concern, but a data platform design challenge.

Figure 1. Main Disaster Recovery Considerations

## Understanding the Fundamentals: RTO, RPO, and DR Trade-offs

Before defining a disaster recovery strategy, it is essential to understand the core concepts that drive design decisions. Recovery Time Objective (RTO) defines how quickly a system must be restored after a disruption, while Recovery Point Objective (RPO) defines how much data loss is acceptable. These two metrics directly influence the architecture, cost, and complexity of any DR solution. As illustrated in Figure 1, there is a clear trade-off between cost and recovery performance:

- Active-active (hot) architectures minimize downtime and data loss but come at a higher cost.
- Warm standby provides a balance between cost and recovery time.
- Cold DR is cost-efficient but results in longer recovery times and higher data loss risk.

Understanding these trade-offs is critical to aligning DR strategy with business expectations.

## Designing for Resilience: A Phased Disaster Recovery Approach

Disaster recovery has evolved beyond a one-time setup into a structured, lifecycle-driven capability. Leading organizations design resilience intentionally, implement it systematically, and continuously validate it to ensure ongoing effectiveness. The framework outlined below provides a practical and strategic approach to operationalizing disaster recovery in Azure Databricks environments, bridging the gap between architectural intent and true operational readiness.

Figure 2. Different Phases of Azure Databricks Disaster Recovery

### Phase 1: Discovery & Assessment

A resilient disaster recovery strategy starts with clarity, yet in many Azure Databricks environments that clarity is often missing. As platforms evolve, clusters multiply, jobs are duplicated, and data assets grow, making it increasingly difficult to answer a simple question: what do we actually have, and how critical is it? The Discovery phase addresses this by establishing a single, authoritative view of the platform. By consolidating all assets, dependencies, and usage patterns into a structured baseline, organizations can move from fragmented visibility to informed decision-making. This approach aligns closely with the concepts outlined in "From Chaos to Clarity: Your Databricks Workspace on a Single Pane of Glass", where establishing a comprehensive inventory becomes the foundation for governance, optimization, and ultimately resilience. This foundation enables teams to identify what matters most, define appropriate RTO and RPO targets, and understand the dependencies that will ultimately shape their disaster recovery strategy. A minimal inventory sketch follows below.

Outcome: A clear, data-driven baseline of the environment, enabling confident workload prioritization and effective disaster recovery design.
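As a starting point for such an inventory, the sketch below uses the Databricks SDK for Python to consolidate jobs and clusters from one workspace into a simple asset list. It is illustrative rather than a complete discovery tool; authentication is assumed to be configured already, and the same pattern can be extended to pipelines, notebooks, and Unity Catalog objects.

```python
# Minimal sketch of the Phase 1 inventory step using the Databricks SDK
# for Python (pip install databricks-sdk). Authentication is assumed to
# be configured (e.g., environment variables or a config profile).
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

assets = []
for job in w.jobs.list():
    assets.append({"type": "job",
                   "name": job.settings.name if job.settings else None,
                   "id": job.job_id})
for cluster in w.clusters.list():
    assets.append({"type": "cluster",
                   "name": cluster.cluster_name,
                   "id": cluster.cluster_id})

# A consolidated list like this becomes the baseline for tiering
# workloads and assigning RTO/RPO targets.
for asset in assets:
    print(asset)
```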
### Phase 2: Strategy & Design

Once visibility is established, the next step is making deliberate design choices that balance resilience, cost, and complexity. At this stage, organizations define how their platform should behave under failure. This typically starts with selecting a multi-site deployment pattern, in which two primary approaches are commonly adopted:

- Active–Active, where both regions are fully operational and serve live workloads
- Active–Passive (Warm Standby), where a secondary region is pre-provisioned and activated only during failover

Active–active architectures provide near-zero downtime and minimal data loss but come with increased cost and architectural complexity. Active–passive patterns offer a more cost-efficient alternative, with slightly higher recovery times depending on how failover is orchestrated.

Beyond selecting the deployment pattern, a key architectural decision is how data is replicated across the Medallion architecture (Bronze, Silver, Gold). Our approach introduces a set of practical scenarios that allow organizations to tailor resilience based on both workload criticality and recovery requirements. A common starting point is aligning the DR strategy to workload tiers, such as:

- Tier 1 (Mission-critical): Active–Active with full replication
- Tier 2 (Business-critical): Active–Passive with partial replication

Building on this, organizations can further refine their approach by defining how data is replicated across the Medallion layers:

- Full replication (Bronze, Silver, Gold): fastest recovery at highest cost
- Bronze-only replication: lower cost, with re-computation required during recovery
- Gold-only replication: optimized for consumption-focused use cases

This combination of workload tiering and Medallion replication strategies enables a flexible, fit-for-purpose approach to disaster recovery that balances performance, cost, and operational complexity. Below we demonstrate, as an example, two representative patterns: (a) an Active–Active architecture, where data pipelines operate in continuous trigger mode across regions, enabling near real-time synchronization; and (b) an Active–Passive architecture, where all layers are replicated using a clone-based approach and activated on demand during failover. These scenarios highlight how organizations can balance recovery performance and cost by adjusting both the deployment model and the depth of data replication.

Figure 3. Active–Active Scenario: Continuous Trigger Mode

Within the active–passive model, multiple variations can be applied, ranging from full replication of all medallion layers to more selective approaches (such as replicating only the Bronze or Gold layers). This flexibility allows organizations to further balance recovery performance, cost, and operational complexity.

Figure 4. Active–Passive Scenario: Clone All Layers Mode

### Phase 3: Disaster Recovery Implementation & Enablement

With the strategy defined, the focus shifts to translating design into a repeatable and operational solution. At this stage, resilience is no longer conceptual; it is embedded into the platform through automation, data replication, and standardized deployment patterns.

From Strategy to Architecture

At a high level, the DR architecture spans both the primary and secondary Azure regions, ensuring that all critical components can be either replicated or recreated:

- Control plane synchronization: Users, groups, and workspace assets are replicated using SCIM, Terraform, and CI/CD pipelines.
- Workspace and metadata portability: Jobs, notebooks, and configurations are defined as code and deployed consistently across regions.
- Data layer replication: Managed data, external data, and streaming checkpoints are synchronized using deep clone operations (a minimal sketch of the clone step follows below).
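To make the clone step concrete, here is a minimal PySpark sketch of replicating one medallion layer with Delta deep clones, run from a notebook or job where `spark` is available. Catalog and schema names are illustrative, and the sketch assumes the source tables are reachable from the secondary region (e.g., via a shared metastore or Delta Sharing).

```python
# Minimal sketch: clone one medallion layer into the DR catalog.
SOURCE_CATALOG = "prod_primary"   # illustrative names
TARGET_CATALOG = "prod_dr"
LAYER = "gold"                    # e.g., a Gold-only replication scenario

tables = [
    row.tableName
    for row in spark.sql(f"SHOW TABLES IN {SOURCE_CATALOG}.{LAYER}").collect()
]

for table in tables:
    # DEEP CLONE copies data files and metadata; re-running the statement
    # synchronizes the clone incrementally rather than copying everything
    spark.sql(
        f"CREATE OR REPLACE TABLE {TARGET_CATALOG}.{LAYER}.{table} "
        f"DEEP CLONE {SOURCE_CATALOG}.{LAYER}.{table}"
    )
```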
This layered approach ensures that the platform can be reconstructed end-to-end, not just partially recovered.

Unity Catalog-Driven Replication

A critical aspect of the implementation is the replication of Unity Catalog metadata and associated data assets. This includes:

- Synchronizing catalogs, schemas, tables, views, functions, and volumes
- Using Delta Sharing to expose datasets across regions
- Leveraging deep clone and storage replication to ensure data availability
- Recreating external and managed locations in the target region

By combining metadata synchronization with data replication, the target environment becomes a fully functional mirror of the source.

Figure 5. Unity Catalog Focused DR Mechanisms

Operationalizing with a DR Pipeline

To make this repeatable, the architecture is supported by a DR pipeline that orchestrates the process end-to-end:

1. Synchronize schemas and Unity Catalog structures
2. Perform deep clone of Delta tables
3. Recreate views and dependent objects
4. Provision volumes and copy associated data
5. Ensure consistency across storage layers (e.g., ADLS via AzCopy)

This pipeline can operate either continuously or on demand, depending on the selected DR pattern.

Figure 6. Azure Databricks DR Replication Workflow

Outcome: A fully implemented disaster recovery solution where data, metadata, and platform components are consistently synchronized, enabling rapid and reliable activation of workloads in a secondary region.

### Phase 4: DR Drill: Validation, Operations & Continuous Improvement

A disaster recovery strategy is only valuable if it works when needed. This phase focuses on validating, operating, and continuously improving the DR solution to ensure it meets business expectations.

Failover & Failback in Practice

In a real failure scenario, the transition to the secondary region must be simple, predictable, and fast. A typical failover process includes:

1. Detecting primary region unavailability
2. Executing a final synchronization (if possible)
3. Redirecting connections to the DR workspace
4. Resuming operations without requiring code changes

Equally important is failback, once the primary region is restored:

1. Re-synchronizing data from DR to primary
2. Switching pipelines and configurations back
3. Gradually restoring normal operations

Because infrastructure and metadata are standardized, this process becomes operational rather than reactive.

Operating DR as a Continuous Capability

Beyond failover, DR must be actively managed as part of daily operations:

- Monitoring & Alerting: Track job failures, performance bottlenecks, and system health
- Governance & Change Management: Maintain consistency between environments using IaC and version-controlled pipelines
- Continuous Optimization: Adjust replication strategies, scaling, and performance as workloads evolve

This ensures the DR solution remains aligned with both technical and business changes over time.

Ensuring Performance, Integrity, and Security

A production-ready DR solution must also guarantee:

- Performance & Scalability: Optimize compute, autoscaling, and data transfer to handle recovery scenarios efficiently
- Data Integrity & Consistency: Validate schema synchronization, monitor replication jobs, and ensure parity between regions
- Security & Compliance: Enforce consistent access controls, secure credentials, and enable audit logging across environments

Outcome: A validated and continuously evolving DR capability, where recovery processes are tested, monitored, and improved over time, providing confidence to both technical teams and business stakeholders.
## Key Takeaways and Closing Thoughts

Resilience in modern data platforms is no longer defined by how quickly systems can recover, but by how effectively they are designed to withstand disruption in the first place. Azure Databricks, as a core engine for data, analytics, and AI, requires a disaster recovery approach that extends beyond infrastructure, one that treats data, metadata, pipelines, and governance as a unified system. By combining a structured discovery phase, a strategy aligned to workload criticality, and automated, repeatable implementation patterns, organizations can move from reactive recovery to resilience by design. This not only reduces risk, but also ensures that critical data workloads remain available, trusted, and performant when it matters most.

The approach outlined in this post provides a practical and flexible way to enable cross-region resilience today, while also complementing the managed disaster recovery capabilities expected to be introduced by Databricks. As we anticipate the availability of these native features, this approach offers a production-ready foundation that can extend and integrate with future platform capabilities. In a world where disruption is inevitable, the objective is no longer simply to recover, but to maintain continuity of data, decisions, and business operations with confidence.

Special thanks to Vasilis Zisiadis and Dimitris Kotanis, who contributed their expertise to create this material and bring it to life. Thank you Antony Bitar, Collin Brian, and Jason Pereira for your support in reviewing the content.

# Approaches to Integrating Azure Databricks with Microsoft Fabric: The Better Together Story!
Azure Databricks and Microsoft Fabric can be combined to create a unified and scalable analytics ecosystem. This document outlines eight distinct integration approaches, each accompanied by step-by-step implementation guidance and key design considerations. These methods are not prescriptive; your cloud architecture team can choose the integration strategy that best aligns with your organization's governance model, workload requirements, and platform preferences. Whether you prioritize centralized orchestration, direct data access, or seamless reporting, the flexibility of these options allows you to tailor the solution to your specific needs.

# Tableau to Power BI Migration: Semantic Layer-First Approach for Cloud Architects
Authors: Mahjabin Ahmed, Yassine El Ouardi, Lavanya Sreedhar, Peter Lo, Aryan Anmol, Shreya Harvu, and Rafia Aqil

In this guide, we provide practical guidance for migrating from Tableau to Power BI, with a focus on technical best practices and architecture. By unifying business intelligence on the Microsoft Fabric platform, enterprises gain closer integration with Microsoft 365 (Teams, Copilot, Excel). For cloud solution architects and BI developers, a successful migration is not just about rebuilding dashboards in a new tool. It requires thoughtful architectural planning and a shift to a more model-centric approach to BI.

## Why Semantic Layer-First Architecture Matters

### The Traditional Migration Challenge

Most Tableau to Power BI migrations follow a dashboard-centric approach: teams attempt to replicate existing Tableau workbooks, calculated fields, and LOD (Level of Detail) expressions directly into Power BI reports. While this may seem efficient initially, it creates significant downstream challenges:

- Duplicated logic: Each report embeds its own calculations and business rules, leading to conflicting KPIs across the organization
- Maintenance overhead: Changes to business logic require updating dozens or hundreds of individual reports
- Governance gaps: Without centralized definitions, semantic drift occurs; different teams calculate "Revenue" or "Active Customer" differently
- Scalability issues: As data volumes grow, report-level transformations become performance bottlenecks

### The Semantic Layer-First Alternative

Microsoft's recommended approach centers on semantic models (formerly called datasets): centralized, governed data models that separate business logic from visualization. In this architecture, the payoff is substantial: when data evolves or business rules change, you update the semantic model once, and all dependent reports automatically reflect the changes, with no manual redesign required.

## Understanding Migration Complexity: Simple to Very Complex Dashboards

Not all Tableau dashboards are created equal. The migration strategy should align with dashboard complexity, and the semantic layer approach becomes increasingly valuable as complexity grows.

## Follow a Step-by-Step Migration Strategy

Migrating from Tableau to Power BI is not a one-click effort; it requires a mix of automated and manual refactoring, plus a sound change management plan. Below are key strategies and best practices for a successful migration:

1. Audit your Tableau estate: Start by taking inventory of all existing Tableau workbooks, data sources, and dashboards. Determine what needs to be migrated (focus on high-value, widely used reports first) and identify any redundant or obsolete content that can be retired rather than converted.
2. Conduct a proof-of-concept (PoC): Before migrating everything, pick a representative complex dashboard (or a subset of your data) and perform a pilot migration. This will help you validate that Power BI can connect to your data (e.g. setting up the Power BI gateways for on-premises sources), test performance (Import vs DirectQuery modes), and experiment with replicating key visuals or calculations. Use the PoC to uncover any surprises early; for example, test that any Level of Detail expressions or table calculations in Tableau can be re-created in DAX. The lessons learned here should inform your overall project plan.
- Use a phased migration approach: Plan to run Tableau and Power BI in parallel for some period, rather than switching everything at once. Migrate in waves (for example, by business unit or subject area) and incorporate user feedback as you go. This phased approach reduces risk and allows your team to improve the process with each iteration. It also gives end users time to adjust gradually.
- Migrate high-impact dashboards first: Prioritize the migration of key reports and dashboards that are critical to the business or have the most usage. Delivering these early wins will not only surface any technical challenges to solve but will also help demonstrate the value of Power BI's capabilities to stakeholders. Early success builds buy-in and momentum for the rest of the migration.
- Reimagine (don't just replicate) the experience: It's rarely possible, or desirable, to exactly re-create every Tableau visualization pixel-for-pixel in Power BI. Embrace the opportunity to focus on business questions and improve the user experience with Power BI's features. For example, rather than replicating a complex Tableau workaround, you might implement a cleaner solution in Power BI using native features (like bookmarks, drilldowns, or simpler navigation between pages). Engage business users and subject matter experts during this redesign to ensure the new reports meet their needs.
- Enable dataset reusability: One major benefit of the Power BI approach is the ability to create shared datasets and dataflows. As you migrate, look for opportunities to create central semantic models (datasets) that can serve multiple reports. For instance, if several Tableau workbooks all use similar data about sales, you can create one central Sales dataset in Power BI. Report creators across the organization can then build different Power BI reports on that single dataset without duplicating data or logic. This reduces maintenance and promotes a "build once, reuse often" strategy.
- Provide training and support: Expect a learning curve for teams moving to Power BI, especially those who are very fluent in Tableau. Plan for user upskilling and training programs. Establish a support community or office hours where new users can ask questions and get help. If possible, identify Power BI champions or recruit a Power BI Center of Excellence (COE) team who can guide others. During the transition, ensure there are subject matter experts (SMEs) available to address questions and validate that the new reports are correct.
- Manage change and expectations: It's important to communicate why the organization is moving to Power BI (e.g. benefits like deeper integration, lower TCO, better governance) to get buy-in from end users. Some users may be resistant to change, especially if they've invested a lot of time in mastering Tableau. Prepare to handle varying responses; emphasize the personal benefits (like improved performance, new capabilities, or career growth with popular skills) to encourage adoption. Also, involve influential business users early and gather their feedback, so they feel ownership in the new solution.
- Establish governance from Day 1: Don't wait until after migration to think about governance. Use this chance to set up Power BI governance aligned to best practices. Decide on important aspects such as workspace naming conventions, who can create or publish content, how you'll monitor usage and costs, and how to manage data access and security (for example, designing a strategy for RLS/OLS/CLS, and deciding when to use per-user datasets vs. organizational semantic models). Good governance will ensure your shiny new Power BI environment doesn't sprawl into chaos over time.
- Allow time for adjustment and iteration: Finally, be patient and iterative. Depending on the scale of your organization and the number of Tableau assets, a full migration can take months or even a year or more. Plan realistic transition periods where both systems might coexist. Continuously refine your approach with each wave of migration. Power BI's frequent update cadence (monthly releases) means new features may emerge even during your project; stay updated, as new capabilities could simplify your migration (for example, the introduction of field parameters or Copilot might let you modernize certain Tableau features more easily).
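To make the LOD-to-DAX point from the PoC step concrete, here is a minimal, hedged sketch. The 'Sales' table and its column names are hypothetical illustrations, not from this guide; treat the pattern as one common translation, not a universal recipe, and validate results against the original Tableau workbook.

```dax
-- Tableau LOD expression (hypothetical): { FIXED [Customer ID] : SUM([Sales Amount]) }
-- One common DAX translation: keep the Customer ID filter context,
-- remove every other filter on the table.
Sales per Customer (Fixed) =
CALCULATE (
    SUM ( 'Sales'[Sales Amount] ),
    ALLEXCEPT ( 'Sales', 'Sales'[Customer ID] )
)
```

INCLUDE/EXCLUDE LODs and table calculations usually require different patterns (iterators, or CALCULATE with ALL/VALUES), which is exactly why the PoC should exercise them early.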
Phase 1: Assessment and Planning

1. Audit Your Tableau Estate
- Inventory all workbooks, data sources, and calculated fields
- Identify high-traffic dashboards (prioritize for early migration)
- Categorize by complexity (Simple/Medium/Complex/Very Complex)

2. Design Your Semantic Architecture
- Map Tableau data sources to Power BI data sources (DirectQuery, Import, or Direct Lake)
- Plan star schema for fact/dimension tables
- Identify shared calculations that should live in semantic models vs. report-specific logic

3. Choose Storage Modes

| Source Type | Recommended Mode | Rationale |
|---|---|---|
| Databricks Delta Lake | Direct Lake | Real-time analytics, no refresh lag |
| Azure SQL Database | DirectQuery or Import | Based on data volume and refresh SLAs |
| On-premises SQL Server | Import (via Gateway) | Network latency considerations |
| Excel/CSV files | Import | Small reference data |

Phase 2: Build the Semantic Layer

1. Create Star Schema Data Models
Tableau often relies on flat, denormalized datasets. Power BI performs best with star schemas:
- Fact tables: Transactional data (sales, orders, events) with foreign keys to dimensions
- Dimension tables: Descriptive attributes (customers, products, dates) with primary keys
- Relationships: One-to-many from dimension to fact, leveraging bidirectional filtering sparingly

2. Migrate Calculations to DAX Measures
Convert Tableau calculated fields to DAX measures in the semantic model:

```dax
-- Example DAX, defined as a measure:
Total Revenue =
SUMX ( 'Sales', 'Sales'[Quantity] * 'Sales'[Unit Price] )
```

2.1 Use Copilot to Accelerate DAX Development
Leverage Copilot in Power BI Desktop to generate and validate DAX:
- Describe the calculation in natural language
- Copilot suggests DAX syntax
- Review, test, and refine

2.2 Document your Semantic Model
Invest in creating an AI-ready foundation for your semantic model. AI systems need to understand unique business contexts in order to prioritize correct information and provide consistent, reliable responses to your end users.
- Name Tables and Columns Clearly: Avoid ambiguity in your semantic model. Use human-readable, business-friendly names. Avoid abbreviations, acronyms, or technical terms. This improves Copilot's ability to interpret user intent.
- Create Meaningful Measures: Define reusable DAX measures for key business metrics (e.g., Revenue, Profit Margin). AI features rely on these to generate insights and summaries.
- Document Semantic Model objects: Add descriptions and synonyms to your tables, columns, and measures. This enhances natural language querying and improves Copilot's contextual understanding.
- Build an AI Data Schema: Prepare your semantic model for AI by utilizing tooling features such as Prep data for AI.
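As a hedged illustration of the Copilot workflow in 2.1, the sketch below pairs a natural-language prompt with the kind of measure Copilot might suggest. The 'Date' table and its date column are assumptions (a marked date table is required for DATEADD), [Total Revenue] refers to the example measure above, and any generated DAX should be reviewed and tested before you accept it.

```dax
-- Prompt to Copilot (hypothetical): "Create a measure showing year-over-year
-- revenue growth as a percentage, based on Total Revenue."
-- A plausible suggestion to review, test, and refine:
Revenue YoY % =
VAR CurrentRevenue = [Total Revenue]
VAR PriorRevenue =
    CALCULATE ( [Total Revenue], DATEADD ( 'Date'[Date], -1, YEAR ) )
RETURN
    DIVIDE ( CurrentRevenue - PriorRevenue, PriorRevenue )
```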
Phase 3: Understanding Migration Complexity: Simple to Very Complex Dashboards

As outlined earlier, the migration strategy should align with dashboard complexity, and the semantic layer approach becomes increasingly valuable as complexity grows.

1. Dashboard Conversion Best Practices
- Think in "pages" not "sheets": Power BI reports combine multiple visuals per page; group related visuals logically
- Use slicers for interactivity: Replace Tableau filters with Power BI slicers and the filter pane
- Leverage bookmarks for navigation: Create dynamic report experiences with show/hide containers

Simple Complexity Level

| Category | Tableau Feature | Power BI Equivalent | Microsoft Fabric Enhancements | Best Practice Notes |
|---|---|---|---|---|
| Data Model | Single custom SQL | Power Query for data shaping and ETL | OneLake Shortcuts for unified data access | Use star schema for optimized performance; push logic into the semantic layer rather than visuals |
| Calculations | Basic IF/ELSE, SUM | Data Analysis Expressions (DAX) for measures and calculated columns | Copilot for Power BI to assist with DAX creation; Fabric IQ for natural language queries | Centralize calculations in semantic models for consistency and governance |

Medium Complexity Level

| Category | Tableau Feature | Power BI Equivalent | Fabric Enhancements | Best Practice Notes |
|---|---|---|---|---|
| Data Model | Multiple custom SQL (up to 3) | Connect live to databases (Azure Databricks) via DirectQuery; connect with cloud data sources (Power BI data sources) | OneLake Shortcuts for unified access without Databricks compute cost; semantic models can combine multiple sources | Optimize with star schema; prefer OneLake Shortcuts for performance; avoid heavy transformations in visuals |
| Calculations | Nested IFs, CASE | Data Analysis Expressions (DAX) for measures and calculated columns | Copilot for Power BI to assist with DAX creation; Fabric Data Agent for conversational BI; Fabric IQ for natural language queries | Centralize logic in semantic models; use Copilot for automation and validation; keep calculations reusable |
| Reporting | Tooltip formatting in bar and map visuals; Select All/Clear option for single-select dropdowns | Standard tooltips offer help tooltips, text, and background formatting; dynamic tooltip pages can be created once and reused across multiple visuals (Create report tooltip pages in Power BI - Power BI | Microsoft Learn) | The customization is much richer than the out-of-the-box tooltips | Use a Clear All Slicers button: disable single select, add the Clear All Slicers button, customize it, and use it |

Complex Complexity Level

| Category | Tableau Feature | Power BI Equivalent | Fabric Enhancements | Best Practice Notes |
|---|---|---|---|---|
| Data Model | Multiple sources; relationships created using more than one column | Composite Models in Power BI (DirectQuery + Import) for combining multiple sources, plus connections to various cloud services; Dataflows for pre-processing. Note that Power BI allows a relationship between two tables based on only one active column | OneLake Shortcuts for unified access without Azure Databricks compute cost; Microsoft Fabric Dataflows Gen2 offers multiple ways to ingest, transform, and load data efficiently | Consolidate sources into semantic models; use Direct Lake for performance; plan and design the data model to comply with the star schema supported by Power BI |
| Relationship DAX | USERELATIONSHIP | DAX for activating relationships in Power BI for a specific calculation | | |
| Calculations | LOD, window functions | Data Analysis Expressions (DAX) for measures and calculated columns; Copilot to assist with complex DAX | Fabric IQ Ontology for semantic alignment; Fabric Data Agent for conversational BI; change how visuals interact in a Power BI report | Centralize calculations in the semantic layer; use variables in DAX for readability and performance |
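Since the Complex tier calls out USERELATIONSHIP, here is a minimal hedged sketch. The inactive 'Sales'[Ship Date] relationship and the [Total Revenue] measure are hypothetical illustrations, not part of the tables above.

```dax
-- Hypothetical model: 'Sales'[Ship Date] -> 'Date'[Date] exists as an
-- inactive relationship alongside the active Order Date relationship.
-- USERELATIONSHIP activates it for this one calculation only.
Shipped Revenue =
CALCULATE (
    [Total Revenue],
    USERELATIONSHIP ( 'Sales'[Ship Date], 'Date'[Date] )
)
```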
Very Complex Complexity Level

| Category | Tableau Feature | Power BI Equivalent | Fabric Enhancements | Best Practice Notes |
|---|---|---|---|---|
| Data Model | Multi-source, Excel, SQL | Composite Models in Power BI (DirectQuery + Import) for combining multiple sources, plus connections to various cloud services; Dataflows for pre-processing | OneLake Shortcuts for unified access; built-in connector support (Connector overview); Mirroring for real-time sync | Combine multiple sources into well-structured semantic models for consistency and optimized performance |
| Calculations | Predictive logic | Data Analysis Expressions (DAX) for measures and calculated columns | Fabric AutoML, ML models, AI Insights, Python/R, notebook-based ML (Spark/Scikit-Learn), Fabric AI Functions, Fabric IQ Ontology; Fabric Data Agent for conversational BI | Centralize logic in semantic models; leverage Copilot for automation and parameter-driven workflows; prepare for Copilot |

2. Tableau Feature Equivalents

| Tableau Feature | Power BI Equivalent | Microsoft Learn Link |
|---|---|---|
| Calculated Fields | DAX Measures | DAX Documentation |
| Parameters | Field Parameters / Bookmarks | Use report readers to change visuals |
| Actions | Drillthrough / Bookmarks | Drillthrough |
| Tableau Prep | Power Query / Dataflows | Differences between Dataflow Gen1 and Dataflow Gen2 |
| Tableau Server | Power BI Service | What is Power BI? Overview of Components and Benefits |

Phase 4: Governance and Deployment

Workspace Planning (Dev / Test / Prod Separation)
A proper workspace strategy is essential for governed deployments in Fabric and Power BI. Fabric supports separate Development, Test, and Production stages using Deployment Pipelines, enabling controlled promotion of semantic models, reports, dataflows, notebooks, lakehouses, and other items. You can assign each workspace to a pipeline stage (Dev → Test → Prod) to ensure safe lifecycle management.

Sensitivity Labeling (Microsoft Purview Information Protection)
Sensitivity labels allow governed classification and protection of data across Fabric items. They can be applied directly to Fabric items (semantic models, reports, dataflows, etc.) through the item's header flyout or the item settings. Labels from Microsoft Purview Information Protection enforce data access rules and help organizations meet compliance requirements.

Endorsement & Certification (Promoted, Certified, Master Data)
Endorsement improves discoverability and trust in shared organizational content:
- Promoted: Item creators mark content as recommended for broader use.
- Certified: Administrators or authorized reviewers validate that content meets organizational quality standards.
- Master Data: Indicates authoritative single-source-of-truth items such as semantic models or lakehouses.
All Fabric items except dashboards can be promoted or certified; data-containing items can be designated as Master Data.
Monitoring & Capacity Planning
Determine the appropriate Fabric capacity size when migrating from Tableau to Power BI. The Fabric SKU Estimator can generate a SKU recommendation (an estimate) for your capacity requirements. Ensuring performance and cost efficiency requires ongoing monitoring of your Fabric capacity. Microsoft recommends evaluating workloads using Fabric Capacity Metrics and planning SKU sizes based on real usage. Fabric uses bursting and smoothing to handle spikes while enforcing capacity limits. Monitoring helps identify high compute usage, background refreshes, and interactive workloads to optimize performance.

Fabric Data Source Connections (OneLake + Manage Connections)
Microsoft Fabric is designed as an end-to-end analytics platform that integrates data from many different source systems into a unified environment powered by OneLake, Data Factory, Real-Time Analytics, Dataflows, Lakehouses, Warehouses, and Mirrored Databases.

The Strategic Advantage: Semantic Layer + Fabric IQ
The semantic layer-first approach sets the foundation for the next evolution in enterprise analytics. Fabric IQ (announced at Ignite 2025) is Microsoft's semantic intelligence platform that auto-elevates semantic models into ontologies: structured knowledge graphs that power AI agents, Copilot experiences, and cross-domain data reasoning. What this means for your migration:
- Semantic models you build today become the foundation for AI-driven analytics tomorrow
- Data Agents can reason across multiple semantic models, answering questions that span domains
- Business users transition from "report consumers" to "data explorers" via natural language interfaces

Conclusion: Build for the Future, Not Just for Today
Migrating from Tableau to Power BI is more than a technology swap; it's an opportunity to re-architect your analytics strategy for the cloud-native, AI-powered era. The semantic layer-first approach requires upfront investment in data modeling, DAX expertise, and Fabric platform adoption. But the payoff is transformative:
- Consistency: Single source of truth for all business metrics
- Scalability: Semantic models that serve hundreds of reports and thousands of users
- Agility: Changes to business logic propagate instantly across the enterprise
- Future-readiness: Foundation for Fabric IQ, Data Agents, and AI-driven insights
Start your migration with the end in mind: the goal is not just to convert dashboards, but to build a modern, governed, AI-ready analytics platform that scales with your business.

Addressing Key Migration Concerns

(1) Why a semantic-layered model approach is better than recreating Tableau dashboards
A semantic-layered modeling approach is the optimal strategy for migration and is significantly more effective than attempting to recreate Tableau dashboards exactly as they exist. Exact recreation carries Tableau's workbook-centric design forward; Power BI and Fabric instead encourage a semantic model-first architecture, where all business rules, relationships, calculations, and transformations are centralized in a governed model that serves many dashboards. This approach not only provides consistency and reuse across the enterprise but also ensures that report authors build on a single certified version of the truth.

(2) How a semantic-layered model approach reduces the constant redesign caused by changing data needs
A semantic-layered modeling approach directly addresses the concern about constant changes and frequent redesigns of dashboards when data evolves.
With a semantic layer, changes are absorbed in the model layer, so the logic is updated once and flows automatically into all dependent reports. Combined with Fabric features like OneLake shortcuts, Direct Lake mode, and centralized governance, the semantic layer drastically reduces breakage, minimizes rework, and ensures scalability as data continues to grow and shift.

Additional Resources
- Direct Lake in Microsoft Fabric
- Create Fabric Data Agents
- OneLake Shortcuts
- Write DAX queries with Copilot - DAX
- Prepare Your Data for AI - Power BI | Microsoft Learn

Issue with notifications in external network
I am having problems with seeing notifications from an external network (Microsoft's network, to be exact). I can see the notification count all right:

[screenshot: notification count]

However, upon clicking the bell, I get this error 90% of the time:

[screenshot: error message]

What is really weird is that every now and then it works just fine. It might take 10 tries on the bell icon, or 50, but those notifications will at some point be shown to me all right. I seem to be in a tiny minority of people having this issue. Does anyone have any guidance or a troubleshooting approach? Is this a setting in my home tenant, or in the Microsoft tenant? Account permissions? Does a log or telemetry exist anywhere where I could start asking the right questions about how to resolve this? If anyone has any insight, that would be greatly appreciated.

Thanks,
Maciej

From Chaos to Clarity: Your Databricks Workspace on a Single Pane of Glass
The question that never stays answered — until now

As Azure Databricks workspaces evolve, complexity creeps in unnoticed. Every Azure Databricks conversation with customers eventually lands on the same question: "What do we actually have in this workspace?" Over time, clusters multiply, jobs get cloned, warehouses are spun up for one-off demos and forgotten, and Unity Catalog keeps expanding until it's hard to reason about. In most enterprises, each business or data science team operates its own workspace, while the central platform or operations team has little to no visibility into what's being created or why. Teams often spend days, or even weeks, trying to piece together what exists, who owns it, and the business purpose behind it, only to realize they still don't have the full picture. And when the same question comes up next quarter, the cycle starts all over again.

To address this, we built a utility that helps customers answer exactly that question by providing a single pane of glass for all Databricks assets through comprehensive cataloging and usage analysis. The utility works in two phases: Discovery and Analysis. This post focuses on the first step, the Discovery phase, where we establish a clear, authoritative inventory of everything that exists in the workspace.

What the Discovery Phase delivers

Think of the Discovery phase as a workspace health assessment. Once configured against a target workspace, the utility runs in a selected mode and consolidates all discovered assets into a centralized, Delta-based repository. The result is a structured, queryable, and dashboard-ready metadata store. Behind the scenes, ten purpose-built scanners run in a tiered and parallelized architecture, enabling a fast yet comprehensive scan of the entire workspace.

| Scanner | What is Cataloged |
|---|---|
| Clusters | Interactive, job, SQL: configs, policies, pools |
| Jobs | Workflows, schedules, tasks, run history |
| Warehouses | SQL endpoints, sizes, serverless settings |
| Pipelines | Delta Live Tables and their state |
| Unity Catalog | Catalogs, schemas, tables, volumes |
| Workspace Objects | Notebooks, repos, ML experiments, serving endpoints, alerts, Genie spaces |
| Security | Identity, network, data protection settings |
| Billing | 30-180 days of DBU usage by SKU and product |
| Utilization | Real CPU, memory, runtime patterns (deep scan) |
| Spark Job Optimizer (plugin) | Skew, spill, small files, broadcast hints (deep scan) |

Design Overview

| # | Block | Role | Contents / Flow |
|---|---|---|---|
| 1 | Source | Starting point: the Databricks environments being discovered. | One or more Azure Databricks workspaces. Auth via OAuth. Outputs an authenticated WorkspaceClient to the Orchestrator. |
| 2 | Orchestrator | The brain of the utility: coordinates scanning, concurrency, retries, timing. | Tiered thread-pool executor, scan config (mode, billing window, UC depth, max workers). Dispatches scanners in controlled waves. |
| 3 | Tier 1 Scanners | Lightweight, high-concurrency scans. Run first for quick signal. | Clusters, Warehouses, Pipelines, Security. Up to 12 workers, 10-min timeout. Artifacts flow to the Centralized Repository. |
| 4 | Tier 2 Scanners | High-volume scans. Controlled concurrency to avoid API throttling. | Jobs, Workspace Objects (notebooks, repos, experiments, serving, alerts, Genie), Unity Catalog, Billing (30-180 days DBU). 1/2 workers, 30-min timeout. |
| 5 | Tier 3 Scanners | Sequential, analysis-grade scans (deep scan only). | Utilization (CPU, memory, SQL usage patterns) and Spark Job Optimizer plugin (skew, spill, small files, broadcast hints). Runs after Tiers 1 & 2. |
| 6 | Centralized Repository | The catalog of truth: where all output lands, timestamped and queryable. | Unity Catalog Delta tables (dashboard-ready) plus portable JSON and CSV exports for offline sharing or downstream tools. |
| 7 | Single Pane of Glass | The user-facing view: insight at a glance. | Pre-built Lakeview dashboard: KPI strip, inventory charts, and week-over-week trends. Refresh to see current workspace state. |
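To make the scanner idea concrete, here is a minimal, hedged sketch of what a Tier 1-style scan could look like using the Databricks SDK for Python. This is illustrative only: the field selection and structure are assumptions, not the utility's actual implementation.

```python
# Minimal sketch of two Tier 1-style scanners (hypothetical, not the utility's code).
from concurrent.futures import ThreadPoolExecutor
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # auth resolved from the environment / configured profile

def scan_clusters() -> list[dict]:
    # Catalog clusters with the fields a platform team typically asks about.
    return [
        {
            "asset_type": "cluster",
            "name": c.cluster_name,
            "creator": c.creator_user_name,
            "spark_version": c.spark_version,
            "state": str(c.state),
        }
        for c in w.clusters.list()
    ]

def scan_warehouses() -> list[dict]:
    return [
        {"asset_type": "sql_warehouse", "name": wh.name, "size": wh.cluster_size}
        for wh in w.warehouses.list()
    ]

# Run lightweight scanners in parallel, echoing the utility's Tier 1 wave.
with ThreadPoolExecutor(max_workers=12) as pool:
    futures = [pool.submit(scan_clusters), pool.submit(scan_warehouses)]
    inventory = [row for fut in futures for row in fut.result()]
```

Each scanner's rows could then be appended, with a scan timestamp, to a Delta table in Unity Catalog, which is the shape of repository that feeds the Lakeview dashboard described next.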
Why users love the view — visualization that earns its keep

This is where the Discovery phase stops being just a scan and starts becoming a decision-making tool. Because everything is consolidated into a single, Unity Catalog-backed source of truth, the Lakeview dashboard delivers a genuine single pane of glass for the entire Databricks workspace. At a glance, you get:

- KPI strip at the top: total clusters, active jobs, UC tables, SQL warehouses, DLT pipelines, workspace objects. One glance, one number each.
- Inventory charts: clusters by type, jobs by schedule, warehouses by size, tables by catalog. The shape of your workspace becomes obvious.
- The "that doesn't look right" moments: the idle SQL warehouse with zero queries, the cluster running the wrong runtime, the notebook floating outside any repo. These surface instantly, without hunting.
- Change over time: because every scan is timestamped, you can literally see your platform grow (or sprawl) week over week.

In the first customer walkthrough, the platform team identified an always-on SQL warehouse with zero queries and three jobs running on the wrong compute tier, all within the first 30 minutes. That single view paid for the project.

Sample Item Catalog

[screenshot: sample item catalog]

Closing thoughts

The Discovery phase isn't about governance for governance's sake; it's about clarity. Before teams can optimize costs, improve performance, or enforce standards, they first need a reliable answer to a basic question: what actually exists today? By giving platform and operations teams a single, authoritative view of all Databricks assets, grounded in data rather than tribal knowledge, Discovery turns guesswork into informed decisions. In the next phase, Analysis, that foundation is used to go deeper: identifying inefficiencies, risks, and opportunities to simplify and optimize the platform. But it all starts here, by finally knowing what you have.

Special thanks to Antony Bitar, Collin Brian, and Jason Pereira for their support in reviewing the content.

Sentinel Foundry - MCP Server (Preview) (Github Community Release)
I've been cooking something that a lot of people in SOC have been struggling with, especially on the engineering side of Microsoft Sentinel. Thanks to the Microsoft Security team for shaping the capabilities of Sentinel even better with Sentinel Data Lake & Modern SecOps. Today's the day I can finally share it.

Note: This is not an official Microsoft product, but it is designed to complement the Sentinel build and make it even better, with much more intelligence.

🚀 Sentinel Foundry is now in public preview with 43 tools (Sentinel Foundry - MCP Server). It's an MCP server built to act like the brain of a strong Sentinel engineer, helping make building, improving, and operating Sentinel far more practical, faster, and honestly more enjoyable.

For a lot of teams, the challenge is not understanding what Sentinel can do. The hard part is the engineering work around it:
-> Deciding what data should actually be ingested
-> Building a clean, scalable Sentinel foundation
-> Writing useful detections instead of noisy ones
-> Balancing security value with cost
-> Turning ideas into deployable engineering outputs

That is exactly why I built Sentinel Foundry: to help communities grow stronger. It helps with the real engineering tasks behind Sentinel, from architecture thinking to detection design, deployment planning, ingestion strategy, automation ideas, and many of the workflows outlined in the GitHub project.

How does it work? Here's one of the flagship prompts I ran with it:

"Give me a complete security posture report for our workspace. Score each pillar and tell me what to prioritise."

And within seconds, it produced a structured engineering blueprint that would normally take a lot longer to pull together manually. You can see example prompts and what it can do here: https://github.com/prabhukiranveesam/Sentinel-Foundry#what-can-it-do

I want building Sentinel to feel less like repetitive engineering overhead, and more like real security engineering that is fast, creative, and enjoyable. If you work with Sentinel as a SOC L2 analyst, engineer, detection engineer, consultant, or architect, I'd genuinely love for you to try it and tell me what you think.

🔗 Public Preview: https://github.com/prabhukiranveesam/Sentinel-Foundry

This is just the start of an AI era, and I'm excited to keep shaping it with more powerful features over the coming days. This is very easy to set up and will be available to all of you at no cost during this month as part of the public preview; your feedback is extremely valuable in shaping this into a powerful solution.

From Manual Backfills to Autonomous Pipelines: Building an LLM-Powered Backfill Agent on Azure
Introduction

Data backfills are a common operational requirement in modern data platforms. Missing partitions, upstream delays, or failed pipeline runs often require engineers to manually identify gaps, determine the appropriate recovery window, and trigger reprocessing. This approach does not scale well: it introduces operational overhead, increases the risk of human error, and requires deep knowledge of data dependencies and pipeline behavior.

In this post, I describe how to build a backfill agent using Azure AI Foundry, Model Context Protocol (MCP), Azure Functions, Synapse, and ADX. The goal is to automate the decision-making process while keeping execution controlled, observable, and governed. The design separates responsibilities across three layers:

- Decision layer: an LLM-based agent determines whether a backfill is required and defines the recovery scope (e.g., which dates, datasets, or layers)
- Execution layer: an MCP server hosted on Azure Functions exposes controlled operations such as triggering pipelines and querying system state
- State layer: ADX tables maintain backfill control metadata, data availability signals, and execution history

This separation keeps the system flexible while ensuring that all actions are traceable, auditable, and policy-driven. Importantly, this pattern is not limited to a single dataset or pipeline. It can be applied across all datasets and across all layers of a medallion architecture (Bronze, Silver, and Gold), with layer-specific validation rules and backfill strategies. For example, Bronze may focus on completeness of ingestion, while Silver and Gold can enforce data quality and business logic constraints before initiating recovery.

The key benefit of a backfill agent is that it shifts backfilling from a manual, reactive process to an automated, intelligent, and consistent workflow. Instead of engineers investigating incidents and triggering reruns, the agent continuously evaluates data state, identifies gaps, and initiates controlled recovery actions. This reduces operational burden, improves reliability, and ensures faster recovery from data issues while maintaining governance, observability, and strict control over execution.

Architecture Overview

The solution is designed as a controlled orchestration pattern that separates decision-making, execution, and state management. This allows backfill operations to be automated without compromising governance or observability. The architecture consists of five main components.

Logic Apps Trigger
The workflow is initiated using a Logic App. The trigger can be scheduled or invoked on demand, depending on operational requirements. It provides the input context required for the backfill evaluation, such as dataset name, processing layer, and scope constraints (for example, the maximum number of dates to process).

Azure AI Foundry Agent (Decision Layer)
The Azure AI Foundry agent acts as the decision layer. It evaluates the request and determines whether a backfill is required and, if so, what scope should be applied. The agent does not interact directly with data systems. Instead, it invokes predefined tools exposed through the MCP server. This ensures that decision logic stays flexible, while execution remains controlled.

Azure Function App - MCP Server (Execution Layer)
The Azure Function App hosts the MCP server and exposes a set of operations to the agent. These operations include querying missing partitions, triggering Synapse pipelines, retrieving execution status, and updating control tables.
All interactions with external systems (Synapse and ADX) are handled within this layer. It is responsible for input validation, authorization, and enforcing execution rules. This abstraction ensures that infrastructure actions are not performed directly by the agent.

Synapse Pipelines (Processing Layer)
Backfill execution is handled by a parameterized Synapse pipeline. The pipeline follows a consistent pattern:
- Data is first written to a staging table
- Validation is performed
- Data is promoted to the main table only if validation succeeds
This approach ensures data quality and prevents partial or invalid data from being published.

Azure Data Explorer (State and Observability Layer)
ADX is used as the central state store. It maintains control and execution tables that track expected partitions, missing data, pipeline runs, and execution outcomes. This enables:
- Detection of missing partitions
- Idempotent execution (avoiding duplicate processing)
- Full traceability of backfill operations
The agent relies on this state, accessed via the MCP server, to make decisions.

End-to-End Flow
1. The Logic App triggers the workflow and passes the request context.
2. The Foundry agent evaluates the request.
3. The agent invokes an MCP tool to retrieve missing partitions from ADX.
4. Based on the result, the agent determines whether a backfill is required.
5. If required, the agent invokes an MCP tool to trigger the Synapse pipeline.
6. The pipeline executes the backfill using a staging-and-validation pattern.
7. Execution details are written to ADX.
8. The agent returns a summary of the operation.

Analytics Layer: Azure Synapse Analytics
In the Synapse workspace, I created a generic parameterized pipeline with three steps:
1. Copy data from upstream and ingest it into an ADX staging table
2. Run data validation
3. Ingest the staging data into the main dataset table
The pipeline receives the dataset name, partitioning date, an IsBackfill flag, and the layer as parameters, and ingests the dataset into a Kusto table. Valid values for layer are Bronze, Silver, and Gold.

Kusto
In Kusto, the solution relies on the following tables:
- Dataset tables: for example, the Customers table in this demo. The same pattern can be extended to support multiple datasets.
- BackfillControl: the central configuration and decision input for the backfill process. It defines which dataset partitions require backfill and provides the metadata the agent needs to make execution decisions. Each row in this table represents a specific dataset partition (for example, a given date in a specific layer) and its current backfill state.
- BackfillExecutionLog: tracks the execution of backfill operations. It provides a complete record of when backfills were triggered, their outcome, and the associated pipeline runs. While BackfillControl defines what should be processed, BackfillExecutionLog captures what actually happened.

Code for creating the tables:

```kusto
.create table BackfillExecutionLog (
    ExecutionId: string,
    DatasetName: string,
    Layer: string,
    PartitionDate: datetime,
    PipelineName: string,
    PipelineRunId: string,
    TriggeredAt: datetime,
    TriggeredBy: string,
    ExecutionStatus: string,
    Reason: string
)

.create table BackfillControl (
    DatasetName: string,
    Layer: string,
    PartitionDate: datetime,
    BackfillRequired: bool,
    Status: string,
    DQStatus: string,
    RetryCount: int,
    MaxRetryCount: int,
    Reason: string
)
```
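The post does not show how BackfillControl gets populated; in practice a monitoring or data-quality job would write these rows. As a hedged, minimal example (the values are illustrative, and the .set-or-append pattern mirrors the one the Function App uses later for the execution log), a single missing Bronze partition could be seeded like this:

```kusto
// Hypothetical example: mark one Bronze partition of Customers as missing
// so the agent will pick it up. A real monitoring/DQ job would write these rows.
.set-or-append BackfillControl <|
print
    DatasetName = "Customers",
    Layer = "Bronze",
    PartitionDate = datetime(2024-06-01),
    BackfillRequired = true,
    Status = "Missing",
    DQStatus = "Unknown",
    RetryCount = int(0),
    MaxRetryCount = int(3),
    Reason = "Partition missing in Bronze table"
```

With a row like this in place, the agent's candidate query (shown later in function_app.py) returns the partition, because Status is "Missing", BackfillRequired is true, and RetryCount is below MaxRetryCount.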
In this demo, Logic Apps, Synapse, and Kusto are treated as existing systems; the focus is on how to expose controlled MCP tools from an Azure Function App and connect them to an Azure AI Foundry agent. Microsoft's Azure Functions MCP extension lets a Function App expose functions as MCP tools, and Foundry can connect to the deployed MCP endpoint.

Step 1: Create the local Function App project

In VS Code, run:

```powershell
mkdir backfill-kusto-mcp
cd backfill-kusto-mcp
func init . --worker-runtime python --python
```

Step 2: Implement the MCP tools

Add the dependencies to the requirements.txt file:

```text
azure-functions>=1.24.0
azure-identity
azure-kusto-data
requests
python-dotenv
```

The host.json file defines runtime-level behavior for the Azure Function App. In this implementation, it is used to configure the MCP extension, logging, and extension bundles:

```json
{
  "version": "2.0",
  "extensions": {
    "mcp": {
      "system": {
        "webhookAuthorizationLevel": "Anonymous"
      }
    }
  },
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      },
      "enableLiveMetricsFilters": true
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Experimental",
    "version": "[4.*, 5.0.0)"
  }
}
```

The local.settings.json file defines environment-specific configuration for local development. It contains application settings (environment variables) that the Function App reads at runtime. These settings are not checked into source control and are replaced by App Settings in Azure after deployment. For example:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "KUSTO_CLUSTER": "https://<ClusterName>.<Region>.kusto.windows.net",
    "KUSTO_DATABASE": "<DatabaseName>",
    "BACKFILL_CONTROL_TABLE": "BackfillControl",
    "BACKFILL_EXECUTION_LOG_TABLE": "BackfillExecutionLog",
    "SYNAPSE_WORKSPACE": "<SynapseWorkspaceName>",
    "SYNAPSE_PIPELINE": "<PipelineName>",
    "AUTH_MODE": "az_login",
    "AZURE_CLIENT_ID": "",
    "DEFAULT_DATASET_NAME": "Customers",
    "DEFAULT_LAYER": "Bronze",
    "MAX_DATES_DEFAULT": "5"
  }
}
```

For local development, AUTH_MODE is set to az_login. Before deploying to Azure Functions, change AUTH_MODE to MANAGED_IDENTITY in the Function App application settings.
The function_app.py file is the main implementation of the MCP server. It:

- Exposes the MCP tools (find_backfill_candidates, trigger_backfill, run_backfill_agent, get_backfill_execution_log)
- Reads configuration from environment variables
- Authenticates using the Azure CLI (local) or a managed identity (Azure)
- Queries BackfillControl in Kusto to identify missing partitions
- Triggers Synapse pipelines for backfill
- Writes execution results to BackfillExecutionLog
- Enforces idempotency by checking whether a partition was already triggered

Code:

```python
import os
import uuid
import json
import logging
from datetime import datetime, timezone
from urllib.parse import quote

import requests
import azure.functions as func
from azure.identity import ManagedIdentityCredential, AzureCliCredential
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)
logging.basicConfig(level=logging.INFO)

AUTH_MODE = os.getenv("AUTH_MODE", "MANAGED_IDENTITY").lower()
KUSTO_CLUSTER = os.getenv("KUSTO_CLUSTER")
KUSTO_DATABASE = os.getenv("KUSTO_DATABASE")
CONTROL_TABLE = os.getenv("BACKFILL_CONTROL_TABLE", "BackfillControl")
EXECUTION_LOG_TABLE = os.getenv("BACKFILL_EXECUTION_LOG_TABLE", "BackfillExecutionLog")
SYNAPSE_WORKSPACE = os.getenv("SYNAPSE_WORKSPACE")
SYNAPSE_PIPELINE = os.getenv("SYNAPSE_PIPELINE", "Customer Dataset")
DEFAULT_DATASET_NAME = os.getenv("DEFAULT_DATASET_NAME", "Customers")
DEFAULT_LAYER = os.getenv("DEFAULT_LAYER", "Bronze")


def utc_now() -> str:
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")


def log_event(event: str, **properties):
    logging.info(
        "MCP_BACKFILL %s",
        json.dumps({"event": event, "timestamp_utc": utc_now(), **properties}, default=str),
    )


def require_setting(name: str, value: str | None):
    if not value:
        raise ValueError(f"Missing required app setting: {name}")


def escape_kusto_string(value: str | None) -> str:
    if value is None:
        return ""
    return str(value).replace("\\", "\\\\").replace('"', '\\"')


def get_credential():
    if AUTH_MODE == "az_login":
        return AzureCliCredential()
    managed_identity_client_id = os.getenv("AZURE_CLIENT_ID")
    if managed_identity_client_id:
        log_event("using_user_assigned_managed_identity", client_id=managed_identity_client_id)
        return ManagedIdentityCredential(client_id=managed_identity_client_id)
    log_event("using_system_assigned_managed_identity")
    return ManagedIdentityCredential()


def get_kusto_client() -> KustoClient:
    require_setting("KUSTO_CLUSTER", KUSTO_CLUSTER)
    if AUTH_MODE == "az_login":
        kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(KUSTO_CLUSTER)
    else:
        managed_identity_client_id = os.getenv("AZURE_CLIENT_ID")
        if managed_identity_client_id:
            kcsb = KustoConnectionStringBuilder.with_aad_managed_service_identity_authentication(
                KUSTO_CLUSTER, client_id=managed_identity_client_id
            )
        else:
            kcsb = KustoConnectionStringBuilder.with_aad_managed_service_identity_authentication(
                KUSTO_CLUSTER
            )
    return KustoClient(kcsb)


def execute_kusto_query(query: str):
    require_setting("KUSTO_DATABASE", KUSTO_DATABASE)
    client = get_kusto_client()
    response = client.execute(KUSTO_DATABASE, query)
    return response.primary_results[0]


def execute_kusto_command(command: str):
    require_setting("KUSTO_DATABASE", KUSTO_DATABASE)
    client = get_kusto_client()
    return client.execute_mgmt(KUSTO_DATABASE, command)


def find_backfill_candidates_core(dataset_name: str, layer: str, max_dates: int) -> list[dict]:
    dataset = escape_kusto_string(dataset_name)
    layer_name = escape_kusto_string(layer)
    query = f"""
    {CONTROL_TABLE}
    | where DatasetName == "{dataset}"
    | where Layer == "{layer_name}"
    | where BackfillRequired == true
    | where RetryCount < MaxRetryCount
    | where Status in ("Missing", "Failed") or DQStatus == "Failed"
    | top {int(max_dates)} by PartitionDate asc
    | project DatasetName, Layer, PartitionDate, Status, DQStatus,
              RetryCount, MaxRetryCount, Reason
    """
    rows = execute_kusto_query(query)
    return [
        {
            "DatasetName": row["DatasetName"],
            "Layer": row["Layer"],
            "PartitionDate": str(row["PartitionDate"])[:10],
            "Status": row["Status"],
            "DQStatus": row["DQStatus"],
            "RetryCount": row["RetryCount"],
            "MaxRetryCount": row["MaxRetryCount"],
            "Reason": row["Reason"],
        }
        for row in rows
    ]


def was_backfill_already_triggered(dataset_name: str, layer: str, partition_date: str) -> bool:
    dataset = escape_kusto_string(dataset_name)
    layer_name = escape_kusto_string(layer)
    query = f"""
    {EXECUTION_LOG_TABLE}
    | where DatasetName == "{dataset}"
    | where Layer == "{layer_name}"
    | where PartitionDate == datetime({partition_date})
    | where ExecutionStatus == "Triggered"
    | summarize Count = count()
    """
    rows = list(execute_kusto_query(query))
    return bool(rows and rows[0]["Count"] > 0)


def write_execution_log(
    execution_id: str,
    dataset_name: str,
    layer: str,
    partition_date: str,
    pipeline_name: str,
    pipeline_run_id: str,
    execution_status: str,
    reason: str,
):
    command = f"""
    .set-or-append {EXECUTION_LOG_TABLE} <|
    print
        ExecutionId = "{escape_kusto_string(execution_id)}",
        DatasetName = "{escape_kusto_string(dataset_name)}",
        Layer = "{escape_kusto_string(layer)}",
        PartitionDate = datetime({partition_date}),
        PipelineName = "{escape_kusto_string(pipeline_name)}",
        PipelineRunId = "{escape_kusto_string(pipeline_run_id)}",
        TriggeredAt = datetime({utc_now()}),
        TriggeredBy = "FoundryMCPBackfillAgent",
        ExecutionStatus = "{escape_kusto_string(execution_status)}",
        Reason = "{escape_kusto_string(reason)}"
    """
    execute_kusto_command(command)


def trigger_synapse_pipeline(dataset_name: str, layer: str, partition_date: str) -> str:
    require_setting("SYNAPSE_WORKSPACE", SYNAPSE_WORKSPACE)
    require_setting("SYNAPSE_PIPELINE", SYNAPSE_PIPELINE)
    credential = get_credential()
    token = credential.get_token("https://dev.azuresynapse.net/.default").token
    encoded_pipeline_name = quote(SYNAPSE_PIPELINE, safe="")
    url = (
        f"https://{SYNAPSE_WORKSPACE}.dev.azuresynapse.net"
        f"/pipelines/{encoded_pipeline_name}/createRun"
        f"?api-version=2020-12-01"
    )
    payload = {
        "DatasetName": dataset_name,
        "Layer": layer,
        "PartitionDate": partition_date,
        "IsBackfill": True,
    }
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        json=payload,
        timeout=30,
    )
    log_event(
        "synapse_create_run_response",
        status_code=response.status_code,
        body=response.text[:2000],
    )
    response_json = {}
    try:
        response_json = response.json()
    except Exception:
        pass
    if "runId" in response_json:
        return response_json["runId"]
    raise Exception(
        f"Synapse trigger failed. "
        f"StatusCode={response.status_code}. "
        f"Body={response.text}"
    )


def trigger_backfill_core(dataset_name: str, layer: str, partition_date: str) -> dict:
    execution_id = str(uuid.uuid4())
    log_event(
        "trigger_backfill_started",
        execution_id=execution_id,
        dataset_name=dataset_name,
        layer=layer,
        partition_date=partition_date,
    )
    try:
        if was_backfill_already_triggered(dataset_name, layer, partition_date):
            return {
                "ExecutionId": execution_id,
                "DatasetName": dataset_name,
                "Layer": layer,
                "PartitionDate": partition_date,
                "Status": "Skipped",
                "Reason": "Backfill was already triggered for this partition.",
            }
        pipeline_run_id = trigger_synapse_pipeline(
            dataset_name=dataset_name, layer=layer, partition_date=partition_date
        )
        write_execution_log(
            execution_id=execution_id,
            dataset_name=dataset_name,
            layer=layer,
            partition_date=partition_date,
            pipeline_name=SYNAPSE_PIPELINE,
            pipeline_run_id=pipeline_run_id,
            execution_status="Triggered",
            reason="Triggered by Foundry MCP backfill agent",
        )
        return {
            "ExecutionId": execution_id,
            "DatasetName": dataset_name,
            "Layer": layer,
            "PartitionDate": partition_date,
            "PipelineName": SYNAPSE_PIPELINE,
            "PipelineRunId": pipeline_run_id,
            "Status": "Triggered",
        }
    except Exception as ex:
        error_message = str(ex)
        log_event(
            "trigger_backfill_failed",
            execution_id=execution_id,
            dataset_name=dataset_name,
            layer=layer,
            partition_date=partition_date,
            error=error_message,
        )
        try:
            write_execution_log(
                execution_id=execution_id,
                dataset_name=dataset_name,
                layer=layer,
                partition_date=partition_date,
                pipeline_name=SYNAPSE_PIPELINE or "",
                pipeline_run_id="",
                execution_status="FailedToTrigger",
                reason=error_message,
            )
        except Exception as log_ex:
            log_event(
                "failed_to_write_execution_log",
                execution_id=execution_id,
                original_error=error_message,
                log_error=str(log_ex),
            )
        return {
            "ExecutionId": execution_id,
            "DatasetName": dataset_name,
            "Layer": layer,
            "PartitionDate": partition_date,
            "Status": "FailedToTrigger",
            "Error": error_message,
        }


def run_backfill_agent_core(dataset_name: str, layer: str, max_dates: int) -> list[dict]:
    log_event("run_backfill_agent_started", dataset_name=dataset_name, layer=layer, max_dates=max_dates)
    candidates = find_backfill_candidates_core(dataset_name=dataset_name, layer=layer, max_dates=max_dates)
    results = []
    for candidate in candidates:
        result = trigger_backfill_core(
            dataset_name=candidate["DatasetName"],
            layer=candidate["Layer"],
            partition_date=candidate["PartitionDate"],
        )
        results.append(result)
    log_event("run_backfill_agent_completed", dataset_name=dataset_name, layer=layer)
    return results


def get_execution_log_core(dataset_name: str, limit: int) -> list[dict]:
    dataset = escape_kusto_string(dataset_name)
    query = f"""
    {EXECUTION_LOG_TABLE}
    | where DatasetName == "{dataset}"
    | top {int(limit)} by TriggeredAt desc
    | project ExecutionId, DatasetName, Layer, PartitionDate, PipelineName,
              PipelineRunId, TriggeredAt, TriggeredBy, ExecutionStatus, Reason
    """
    rows = execute_kusto_query(query)
    return [
        {
            "ExecutionId": row["ExecutionId"],
            "DatasetName": row["DatasetName"],
            "Layer": row["Layer"],
            "PartitionDate": str(row["PartitionDate"])[:10],
            "PipelineName": row["PipelineName"],
            "PipelineRunId": row["PipelineRunId"],
            "TriggeredAt": str(row["TriggeredAt"]),
            "TriggeredBy": row["TriggeredBy"],
            "ExecutionStatus": row["ExecutionStatus"],
            "Reason": row["Reason"],
        }
        for row in rows
    ]


@app.mcp_tool()
@app.mcp_tool_property(arg_name="dataset_name", description="Dataset name, for example Customers.")
@app.mcp_tool_property(arg_name="layer", description="Layer name, for example Bronze.")
@app.mcp_tool_property(arg_name="max_dates", description="Maximum number of dates to return.")
def find_backfill_candidates(
    dataset_name: str = DEFAULT_DATASET_NAME,
    layer: str = DEFAULT_LAYER,
    max_dates: int = 5,
) -> list[dict]:
    return find_backfill_candidates_core(dataset_name, layer, max_dates)


@app.mcp_tool()
@app.mcp_tool_property(arg_name="dataset_name", description="Dataset name, for example Customers.")
@app.mcp_tool_property(arg_name="layer", description="Layer name, for example Bronze.")
@app.mcp_tool_property(arg_name="partition_date", description="Partition date in yyyy-MM-dd format.")
def trigger_backfill(dataset_name: str, layer: str, partition_date: str) -> dict:
    return trigger_backfill_core(dataset_name, layer, partition_date)


@app.mcp_tool()
@app.mcp_tool_property(arg_name="dataset_name", description="Dataset name, for example Customers.")
@app.mcp_tool_property(arg_name="layer", description="Layer name, for example Bronze.")
@app.mcp_tool_property(arg_name="max_dates", description="Maximum number of dates to trigger.")
def run_backfill_agent(
    dataset_name: str = DEFAULT_DATASET_NAME,
    layer: str = DEFAULT_LAYER,
    max_dates: int = 5,
) -> list[dict]:
    return run_backfill_agent_core(dataset_name, layer, max_dates)


@app.mcp_tool()
@app.mcp_tool_property(arg_name="dataset_name", description="Dataset name, for example Customers.")
@app.mcp_tool_property(arg_name="limit", description="Maximum number of execution log rows to return.")
def get_backfill_execution_log(
    dataset_name: str = DEFAULT_DATASET_NAME,
    limit: int = 10,
) -> list[dict]:
    return get_execution_log_core(dataset_name, limit)
```

Step 3: Run locally

1. Create and activate a virtual environment:

```powershell
python -m venv .sally-env
.\.sally-env\Scripts\activate
```

2. Install the dependencies:

```powershell
pip install -r requirements.txt
npm install -g azurite
```

3. Open two terminals. In the first terminal, run:

```powershell
azurite
```

4. In the second terminal, log in to Azure and start the Function App:

```powershell
az login
func start
```

P.S. Make sure AUTH_MODE is set to "az_login" in local.settings.json for local runs.
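Once `func start` is running, you can smoke-test a tool against the local host. This is a hypothetical variant of the deployed test shown in Step 4 below: it assumes the Functions host is listening on its default port 7071 and that host.json leaves the MCP webhook anonymous (as configured above), so no function key is needed locally.

```powershell
# Hypothetical local smoke test, mirroring the deployed test in Step 4.
# Assumes the default local port 7071 and an anonymous MCP webhook.
$body = @{
    jsonrpc = "2.0"
    id      = "1"
    method  = "tools/call"
    params  = @{
        name      = "find_backfill_candidates"
        arguments = @{ dataset_name = "Customers"; layer = "Bronze"; max_dates = 1 }
    }
} | ConvertTo-Json -Depth 10

Invoke-RestMethod `
  -Uri "http://localhost:7071/runtime/webhooks/mcp" `
  -Method POST `
  -Headers @{ Accept = "application/json, text/event-stream" } `
  -ContentType "application/json" `
  -Body $body
```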
Step 4: Create Azure resources and deploy

```powershell
# LOGIN
az login

# VARIABLES
$RG="rg-backfill-kusto-mcp-demo"
$LOCATION="westeurope"
$STORAGE="stbackfillmcp$((Get-Random -Minimum 10000 -Maximum 99999))"
$FUNCAPP="func-backfill-kusto-mcp-$((Get-Random -Minimum 10000 -Maximum 99999))"

# CREATE RESOURCE GROUP
az group create --name $RG --location $LOCATION

# CREATE STORAGE ACCOUNT
az storage account create `
  --name $STORAGE `
  --resource-group $RG `
  --location $LOCATION `
  --sku Standard_LRS

# CREATE FUNCTION APP
az functionapp create `
  --resource-group $RG `
  --consumption-plan-location $LOCATION `
  --runtime python `
  --runtime-version 3.11 `
  --functions-version 4 `
  --name $FUNCAPP `
  --storage-account $STORAGE `
  --os-type Linux

# ENABLE MANAGED IDENTITY
az functionapp identity assign `
  --resource-group $RG `
  --name $FUNCAPP

# GET PRINCIPAL ID
$FUNC_PRINCIPAL_ID = az functionapp identity show `
  --resource-group $RG `
  --name $FUNCAPP `
  --query principalId `
  --output tsv
Write-Host "Function App Principal ID: $FUNC_PRINCIPAL_ID"

# CONFIGURE APP SETTINGS
az functionapp config appsettings set `
  --resource-group $RG `
  --name $FUNCAPP `
  --settings `
    AUTH_MODE=MANAGED_IDENTITY `
    KUSTO_CLUSTER="https://<ClusterName>.<Region>.kusto.windows.net" `
    KUSTO_DATABASE="<DatabaseName>" `
    BACKFILL_CONTROL_TABLE="BackfillControl" `
    BACKFILL_EXECUTION_LOG_TABLE="BackfillExecutionLog" `
    SYNAPSE_WORKSPACE="<SynapseWorkspaceName>" `
    SYNAPSE_PIPELINE="<PipelineName>" `
    DEFAULT_DATASET_NAME="Customers" `
    DEFAULT_LAYER="Bronze" `
    MAX_DATES_DEFAULT="5"

# DEPLOY FUNCTION APP (RUN FROM PROJECT FOLDER)
func azure functionapp publish $FUNCAPP

# GET MCP ENDPOINT
$MCP_ENDPOINT="https://$FUNCAPP.azurewebsites.net/runtime/webhooks/mcp"
Write-Host "MCP Endpoint: $MCP_ENDPOINT"

# GET MCP KEY
$MCP_KEY = az functionapp keys list `
  --resource-group $RG `
  --name $FUNCAPP `
  --query "systemKeys.mcp_extension" `
  --output tsv
Write-Host "MCP Key: $MCP_KEY"

# TEST MCP TOOL
$body = @{
    jsonrpc = "2.0"
    id = "1"
    method = "tools/call"
    params = @{
        name = "find_backfill_candidates"
        arguments = @{
            dataset_name = "Customers"
            layer = "Bronze"
            max_dates = 1
        }
    }
} | ConvertTo-Json -Depth 10

Invoke-RestMethod `
  -Uri $MCP_ENDPOINT `
  -Method POST `
  -Headers @{
      Accept = "application/json, text/event-stream"
      "x-functions-key" = $MCP_KEY
  } `
  -ContentType "application/json" `
  -Body $body
```

After deployment, grant the Function App's managed identity (using the principal ID captured above) the appropriate permissions in both Kusto and Synapse; this allows the Function App to query Kusto tables and trigger Synapse pipelines without issues.

Step 5: Connect the MCP server to Azure AI Foundry

1. Go to the Azure AI Foundry portal.
2. Navigate to your Project and open your Agent.
3. Add MCP as a tool: go to Tools, click Add Tool, and select Custom → Model Context Protocol (MCP).
4. Configure the custom MCP and click Save. MCP endpoint: https://<function-app-name>.azurewebsites.net/runtime/webhooks/mcp

Step 6: Define agent instructions

```text
You are a Backfill Reliability Agent.
You MUST use the backfill_agent MCP tool.
Do NOT ask the user for candidate dates.
When asked to run backfill:
1. Find the dataset name.
2. Call find_backfill_candidates with dataset_name, layer, max_dates.
3. Then call run_backfill_agent with dataset_name, layer, max_dates.
Return the PipelineRunId.
```
Note: These instructions are intentionally generic; modify them based on your business scenario.

Step 7: Test prompt

[screenshot: test prompt and agent response]

Now, in Synapse Monitor, search for PipelineRunId df1b1920-09dd-415b-bbe9-d810d8505f58:

[screenshot: pipeline run in Synapse Monitor]

Future Enhancements
The backfill agent automates recovery by detecting missing or failed data and triggering controlled reprocessing via MCP. It can scale across all datasets and medallion layers (Bronze, Silver, Gold) with layer-specific rules. The design can evolve into a multi-agent workflow: for example, if a backfill fails multiple times, a notification agent can automatically send emails or create incidents for upstream teams. Overall, this shifts backfilling from a manual, reactive task to an automated, governed, and intelligent data operations process.

Links
- Tutorial: Host an MCP server on Azure Functions | Microsoft Learn
- Quickstart: Set up Microsoft Foundry resources - Microsoft Foundry | Microsoft Learn
- Quickstart: Connect Azure Data Explorer to an Azure Synapse Analytics workspace - Azure Synapse Analytics | Microsoft Learn

Would love to hear your feedback: Sally Dabbah | LinkedIn