Secure Medallion Architecture Pattern on Azure Databricks (Part II)
Disclaimer: The views in this article are my own and do not represent Microsoft or Databricks. This article is part of a series focused on deploying a secure Medallion Architecture. The series follows a top-down approach, beginning with a high-level architectural perspective and gradually drilling down into implementation details using repeatable code. In this part we will discuss the implementation of the pattern using GitHub Copilot. If you missed it, please read the first part of this blog series first. It can be found at: Secure Medallion Architecture Pattern on Azure Databricks (Part I).

I waited a while before publishing this article, partly due to other priorities, but also because I wanted to experiment with deploying infrastructure and data pipelines using agents. At that point, I was looking to leverage agents with a spec-driven approach, and through using GitHub Copilot, I learned what skills are and how I can use them to achieve my goal. In this blog I'll share what I learned using GitHub Copilot for spec-driven development. I'll use the content from my previous article, Secure Medallion Architecture Pattern on Azure Databricks (Part I), as a technical specification to extract implementation details and generate two outputs:

Terraform code for infrastructure, platform configuration, and deployment
Databricks Declarative Automation Bundles for jobs, pipelines, and other deployment-ready workload resources

I've tried not to overfit the prompts within the skills I've developed, so they remain portable to other technical articles, not just the one mentioned in this blog.

Separate the platform from the workload

When I started the design, I decided to modularise the automation scripts by separating the platform from the actual data platform workloads. I assigned networking, storage, identities, secret scopes, and workspace configuration to Terraform, while Databricks notebook runs, job clusters, pipelines, and environment-specific deployments were developed within Databricks Declarative Automation Bundles (formerly known as Databricks Asset Bundles). That may sound obvious, but it's exactly where generated code often goes wrong. Without explicit instructions, AI tools tend to blur these boundaries and produce one oversized block of configuration. That's why my Copilot skill needs to enforce a clear contract:

Infer the architecture from the article
Identify what is explicit and what is assumed
Emit Terraform only for infrastructure concerns
Emit bundle files only for workload concerns
Leave placeholders for anything the article does not specify

That last point is critical. A blog post or low-level technical specification is not a source of truth for account IDs, hostnames, catalog names, secret values, or subnet IDs. Good automation should never fabricate those values. Instead, I decided to produce a starter implementation with TODO markers wherever environment-specific values are required. Skills are a great way to get more consistent, repeatable output across runs, so I decided to use them for this project. I could have used one of the tools listed in the table below, but I chose to go my own way and develop a Spec-Driven Development (SDD) framework, which I hope will keep improving over time.

| Tool | Creator | Type | Link | Description |
| --- | --- | --- | --- | --- |
| GitHub Spec Kit | GitHub | Open source | github/spec-kit | Turns feature ideas into specs, plans, and task lists before any code is written. Works with multiple AI coding agents. Specification first, code as generated output. |
| BMAD Method | BMad Code LLC | Open source | bmad-code-org/BMAD-METHOD | An AI-driven agile framework with specialised agents covering the full lifecycle from ideation to deployment. Scale-adaptive — adjusts planning depth from a bug fix to an enterprise system. |
| OpenSpec | Fission AI | Open source | Fission-AI/OpenSpec | Lightweight spec layer that sits above your existing AI tools. Each change gets a proposal, specs, design, and task list. No rigid phase gates, no IDE lock-in. |
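Whichever route you take, the placeholder rule above is what keeps generated output trustworthy. As a sketch of the kind of starter output I mean (illustrative, not the repo's actual generated code), a required value the article can't supply becomes an explicit variable with a TODO marker, never a fabricated default:

```hcl
# Sketch: environment-specific values surface as required inputs with TODO
# markers, instead of invented defaults. Names here are illustrative.
variable "databricks_account_id" {
  description = "TODO: your Databricks account ID (not stated in the article)"
  type        = string
  # no default on purpose: the operator must supply this per environment
}

variable "subnet_id" {
  description = "TODO: resource ID of the workspace subnet (environment-specific)"
  type        = string
}
```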
What are skills, and why are they a good fit?

Skills are essentially reusable prompt modules that aim to force LLMs to produce repeatable answers. Within a skill, I define the behavior and then attach supporting resources or scripts so Copilot can perform the task consistently. That means a skill can do more than just "write some code." A skill can define a repeatable workflow like this:

Fetch the blog URL
Extract headings, paragraphs, and code snippets
Normalize the article into a lightweight implementation spec
Decide what belongs in Terraform
Decide what belongs in the Databricks bundle
Generate files in a predictable project structure
Produce a TODO.md file for unresolved values

This approach turns Copilot from a generic assistant into a specialized code-conversion tool. However, there are some constraints I had to be mindful of when developing skills:

Context window limits. The model has limited space to read instructions, process input, and generate output. Long prompts can cause files to be cut off or steps to be skipped.
Non-determinism. Output may vary between runs, even with strict instructions. I always lint, validate, and review the diff before committing.
Boundary leakage. Models may invent plausible but incorrect values. The TODO.md pattern must be enforced as a rule, not a suggestion.
Model and tool drift. Copilot's model and tool surface change over time. I use example inputs and outputs as repeatable sanity checks.
Maintainability. A skill is code-as-prompt and will age with the platforms it targets. I keep skills narrowly scoped so they stay easy to update.

I'll explain the TODO.md file in more detail later in this post.

The GitHub repo

The repository can be found at MarcoScagliola/CopilotBlogToCode. Below you will find a script I have added that, when invoked, deletes all the files produced by the skills, so you can test the repo from a clean state.

python .github/skills/blog-to-databricks-iac/scripts/reset_generated.py --force

If you want to try it out, please clone the repository and experiment with your own copy. In GitHub Copilot, I usually keep the model set to Auto, and for the configured tools I keep just the built-in tools selected. Below you can find the prompt that I use to have the blog analysed.

Use the blog-to-databricks-iac skill on this article: https://techcommunity.microsoft.com/blog/analyticsonazure/secure-medallion-architecture-pattern-on-azure-databricks-part-i/4459268
Inputs:
workload: blg
environment: dev
azure_region: uksouth
layer_sp_mode: create
github_environment:

Choosing the layer_sp_mode

layer_sp_mode controls how the Bronze, Silver, and Gold layer identities are provisioned. Pick the value based on your tenant's permission model. The same value must be set in both the GitHub Copilot prompt that generates the code and the GitHub workflow input when dispatching the deploy; the two must match for the same run.

create

In create mode, Terraform registers one App Registration per layer and assigns the matching RBAC roles automatically.
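A minimal sketch of that pattern for a single layer (resource names and the role are illustrative; the repo's generated code covers all three layers):

```hcl
# Create mode: one App Registration + Service Principal per layer,
# plus the layer's RBAC role assignment. Illustrative names only.
resource "azuread_application" "bronze" {
  display_name = "sp-medallion-bronze-dev" # TODO: align with your naming convention
}

resource "azuread_service_principal" "bronze" {
  client_id = azuread_application.bronze.client_id # application_id on azuread provider v2.x
}

resource "azurerm_role_assignment" "bronze_storage" {
  scope                = azurerm_storage_account.lake.id # assumes a storage account defined elsewhere
  role_definition_name = "Storage Blob Data Contributor" # illustrative; use the role your layer needs
  principal_id         = azuread_service_principal.bronze.object_id
}
```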
Use this when the deployment principal has Application.ReadWrite.All on Microsoft Graph and a directory administrator has admin-consented to it. If the deploy fails with Authorization_RequestDenied against Microsoft Graph, the permission isn't consented in your tenant; switch to existing.

existing

In existing mode, the pipeline reuses Service Principals you supply, so no Entra ID directory permissions are required. The deployment principal still needs Contributor and User Access Administrator at subscription scope. Those are Azure RBAC, not directory permissions, and they're needed in either mode. Use this in restricted tenants, or anywhere App Registration creation is blocked by policy. You provide two values: the client ID and the Service Principal object ID. The current implementation reuses this one identity across all three layers, which weakens the per-layer isolation that the article describes (see SPEC.md for what that costs). The object ID must come from Entra ID → Enterprise Applications → your app → Object ID, not from App Registrations. Both are UUIDs and look identical. The wrong one will cause RBAC assignments to fail with PrincipalNotFound, often deep in the apply where the failure is harder to diagnose.

Below you will find the exact repository setup I used for this workflow, starting with my initial configuration and ending with the final directory structure and files.

1. Create a new GitHub repository and clone it locally

I started by creating a new repository on GitHub, then cloned it to my local machine so I could add the Copilot skill, Terraform scaffolding, and Databricks bundle files in a centralized location.

git clone https://github.com/YOUR-ORG/blog-to-databricks-iac.git
cd blog-to-databricks-iac

This approach keeps the workflow organised from the start: the repository exists on GitHub first, and the local clone becomes the working directory for all subsequent setup steps.

2. Create the GitHub skill folder structure (first iteration)

GitHub Copilot skills are file-based and centered on a SKILL.md file inside a skill folder. GitHub's current pattern places these under .github/skills/. I used the script below to create the folder hierarchy for my initial integration.

mkdir -p .github/skills/blog-to-databricks-iac/scripts
mkdir -p .github/skills/blog-to-databricks-iac/templates
mkdir -p infra/terraform
mkdir -p databricks-bundle/resources
mkdir -p databricks-bundle/src

This script generates the structure depicted below.

3. Add the main skill definition

Next, I created the SKILL.md file at .github/skills/blog-to-databricks-iac/. The orchestrator decides what happens and in what order, while each specialist decides what its own file should contain (for example, the Terraform specialist owns the Terraform, the bundle specialist owns the bundle, and so on). In practice, SKILL.md turns Copilot from a general assistant into a domain-specific generator for this repo. GitHub documents this SKILL.md-based structure as the foundation of agent skills. My first iteration of .github/skills/blog-to-databricks-iac/SKILL.md was very simple and can be found here.

4. Add a script to fetch and normalize the blog article

Next, I created a Python script that the main orchestrator SKILL.md invokes to read the blog article. This script is stored at .github/skills/blog-to-databricks-iac/scripts/ and named fetch_blog.py. Within SKILL.md, the script is invoked as shown below.
### 1. Fetch article

```bash
python .github/skills/blog-to-databricks-iac/scripts/fetch_blog.py "<url>"
```

If fetch fails, stop and return the fetch error output. Do not retry; surface the error to the user and wait for guidance.

The script validates the URL, fetches the HTML with a 30-second timeout, and uses a spoofed Mozilla User-Agent to avoid being blocked by CDNs (Content Delivery Networks). It reads through the HTML one tag at a time, flagging when it enters relevant sections like paragraphs, headings, or code blocks, and buffering text until the tag closes. Before storing anything, it cleans the text by decoding HTML entities, collapsing whitespace, and trimming edges. As it parses, the script also scans for cloud platform keywords: AWS, S3, Azure, ADLS, GCP, Google Cloud. The first match wins; if none are found, it returns unknown. This is a quick heuristic, not authoritative. Finally, it outputs clean JSON with the extracted data: title, headings, paragraphs, code blocks, and cloud hint, capped at reasonable sizes to keep the output manageable. If anything goes wrong, such as a network error, timeout, bad HTML, or empty content, the script exits cleanly with a structured error message, making it easy to integrate into larger workflows without surprises. The Python script can be found here.
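In outline, the core of that script is a small HTMLParser pass, roughly like this sketch (the real script linked above adds URL validation, size caps, the cloud-hint scan, and structured error handling):

```python
# Sketch of fetch_blog.py's core: stream tags, buffer text, clean, emit JSON.
import json
import re
import sys
import urllib.request
from html.parser import HTMLParser

class BlogParser(HTMLParser):
    CAPTURE = {"p", "h1", "h2", "h3", "pre", "code"}

    def __init__(self):
        super().__init__(convert_charrefs=True)  # decodes HTML entities for us
        self.current = None
        self.buffer = []
        self.out = {"headings": [], "paragraphs": [], "code_blocks": []}

    def handle_starttag(self, tag, attrs):
        if tag in self.CAPTURE:  # nested capture tags simply reset the buffer
            self.current, self.buffer = tag, []

    def handle_data(self, data):
        if self.current:
            self.buffer.append(data)

    def handle_endtag(self, tag):
        if tag != self.current:
            return
        # collapse whitespace and trim edges before storing
        text = re.sub(r"\s+", " ", "".join(self.buffer)).strip()
        if text:
            if tag.startswith("h"):
                self.out["headings"].append({"level": int(tag[1]), "text": text})
            elif tag == "p":
                self.out["paragraphs"].append(text)
            else:
                self.out["code_blocks"].append(text)
        self.current = None

req = urllib.request.Request(sys.argv[1], headers={"User-Agent": "Mozilla/5.0"})
html = urllib.request.urlopen(req, timeout=30).read().decode("utf-8", "replace")
parser = BlogParser()
parser.feed(html)
print(json.dumps(parser.out, indent=2))
```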
5. The output and output contract

Now I needed to think about the output I wanted GitHub Copilot to deliver through the skills. To reiterate, I needed the following:

README.md: This is the operator-facing runbook that turns the generated artifacts into a working deployment. It contains no unresolved placeholders and no embedded credentials. The header summarizes the architecture and links back to the source blog. A prerequisites section lists required Azure access, Entra permissions, GitHub Environment setup, and local CLI versions. It includes tables of always-required GitHub secrets and variables, plus conditional ones based on deployment mode. Step-by-step numbered sections walk through bootstrapping the deployment principal and populating the GitHub Environment. Workflow blocks describe each Terraform validation, infrastructure deployment, and DAB deployment step, including file paths, triggers, and outputs. A commands section lists the exact Terraform and Databricks bundle sequences to run. Finally, assumption notes point the operator to TODO.md and SPEC.md for context.

TODO.md: The operator's checklist of remaining tasks. It uses a strict five-section format (Heading, What this is, Why deferred, Source, Resolution, Done looks like) with no commands or code, only concepts and decisions. Each section captures a different layer of post-deployment work: pre-deployment tasks like RBAC roles and GitHub secrets, deployment-time inputs like region and environment, post-infrastructure setup like Key Vault secrets and external locations, post-DAB work like Unity Catalog grants and job schedules, and architectural choices the orchestrator couldn't make (network posture, schemas, partitioning). Every entry comes from something the article left unstated, plus the universal post-deploy work for any Databricks deployment. The operator works through TODO.md sequentially, resolving each item before the system is production-ready.

SPEC.md: The structured, source-faithful read of the blog article, organized by checklist. Every item is marked as a stated value, inferred from code or diagrams, or "not stated in article." It includes architecture details, Azure services configuration, Databricks setup, data model, security and identity requirements, and observations. SPEC.md is the single source of truth that Terraform and DAB generators read from; TODO.md is populated from every "not stated" entry; and README.md references it for assumptions. This ensures the deployment is built on documented decisions, not hidden assumptions.

Together, these files create a clear boundary: SPEC.md answers what the blog says, TODO.md captures what's missing or must be decided, and README.md tells you exactly how to deploy. This split is enforced by validation rules that fail if any content duplicates across the three files. To make these files as repeatable as possible, I needed two things:

Two templates, one for README.md and one for TODO.md, that the orchestrator fills in from SPEC.md at generation time.
A broader delivery contract, output-contract.md, which lists the five files the orchestrator must produce. README.md and TODO.md are two of those five, and the templates are how they get produced.

The output-contract.md file defines a strict, ordered format that the agent must follow when transforming a blog article about Databricks-on-Azure architecture into a runnable repository. The first commit was deliberately minimal, as you can see from the file available here. No leaf-skill routing, no repo-context.md, no GitHub Actions workflows, no validation rules, no entry-field templates for TODO.md. That commit's single job was to lock down the shape of the output: what gets produced and in what order. Every commit since has refined how to produce that shape without changing what gets produced. Putting the contract in the very first commit gave every later change a fixed reference point. Every leaf skill, generator script, and validation rule I've added since has fit into one of its five sections. The pipeline has changed; the deliverables haven't. The structure of the GitHub repo at commit 17ab443 can be seen in the pictorial below.

6. The README.md and TODO.md templates

After iteratively working on the orchestrator, a clear pattern emerged: the code-generation paths were fairly stable, but the documentation outputs weren't. Every run produced README.md and TODO.md from scratch in free-form Markdown. Across runs, the same content kept drifting. Section ordering changed between runs, and the explanation of GitHub Environments was rewritten with subtle wording differences. RBAC roles appeared sometimes as lists, sometimes in prose, sometimes split across sections. Universal post-deploy actions (create the secret scope, populate the vault, set up Unity Catalog grants) were re-derived every time, occasionally with steps missing. The root cause was that the orchestrator was treating durable, universal content as if it were per-run content. So I decided to add two templates: README.md.template and TODO.md.template. Templates separate universal content (RBAC, TODO sections, GitHub setup) in the template from per-workload content (catalog names, credentials) substituted from SPEC.md. This delivers consistency across runs. The README and TODO are structurally identical from one run to the next, so readers can navigate them intuitively. Universal content is correct by construction; I write it once, review carefully, and every run inherits that quality. Validation also becomes more precise, and the agent's job shrinks from open-ended writing to mechanical substitution, which is easier to validate and maintain.
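To give a feel for the shape, a TODO entry slot in the template might look like this (a sketch following the five-section format described earlier; the actual template wording in the repo may differ):

```markdown
<!-- Sketch of one TODO.md.template entry slot; names in braces are
     placeholders, per the vocabulary explained below. -->
## {todo_title}

What this is: {summary}
Why deferred: {reason_not_stated}
Source: SPEC.md item "{spec_item}" (marked "not stated in article")
Resolution: {operator_decision}
Done looks like: {acceptance_criterion}
```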
Templates introduce clear vocabulary: a {placeholder} is filled either by the orchestrator at generation time or by the deployer at run time. Finally, templates enforce traceability: every "not stated in article" entry in SPEC.md automatically becomes a TODO entry via the "from SPEC.md" slot, making this an automatically-enforced rule. I'm invoking the templates in the orchestrator as shown below. The Git commit with this code can be found at this link.

### 3.1 Generate README from template

Load the template: `.github/skills/blog-to-databricks-iac/templates/README.md.template`

### 3.2 Generate TODO from template

Load the template: `.github/skills/blog-to-databricks-iac/templates/TODO.md.template`

7. The output of the fetch_blog.py file and the interaction with the orchestrator

When the orchestrator invokes fetch_blog.py, the script produces a JSON output and passes it back to the orchestrator. The orchestrator then reads the JSON document into its working context and maps each field onto an analysis checklist. The title and meta description establish the article identity and scope. Headings with their levels reveal the structure, helping the agent locate sections about architecture, security, data flow, and naming. Paragraphs provide evidence for stated values like regions, resource types, and RBAC models. Code blocks become the source of inferred values. As an example, a Terraform snippet might reveal SKU choices or naming patterns not mentioned in the text. These inferred values get tagged "inferred from code snippet" when recorded. The cloud hint acts as a sanity check that the article actually describes an Azure architecture. For every checklist item, the agent records either an extracted value or the literal string "not stated in article". This becomes SPEC.md, the single source of truth for everything downstream.

SPEC.md drives every subsequent step. Steps 3 through 7 (the Terraform module, workflows, and Databricks bundle generators) read architectural decisions from it. Step 8 then produces TODO.md by converting every "not stated in article" entry into a TODO item the operator must resolve before deployment. What I find worth pointing out is how little the output contract has actually moved since that very first commit. The implementation underneath has changed completely. Leaf skills emerged, generator scripts came in, validation rules got added, a soft-delete state machine showed up to handle Key Vault recovery. None of those existed at the start. But what the orchestrator delivers, the list of files it puts on disk, has stayed exactly the same. We have a much larger SKILL.md today that still mirrors the initial five-item output list. The contract itself has changed by exactly one line: the addition of "Design of the architecture" to section 5.

SPEC.md: the structured, source-faithful read of the article, organised by the analysis checklist (link)
TODO.md: the operator's checklist of everything the article didn't specify, plus the universal post-deploy actions (link)
Terraform code under infra/terraform/: the platform layer with networking, storage, identities, Key Vault, workspace (link)
Databricks Asset Bundle under databricks-bundle/: the workload layer with jobs, entry points, environment configuration (link)
README.md: the operator runbook, with the architecture design diagram embedded (link)

If the JSON contains an error, the orchestrator stops immediately.
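For orientation, the evidence pack looks roughly like this (field names are my paraphrase of the behaviour described above, not the script's exact schema):

```json
{
  "title": "Secure Medallion Architecture Pattern on Azure Databricks (Part I)",
  "meta_description": "…",
  "headings": [{ "level": 2, "text": "Separate the platform from the workload" }],
  "paragraphs": ["…"],
  "code_blocks": ["resource \"azurerm_key_vault\" \"platform\" { … }"],
  "cloud_hint": "azure",
  "error": null
}
```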
Per the skill rule "If fetch fails, stop and return the fetch error output. Do not retry," the error surfaces to the user rather than propagating downstream. So the script's output is the raw evidence pack: title, structure, prose, code, cloud hint. The agent uses it to fill the architecture spec, which parameterises every generated artifact. At this point the fetch_blog.py output is sent to Step 2 of the orchestrator, as shown in the code snippet below.

### 2. Analyse article

Analyse the fetched article against the structured checklist in `.github/skills/blog-to-databricks-iac/references/blog-analysis-checklist.md`. The analysis covers the article text, diagrams, screenshots, and code snippets.

And, much later in the orchestrator, Step 8 closes the loop by turning everything that's been recorded into the two operator-facing documents:

### 8. Generate README and TODO from templates

Use the templates in `.github/skills/blog-to-databricks-iac/templates/`:
- `README.md.template` -> `README.md`
- `TODO.md.template` -> `TODO.md`

8. How this actually came together

What I've described so far is how the orchestrator works currently. The reality of building it was much more cumbersome, but also fun. I got from the first version to the current one by iterating: rerun the orchestrator, find the defect, identify the rule that would have caught it, add the rule to the skill that owns the artifact, rerun. The reason I'm calling this out now, before walking through the rest of the pipeline, is that everything from this point on is a story about a specific lesson learned that way. The leaf skills exist because a single SKILL.md got too dense. The restricted-tenant guardrails exist because the deployment failed against a tenant that couldn't read Microsoft Graph. The validation harness exists because prose rules weren't catching the regressions that mattered. The soft-delete state machine exists because the same vault name kept colliding with a previous deploy. None of these rules were present from day one. So in the next sections I'll walk through how the pipeline actually matured: how the single skill split into a graph, what the inner regenerate-fix loop felt like in practice, the day the project pivoted to support restricted tenants, the bugs that became rules, and the Key Vault soft-delete state machine that closed the project out.

9. From a single skill to a skill graph

When I started, everything lived inside a single SKILL.md. It was simpler that way, and to be honest, at that point I didn't yet know which rules would actually matter. But as I kept rerunning the orchestrator on the article, a pattern emerged. Each rerun produced something that broke in a slightly different way, and the fix always belonged to a very specific concern: Terraform authoring, bundle structure, workflow generation, or the orchestration logic itself. Stuffing the rules for all of them into one file was making the orchestrator unreadable and, worse, was silently dropping rules when the context window got tight. So I split it. The orchestrator stayed at the top, kept routing the work and validating the result, and each concern got promoted to its own leaf skill. The Databricks bundle skill itself ended up needing one more split a few days later; it had got too dense, so I broke it into two leaves:

databricks-yml-authoring (link)
Python-entrypoints (link)

The diagram below shows the shape the repo has today. The orchestrator now does almost no authoring. It owns the sequence of steps, the contract, and the validation gates, while everything else is delegated.
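In text form, the shape is roughly this (leaf and folder names are approximate where the article doesn't state them):

```text
.github/skills/
├── blog-to-databricks-iac/        # orchestrator: sequencing, output contract, validation gates
│   ├── SKILL.md
│   ├── scripts/                   # fetch_blog.py, generators, reset_generated.py
│   ├── templates/                 # README.md.template, TODO.md.template
│   └── references/                # blog-analysis-checklist.md, output contract
├── terraform-authoring/           # leaf: owns infra/terraform/* (name approximate)
├── workflow-authoring/            # leaf: owns the GitHub Actions workflows (name approximate)
├── databricks-yml-authoring/      # leaf: owns databricks.yml and bundle resources
└── Python-entrypoints/            # leaf: owns the bundle's Python entry points
REPO_CONTEXT.md                    # repo-root defaults, read before generation (see section 13)
```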
This was the single biggest readability win. I wish I'd done it earlier. The REPO_CONTEXT.md is one extra node in that diagram that I want to call out, but I'll come back to it in section 13.

10. The inner loop: rerun, fail, fix the skill

If I had to describe the middle of this project in one sentence, it would be: every commit was a regeneration. I'd run the orchestrator end-to-end against the article, and inspect the generated Terraform, the bundle, and the workflows. I'd find a defect, identify the rule that would have prevented it, add that rule to the skill that owns the artifact, then rerun, as shown in the image below. This loop is what I think people miss when they treat AI-generated infrastructure code as a one-shot. The first run is never the deliverable. The deliverable is the skill that produces good runs. The generated files are disposable and can always be reproduced. The skill is what carries the knowledge forward. I had to actively resist the temptation to fix bugs in the generated code directly. Patching infra/terraform/main.tf by hand fixes today's run but not tomorrow's, because the rule that would prevent the bug doesn't exist anywhere. So I made it a discipline: never edit the output, always edit the skill, then regenerate.

11. Restricted-tenant compatibility

The bug was simple to describe and brutal to fix: the deployment principal in the target tenant couldn't read Microsoft Graph. Any Terraform data source that resolved an Entra name to an object ID at plan time (e.g., azuread_user, azuread_group, azuread_service_principal) blew up at terraform plan. My first instinct was to think, "I'll just give the principal Graph permissions." But in a lot of real environments this is not possible. The principal that runs your IaC is governed by a security team, the team has a policy, and the policy says no Graph reads. The pivot was getting the skill to produce Terraform that never reads Graph. Object IDs are inputs, not lookups. They come in as trusted secrets, the workflow exports them as TF_VAR_*, and Terraform consumes them as variables. No data "azuread_*" block is allowed in the generated code, ever. I thought this was a simple fix. It wasn't. It cascaded into about six other things:

App Registration vs Service Principal object IDs. The workflow was being given the wrong one. Role assignments need the Enterprise Application (Service Principal) object ID, not the App Registration object ID. The two are different objects in Entra with different IDs. I encoded the distinction in the skill as *_SP_OBJECT_ID (the Service Principal) versus *_CLIENT_ID (the App Registration's application ID). Naming carries the meaning now, so the wrong value is hard to pass.
Single-principal mapping. In some tenants you only have one principal and it has to play both deployment and runtime roles. The skill grew a layer_sp_mode = existing input so the generator stops trying to create a new Service Principal and reuses the deployment one instead.
Key Vault access policies, gone. Access policies were Graph-touching, and not all tenants support them anyway. The skill switched fully to RBAC role assignments (Key Vault Secrets User, and so on).

A few cascading bugs followed, but this was the right call. It took some time to harden the Terraform skill against everything the restricted tenant was throwing back. Each iteration had the same shape: the orchestrator runs, hits a fresh provider error, I add the rule, run again, hit the next one.
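The heart of the no-Graph pattern, as a sketch (variable names follow the *_SP_OBJECT_ID convention above; the scope is illustrative):

```hcl
# Object IDs arrive as trusted inputs (exported by the workflow as TF_VAR_*);
# no data "azuread_*" lookup ever runs at plan time.
variable "bronze_sp_object_id" {
  description = "Enterprise Application (Service Principal) object ID for the Bronze layer"
  type        = string
}

resource "azurerm_role_assignment" "bronze_kv" {
  scope                = azurerm_key_vault.platform.id # assumes a vault defined elsewhere
  role_definition_name = "Key Vault Secrets User"
  principal_id         = var.bronze_sp_object_id # no Microsoft Graph read required
}
```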
The commit subjects from that run are basically a transcript of the conversation I was having with the platform.

12. The bugs that became rules

There are three bugs that I believe are worth telling the story of, because they each illustrate a slightly different lesson.

The HCL trim() arity bug. The generator emitted trim(var.something) in a validation block. HCL's trim() takes two arguments, not one. The function I actually wanted was trimspace(). This is the kind of bug that any human would catch in a code review in two seconds, and which the model produced confidently because the shape of the call looked right. I added the rule to the Terraform skill ("for whitespace trimming use trimspace, never trim") and the bug never came back. Lesson: even for trivial syntactic mistakes, the fix belongs in the skill.

The variable shadowing bug. The deploy workflow had a job-level env: block that set TF_VAR_key_vault_recover_soft_deleted to a static value. A detection step earlier in the workflow was supposed to compute the right value at runtime and write it via $GITHUB_ENV. The problem is that GitHub Actions resolves job-level environment variables before $GITHUB_ENV writes take effect, so the static value always won and the dynamic one was silently ignored. The fix was to never set the recovery flag at job level. It must be written in the detection step, on every code path, including the trivial "no recovery needed" path. Lesson: state must be explicit, not inherited. If a flag has three possible meanings, three code paths must each write it.

The hardcoded -platform suffix. The workflow had a shell-side suffix that someone (let's be honest, the model) had invented to make the resource group name "look right". When recovery logic started running and the workflow looked for the canonical resource group, it looked for -platform instead of whatever the Terraform locals.tf actually emitted. The result was that the recovery handler was happily reaching past the real resource group and into a different one. I made it a rule in the orchestrator: workflow-invented suffixes are not permitted. Naming is owned by Terraform's locals.tf.

There are seventeen more defects in the catalogue, and the pattern is the same in every case. The bug surfaces, the rule gets written, and the rule lives in the skill that owns the affected artifact. There is no implementation-learnings.md in the repo. There used to be, but I deleted it because a tracked log of past bugs, sitting next to a skill that's already supposed to encode the lessons from those bugs, is a duplication waiting to drift. I believe that if the rule is in the skill, the log is redundant. If the rule isn't in the skill, the log is evidence that I haven't finished the work. Either way, the right place for bug history is git log.

13. Splitting "the skill" from "this repo's defaults"

I wanted the orchestrator to be portable, but every run kept needing the same handful of decisions. Which Azure region by default? Which environment names? Which catalog naming convention? These weren't part of the article. They weren't part of the Terraform skill either. They were specific to this repository's opinion about how things should be deployed. If I baked them into the orchestrator, the orchestrator stopped being portable. If I left them out, every run produced unhelpful "not stated in article" entries for the same five universal decisions. The answer was a new file called REPO_CONTEXT.md, stored in the repo root.
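A sketch of what such a file can hold (the values below are illustrative defaults, not the repo's actual contents):

```markdown
<!-- REPO_CONTEXT.md: repo-owned defaults the orchestrator reads before
     generation. Illustrative values only. -->
## Deployment defaults
- azure_region: uksouth
- environments: dev, prod
- catalog_naming: {workload}_{environment}   # e.g. blg_dev

## Tenant posture
- layer_sp_mode: existing   # restricted tenant: no Microsoft Graph reads
```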
It's read by the orchestrator before generation, and it carries the defaults that are owned by the repo, not by the skill. The split looks like this in practice:

SKILL.md answers the question "how do I turn an article into a runnable repo?" It is portable.
REPO_CONTEXT.md answers the question "what does this repo default to when the article doesn't say?" It is local.

Cloning the orchestrator into another GitHub project is now a clean operation. You take the skill, you write your own REPO_CONTEXT.md, and the same generator produces output appropriate to your environment.

14. The validations

Most of the rules I'd written into the skills were prose. "Don't invent suffixes." "Object IDs are inputs, not lookups." "Every required Terraform variable must have a matching TF_VAR_* in the workflow." The model is good at following prose rules most of the time. So a few of the most regression-prone rules became executable. The most important one is scripts/validate_workflow_parity.sh. Every variable declared in infra/terraform/variables.tf must appear as a TF_VAR_* export in the deploy workflow. The script greps both files, diffs the sets, and exits non-zero if they don't match. It is run at the end of generation. If it fails, the run failed, even if everything else looks fine. This caught real bugs. The most embarrassing was a variable I'd added to variables.tf and forgot to wire through the workflow. Terraform plan would prompt interactively for it on a non-interactive runner, and the run would hang. The rule of thumb I've ended up with is: prose rules are the default, but if a rule has been violated more than twice, it gets promoted to an executable check. There's a short list of those checks now, and the parity check is the load-bearing one.

15. Key Vault soft-delete state machine

Key Vaults in Azure have soft delete on by default. When you delete a vault, it sticks around for ninety days in a "soft-deleted" state. If you try to create a vault with the same name in the same subscription during that window, the deploy fails. The right behaviour is to recover the soft-deleted vault, not create a new one. The first version of my recovery handler covered exactly one case: if the vault is soft-deleted, recover it. This worked the first time I ran it. The second time, the recovered vault came back into the previous resource group, not the new one I had just created. Terraform then tried to create a new vault in the correct resource group and failed because the name was already taken globally. The handler had no concept of "the recovered vault is in the wrong resource group." So I added that case. The third time, the previous resource group itself was gone, and the handler was looking for it to verify the move. So I added that case too. By the end, the state machine had three distinct cases and two preconditions, as shown in the diagram below. The reason I keep coming back to this state machine is that it captures something that I think is generally true about agent-generated infrastructure code. The happy path is easy and meaningless, while the value is in the failure modes. The first version that worked on a clean tenant was about ten lines of bash. The version that works on a tenant that has been deployed-into and partially-torn-down five times is six times longer, and every additional line of it corresponds to a real environmental condition that I had to learn the hard way.
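For flavour, the detection step's contract can be sketched like this (a simplification: the real handler distinguishes the three cases and two preconditions described above; the vault name is hypothetical):

```bash
#!/usr/bin/env bash
# Sketch: detect a soft-deleted vault and write the recovery flag explicitly
# on every path, never relying on a job-level env default.
set -euo pipefail

VAULT_NAME="kv-blg-dev"   # hypothetical name; Terraform's locals.tf owns naming

COUNT=$(az keyvault list-deleted --query "length([?name=='${VAULT_NAME}'])" -o tsv)
if [ "${COUNT}" != "0" ]; then
  # A soft-deleted vault with this name exists: recover instead of recreating.
  az keyvault recover --name "${VAULT_NAME}"
  echo "TF_VAR_key_vault_recover_soft_deleted=true" >> "$GITHUB_ENV"
else
  # Explicit even on the trivial path, per the variable-shadowing lesson.
  echo "TF_VAR_key_vault_recover_soft_deleted=false" >> "$GITHUB_ENV"
fi
```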
16. What I've learned so far

I'm not going to pretend the full list of principles below was clear to me on day one. Every single one of these was learned by getting it wrong first. Looking back at the history, though, they are the ones that survived contact with reality.

The contract precedes the implementation. output-contract.md was committed before any generator existed. Locking the shape of the deliverable first meant every later change had a fixed reference point.
Generators, not stencils. Workflows are produced by Python scripts that take parameters and emit YAML. When restricted-tenant logic and the soft-delete state machine arrived, they needed conditional structure that a static template can't express.
Every bug becomes a rule. Patching the generated code is a tax on tomorrow's run, while patching the skill is an investment.
Each concern has a clear owner. The orchestrator routes, the leaves author, and the repo context holds the local defaults.
Restricted-tenant compatibility is non-negotiable. No Microsoft Graph reads from generated Terraform. Object IDs are trusted inputs. Single-principal mapping is supported.
Naming is owned by Terraform. No suffixes invented in shell. The validation harness enforces this.
State must be explicit, not inherited. Every workflow run writes its own flags. No reliance on env defaults from a previous step or a previous run.
Validation is executable when a rule has been violated more than twice. Prose rules are the default. Promotion to a script is earned.
Operator docs describe concepts, not commands. Command syntax ages out, while conceptual descriptions don't. The TODO template enforces this rule.
End-to-end runs against dirty tenants are the truth. The acceptance test isn't a clean-room deploy. It's a deploy into a tenant that has soft-deleted vaults, lingering RGs, and existing role assignments. Until that works, the project isn't done.
From time to time, skills need to be reviewed and consolidated.

This summary of the journey is the one I find most useful to share when people ask whether this approach actually goes anywhere: from an empty repo to a generator that produces a deployable, restricted-tenant-compatible infrastructure-as-code repository from a blog URL, with executable validation and a recovery state machine that survives a previously-deployed environment. The first commit was an empty workspace. The last commit was the one where the same orchestrator, run against the same blog, against a tenant carrying state from five previous runs, deployed cleanly with no manual intervention. That is what I was aiming to achieve when I started! Thanks for reading.

Azure Arc | On-prem + Multi-cloud Management
In this video, we explore how Azure Arc simplifies hybrid and multi-cloud operations by providing a single, consistent control plane for managing your entire infrastructure across Linux and Windows, on-prem, in Azure, or in any cloud. Once connected, you can patch Windows and Linux together with Azure Update Manager, enforce CIS benchmarks and Azure Security Baselines through Azure Policy, and pull consistent inventory, tags, and RBAC across your whole estate. Auto-recover unbootable Windows Server 2025 machines with Quick Machine Recovery, and audit and configure WinRE using built-in Azure Policy. Run your virtual machines as Azure Virtual Desktop session hosts on Nutanix, VMware, Hyper-V, or physical Windows hardware. Satya Vel, Azure Arc Principal Group PDM Manager, shares how to make Azure your operational standard for every workload, anywhere it runs. Learn more about Azure Arc at https://aka.ms/AzureArcServer, or join the community at https://aka.ms/ArcServerForumSignup

Organize, filter, and manage inventory at scale. Centralize visibility into servers, VMs, and Kubernetes clusters across on-prem, AWS, GCP, and Azure from a single control plane. Check out Azure Arc.

Policy-as-code, everywhere your servers run. Azure Arc extends Azure Policy to on-prem, AWS, and GCP resources — pre-built CIS and security baselines included. Try it.

AVD, off-Azure. Azure Virtual Desktop for hybrid environments turns any Azure Arc-enabled Windows VM or physical server into a session host. Get started.

QUICK LINKS:
00:00 — Azure Arc in hybrid environments
00:46 — Transitioning to Azure Arc
02:35 — Unified management
03:43 — How to bring in servers and containers
04:48 — Inventory management
05:30 — Patching
06:48 — Auto-manage future updates
08:25 — One-time update
09:32 — Configuration in a hybrid environment
11:05 — Auditing Windows machines
11:34 — Microsoft Defender for Cloud
13:06 — Desktop virtualization
13:51 — Wrap up

Link References

For more information go to https://aka.ms/AzureArc

Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.

Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast

Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

Video Transcript:

- If you're managing servers and containers today, you're probably operating across on-prem and multiple clouds, and using different tools for each. Azure Arc changes that by providing a single way to manage servers, Kubernetes, and containers across Linux and Windows, on-prem, in any cloud, and at the edge. Since launching in 2019, Azure Arc has gained strong momentum, enabling consistent patching, configuration, compliance, and advanced resilience features like remote recovery, even for machines that cannot boot, and more. And to explore how Azure Arc works in real hybrid environments, I'm joined by our resident management expert, Satya Vel. Welcome.
- Hi, Jeremy. It's great to be on the show. It's been a while.

- Yeah, it has been a while. Thanks for joining us today. And why don't we jump right into this? So if I'm coming from maybe a traditional server management background using things like Ansible, VMware vSphere, maybe System Center, what does it take then to transition to Azure Arc, and why would I do it, and is it worth the effort?

- That's a fair question. Those are all proven, powerful tools. That said, it's challenging moving between multiple tools to manage what you have. What we are seeing today is more of a people and process change. Most enterprises are now hybrid by default: on-prem, multi-cloud, multiple operating systems managed by a central operations team. And what those teams want most is consistency. Azure extends its management capabilities to servers and Kubernetes clusters wherever they run using Azure Arc. That's where the value of cloud native innovation shows up, beyond basic monitoring of servers and clusters, like the health and status of each resource. With Azure Arc, you can collect richer operational and security data and query it at a massive scale. All these are now actionable insights. You can use them to improve your security posture to close vulnerabilities faster. They'll let you more easily fix compliance drift to realign resources with your policies and maintain day-to-day operations. This includes modern patching, all applied across your multi-cloud and hybrid estate. And finally, Azure Arc centralizes governance by bringing consistent tags for grouping along with unified identity and access management using RBAC for connected resources. That way everything is controlled the same way regardless of where it runs, from a single control plane, without duplication or drift. So to answer your earlier question, it is totally worth it, and Azure Arc is really the glue that brings it all together.

- Okay, so why don't we make this real for everyone watching? Can you show us the unified management experience and what that looks like with Azure Arc?

- Sure thing, and that's the best part. In fact, here I'm managing my on-prem and multi-cloud environment using Azure services enabled by Azure Arc. Notice I have everything from a Windows server to Kubernetes clusters running on AWS, different Linux distros. There's even a Windows client desktop VM and more. All right here. And I can drill into any of these items to see its specs as well as what's configured. I can take a look at whether it's compliant with my configuration policies. For example, this test resource has a few non-compliant policies that I might want to look into. And the great thing is everything is in one spot. I don't need to move between consoles to see everything. Once these resources are enrolled, everything is automated and rule-based. I can look for servers and workloads as they are provisioned or updated, and monitor them 24/7. Then based on the configuration status it finds, it can take actions and get items into a compliant state.

- Okay, so we're going to get to what the management experiences look like in a minute, but let's go back a step. So what happens if I've got infrastructure and I want to bring that into Azure Arc? What does that experience look like?

- This process is super straightforward and simple. Let me show you. You can bring servers and containers running in any cloud, on-premises, and on any hypervisor under management with Azure Arc. To onboard resources to Azure Arc, we have a few different methods.
The any environment option is the most flexible, where you can use scripts for Linux and Windows, or an installer. This is a lightweight agent that you can install on your Linux and Windows servers. You can use your preferred deployment method to run the scripts on your servers and clusters, like this one for Linux, which downloads the agent, installs it, and connects it to Azure Arc. And if you have existing tools like Ansible Automation Controller, formerly known as Ansible Tower, we have published a playbook that makes it super simple to onboard your machines. And this playbook is published in Ansible Galaxy, which is the official community hub.

- Okay, so now we've got everything in. Now moving into the next thing that people manage a lot every day: inventory. So how does Azure Arc change that?

- So I briefly showed the different locations and platforms that could run under Azure Arc. But there's more to it. All my servers and clusters are in one view. It spans on-prem as I search for Azure Local, then I'll filter for AWS as well as GCP services. And I can see Azure VMs plus my on-prem servers listed together with consistent tagging and status information. I define everything based on their location and platforms in Azure, so it's super easy to see where everything is running, and there's less chance that any infrastructure falls through the cracks.

- Beyond inventory management, something else that we do every day is patch management. So can Azure Arc handle patch management for servers and infrastructure outside of Azure?

- Absolutely. This is an area where Azure Arc can help a lot. Today, patching often means different tools for different environments: WSUS or SCCM for Windows, scripts for Linux, or separate cloud portals. And with Azure Arc, this all happens consistently from one place. You can see Azure Update Manager, which I have opened here. Each server has an update status indicating if it's got pending updates or not. Azure Update Manager continuously assesses the update compliance of your managed servers on a schedule. And you can manually trigger assessments by selecting resources and hitting check for updates. Now, you can see I have both Linux and Windows machines missing updates, and even though these are different OS types, I can update them together with just a few clicks if I want. But before I do that, notice this on-prem Windows Server 2016 machine that needs to be updated. Here, a benefit of managing your Windows and SQL Server infrastructure on Azure is that the service offers extended security updates, so you can run them longer in support without disruption to business-critical applications. Let's get back to updating these machines. The nice thing is that you only have to set the right policy and logic one time to manage updates automatically in the future. To save a little time, I'll select every machine. From here, I can schedule updates for these resources, where first I'll fill in the basics for my subscription and resource group. Then the instance details like the configuration name and the region. The maintenance scope using the guest option lets me target my resources. Then under schedule, I can select the start date as well as the time, how many hours and minutes I want the maintenance window to be, and the frequency of repeats in hours, days, weeks, or months. Then in the resources tab, if I want to add more servers, I can group everything I want in the same maintenance schedule. Likewise, you'd use this grouping for staggered rollouts.
Importantly, using dynamic scopes, I can also make sure that any new resources are targeted as they come online, based on defined filters like the resource groups they're in, the resource types, locations, operating systems, or tags. In updates, I can target the type of updates I want, for example, only critical and security updates. Finally, I can add pre and post events to run before and after the update, like redirecting an app to an informational page saying that the resource is being serviced and when it'll be back online. Of course, I can tag this as well. And then I just need to review and click create.

- And my favorite thing I just saw there was the dynamic scoping that you can apply as a set-it-and-forget-it setting, basically. So what happens though, if I've got an update that's really critical that I need to push out immediately? Can I do that?

- Not a problem. You can do that as well. For that, you'll select one or more resources and choose one-time updates so that it gets applied immediately. I just need to confirm the machines, then choose the update type or any exclusions that I want to define. I'll keep everything in scope here. Then in properties I can determine the reboot behavior I want and the maximum maintenance window time in minutes. From there, I can review and install. That will push the update to my selected servers, whether they are in the cloud or on-premises, so it's one place to get resources into update compliance. And in case you want to stagger updates over a longer period of time for large patch management jobs, you can orchestrate updates using groups.

- So the main thing here is you control the timing, like only patching during off hours and approvals, and you get to decide which updates to apply, so it's super flexible. Now, software updates are one type of configuration management, but what other types of configurations can you manage here?

- Configuration management in hybrid environments is complex. You traditionally use group policy, desired state configuration, or scripts for Windows, and then separate tools like Ansible, remote scripting, or manual commands over SSH for Linux. All this can be done centrally from Azure Arc. It extends Azure Policy to any resource. And you can use Microsoft-provided built-in policy baselines covering common security requirements. For example, the security baseline contains best practices and controls that we've defined for cloud services running on Linux and Windows. And above that, you can also see the CIS Benchmark policy, which is an internationally recognized standard spanning OS platforms used to protect against cyber attacks. I'll apply this baseline, then I'll choose the Red Hat Enterprise Linux 9 Benchmark. And searching across 300 CIS Benchmark policies, I'll look for passwords. And there are 24 policies defined. And then for Firewall, you can see four more. And these are just a few examples that are pre-configured. So once you assign these to your resources, Azure continuously monitors each machine for compliance. So you can use policy as code across your entire estate with Azure Policy controls that automatically stay current as standards like CIS evolve. We also recently added the ability to audit and enable WinRE through Azure Arc, improving recoverability even for machines that can't boot. As you can see, there are a couple of new policies for auditing machines that do not have WinRE enabled and configuring WinRE on Windows machines.
With Quick Machine Recovery on Windows Server 2025, that also means for broader issues with known fixes, we'll automatically recover machines that are not bootable.

- And that's really a great resiliency option. But what about security, compliance, and configurations and assessments? Can we do something there?

- For that, you can use Microsoft Defender for Cloud. This lets you standardize security agents and settings across machines and containers wherever they run. In the Defender portal, you can see that the same way Azure resources spanned Azure, AWS, GCP, and other environments, those same resources are visible here too. Defender continuously assesses connected resources for security posture. This includes what I showed before in the Security Baseline and CIS Benchmark. It detects threats in real time with associated security alerts and how they are trending. You get a complete breakdown by compute with your virtual machines and their associated risks. And the same is true for your connected containers running in Kubernetes. If I move over to cloud assets, here you can see all the virtual machines and Kubernetes clusters that we saw in Azure Arc. And clicking into any of these, like this Ubuntu VM, will show me all of its details. Scrolling down, I get a view of its risk factors. And below that, you'll see that this one has 82 risk-based recommendations to improve its security.

- And one of the big upsides of Microsoft Defender is that shared visibility, so everything logs to the same place. So if you think about assumed breach, it means that you won't have any blind spots as attackers are moving laterally through your environment. So that means security teams, they see what you see. So why don't we move on though to desktop virtualization. What can Azure Arc do to help me there?

- Sure, Azure Arc unlocks the ability to run Azure Virtual Desktop, or AVD for short, outside of Azure so it can run on your own infrastructure, either via Azure Local or something new we recently announced: Azure Virtual Desktop for hybrid environments. This means any existing on-prem server can be configured as an AVD session host as long as it's attached to Azure Arc. The management is in the VM layer using a management extension. It's flexible, and Nutanix AHV, VMware vSphere, Hyper-V, or physical Windows Server can work. So with Azure Arc, you have full control over the entire infrastructure's lifecycle, from inventory, configuration management, and policy enforcement, all from one place. And the good news is that if you own Software Assurance, you can access services enabled by Azure Arc as part of your license for inventory, configuration, and update management.

- That was a great tour and update of Azure Arc. So thanks for joining us today, Satya. And if you want to learn more about Azure Arc and try it out for yourself, just go to aka.ms/AzureArc for more information. Or as an admin, search for Arc, A-R-C, in the Azure portal to get started. And keep watching Microsoft Mechanics for the latest updates. We'll see you again soon.

Microsoft Finland - Software Developing Companies monthly community series
Welcome back to Microsoft's webinar series for technology companies! The Software Development monthly Community series, organized by Microsoft Finland, is a webinar series that offers software companies timely information, concrete examples, and strategic insights into how collaboration with Microsoft can accelerate growth and open up new business opportunities. The series is aimed at technology companies of all sizes and at all stages of development, from startups to global players. Each episode takes a practical look at how software companies can leverage the Microsoft ecosystem, technologies, and partner programs in their own business.

Note: the Microsoft Software Developing Companies monthly community webinars series is hosted on the Cloud Champion site, where recordings of the webinars are conveniently available a couple of hours after the live broadcast. Remember to register on the Cloud Champion platform the first time; after that, you will always have access to the content and recordings. You can register via "Register now". Fill in your details and, for the Distributor field, select Other if you don't know your Microsoft distributor.

Webinars:

29.5.2026, 09:00–09:30 Vibe Coding & GitHub Copilot

Welcome to the SDC Community Monthly webinar. Vibe Coding and GitHub Copilot are changing software development: code is produced faster, more iteratively, and in closer support of the business. In this session we cover what Vibe Coding means in practice and how GitHub Copilot works at its core. You will see how GitHub Copilot speeds up development, improves productivity, and enables faster progress from idea to implementation. We will show practical examples of how developers, and even non-technical roles, can leverage Copilot in modern development work. Register here: Microsoft Finland – Software Developing Companies monthly community series – Vibe Coding & GitHub Copilot – Finland Cloud Champion

Speakers:
Juha Karvonen, Sr Partner Tech Strategist
Mikko Marttinen, Sr Partner Development Manager, Microsoft

24.4.2026, 09:00–09:30 Marketplace and Resale Enabled Offers (REO)

Welcome to the SDC Community Monthly webinar. Growth through the Azure Marketplace channel, made more effective with the REO model. This month's topic is REO (Resale Enabled Offers): we cover what REO means in practice, when it makes sense to use, and what it changes for partners. We discuss the topic together with Partner Solution Sales Manager Veli Myllylä. You will learn how Resale Enabled Offers (REO) enable channel partners to sell your Marketplace offers on your behalf, and how this accelerates co-sell deals, scales sales, and supports Azure consumption. Register here: Microsoft Finland – Software Developing Companies monthly community series – Marketplace ja Resale Enabled Offers (REO) – Finland Cloud Champion

Speakers:
Mikko Marttinen, Sr Partner Development Manager, Microsoft
Veli Myllylä, Partner Solutions Sales Manager, Microsoft

27.3.2026, 09:00–09:30 Agent Factory with Microsoft Foundry – how to build AI agents and take them to production

AI agents are quickly becoming a core building block of enterprise software, but for many organizations the challenge is getting agents all the way to production. The real competitive advantage comes from building agents in a controlled way, integrating them into the overall architecture, and scaling them reliably.
In this webinar we walk through, with a hands-on demo, how an AI agent is built with the Agent Service in Microsoft Foundry. We show how the agent's role and instructions are defined, how data sources and tools are attached to the agent, and we look at how this fits into the Microsoft Agent Factory.
Watch the recording: Microsoft Finland – Software Developing Companies monthly community series: Agent Factory Microsoft Foundryllä – miten rakennat ja viet AI-agentteja tuotantoon – Finland Cloud Champion
Speakers: Mikko Marttinen, Sr Partner Development Manager, Microsoft; Veli Myllylä, Partner Solutions Sales Manager, Microsoft

27.2.2026 at 09:00–09:30 - M-Files' path to success together with Microsoft
What did it take to build a global partnership between M-Files and Microsoft, and what benefits has it produced? In this webinar you will hear the inside story straight from Kimmo Järvensivu, Strategic Alliances Director at M-Files: how the partnership with Microsoft was built, what was learned along the way, and how the collaboration has accelerated growth. M-Files is an intelligent information management platform that helps organizations manage documents and information through metadata, regardless of where the data lives. It makes information easier to find, improves compliance, and supports modern work in the Microsoft ecosystem. Come hear what a successful partnership really takes, and how to turn it into a strategic competitive advantage.
Watch the recording: Microsoft Finland – Software Developing Companies Monthly Community Series – M-Files polku menestykseen yhdessä Microsoftin kanssa – Finland Cloud Champion
Experts: Kimmo Järvensivu, Strategic Alliances Director, M-Files; Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft

30.1.2026 at 09:00–09:30 - Model Context Protocol (MCP): the open standard transforming AI integrations
In this webinar we cover what the Model Context Protocol (MCP) is, how it enables secure and scalable connections between AI models and external systems without custom code, what Microsoft's approach to the MCP protocol is, and how software companies can take advantage of the business opportunities the MCP standard offers. We will go through:
- What MCP is and why it matters in modern AI workflows
- How MCP reduces integration complexity and speeds up development
- Practical examples
The main part of the webinar is delivered in English.
Watch the recording: 30.1.2026 klo 09:00-09:30 – Model Context Protocol (MCP)—avoin standardi, joka mullistaa AI-integraatiot – Finland Cloud Champion
Experts: Massimo Caterino, Partner Technology Strategist, Microsoft Europe North; Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft

12.12. at 09:00–09:30 - What does the Finnish Azure region mean for software companies?
Microsoft's new datacenter region in Finland brings cloud services closer to Finnish software companies, whether you are a startup, a scaleup, or a global player. In this webinar we dig into the opportunities the new Azure region opens up in terms of data residency, performance, regulation, and customer requirements. Among other things, we discuss:
- How does local data residency support customer requirements and regulation?
- What do software companies gain from lower latency and better performance?
- How does the Azure region support co-selling and scaling in Finland?
- How can you prepare, technically and commercially, for the opening of the new region?
Speakers: Fama Doumbouya, Sales Director, Cloud Infra and Security, Microsoft; Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft
Watch the recording: Microsoft Finland – Software Developing Companies Monthly Community Series – Mitä Suomen Azure-regioona tarkoittaa ohjelmistotaloille? – Finland Cloud Champion

28.11. at 09:00–09:30 - Cloud services on your own terms: what does Microsoft's Sovereign Cloud mean for software companies?
More and more software companies face requirements around data residency, regulatory compliance, and operational control, especially in the public sector and in regulated industries. In this webinar we look at how Microsoft's new Sovereign Cloud offering answers these needs and what opportunities it opens up for Finnish software companies. Among other things, we discuss:
- How do Sovereign Public Cloud and Private Cloud differ, and what do they enable?
- How are data control, encryption, and operational sovereignty implemented in a European context?
- What does this mean for software companies building solutions for the public sector or regulated industries?
Speakers: Juha Karppinen, National Security Officer, Microsoft; Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft
Watch the recording: Microsoft Finland – Software Developing Companies Monthly Community Series – Pilvipalvelut omilla ehdoilla – mitä Microsoftin Sovereign Cloud tarkoittaa ohjelmistotaloille? – Finland Cloud Champion

31.10. at 09:00–09:30 - Growth and visibility for software companies: make the most of the ISV Success and Azure Marketplace Rewards programs
In this webinar we dig into Microsoft's key accelerator programs for software companies, programs that support growth, scalability, and international visibility. We cover how the ISV Success program provides technical and commercial support to software companies at different stages of development, and how Azure Marketplace works as an effective sales channel for reaching new customers. We also present the Marketplace Rewards benefits that support marketing, co-selling, and customer acquisition in the Microsoft ecosystem. The webinar offers:
- Concrete examples of the programs' benefits
- Practical tips for joining and making use of the programs
- Insights into how software companies can align their strategy with the opportunities Microsoft provides
Speakers: Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft
Recording: Microsoft Finland – Software Developing Companies Monthly Community Series – Kasvua ja näkyvyyttä ohjelmistotaloille – hyödynnä ISV Success ja Azure Marketplace rewards -ohjelmia – Finland Cloud Champion

3.10. at 09:00–09:30 - Autonomous solutions for software companies: Azure AI Foundry and the new possibilities of agent technologies
Agent technologies are transforming the way software companies can build intelligent, scalable solutions. In this webinar we explore how Azure AI Foundry gives developers and product owners the tools to build autonomous agents, enabling the automation of complex processes and the creation of new kinds of customer value. You will hear, among other things:
- How agent technologies are changing software development and business.
- How Azure AI Foundry supports designing, developing, and deploying agents.
- How software companies can use agents as a competitive advantage.
Speakers: Juha Karvonen, Sr Partner Tech Strategist; Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft
Watch the recording here: Microsoft Finland – Software Developing Companies Monthly Community Series – Autonomiset ratkaisut ohjelmistotaloille – Azure AI Foundry ja agenttiteknologioiden uudet mahdollisuudet – Finland Cloud Champion

5.9.2025 at 09:00–09:30 - Technology companies' and Microsoft's priorities for autumn 2025
Welcome back to Microsoft's webinar series for technology companies! Each month the series continues to explore how working with Microsoft can accelerate growth and open up new opportunities for software companies at different stages, whether the company is a start-up, a scale-up, or operating globally. In every episode we share concrete examples, insights, and strategies that support business development and innovation in technology companies. In this episode at the end of August we focus on the priorities and new opportunities for autumn 2025 that support software companies in planning, developing, and accelerating their own business. We go through Microsoft's strategic focus areas for the coming fiscal year, and above all, how software companies can put them to use in their own business. The goal is to give listeners a clear understanding of how their own product, service, or go-to-market strategy can be aligned with the ecosystem's direction, and how Microsoft can support that journey in concrete ways.
Speakers: Mikko Marttinen, Sr Partner Development Manager, Microsoft; Eetu Roponen, Sr Partner Development Manager, Microsoft
Watch the recording here: Teknologiayritysten ja Microsoftin prioriteetit syksylle 2025. – Finland Cloud Champion

Partner Case Study | CTERA
In step with Microsoft, working to unlock customer potential

Headquartered in Israel and New York, with offices worldwide, CTERA Networks Ltd strives to help organizations create a connected fabric of data and unlock its full potential. CTERA powers some of the world's largest Fortune 500 enterprises and government agencies. Its customers are often highly distributed organizations operating across numerous edge and core sites, including factories, hospitals, municipalities, law offices, and remote work environments, with file estates ranging from hundreds of terabytes to hundreds of petabytes of data. CTERA has partnered with Microsoft since the early days of Microsoft Azure and today supports more than 60 customers running CTERA on Azure compute and storage infrastructure. CTERA's Intelligent Data Platform, available for purchase through Microsoft Marketplace, delivers a software-defined global file system that intelligently caches data at distributed sites based on access frequency. It uses Azure Blob Storage as the authoritative, protected, immutable copy, and Azure Premium SSD and Azure Virtual Machines as core infrastructure components. In addition, CTERA integrates with Microsoft's security, productivity, and AI ecosystem, including Microsoft Purview, Microsoft Sentinel, Microsoft Teams, Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft Copilot, enabling governed data to power secure collaboration and AI-driven insights.

Swift enterprise deals and 68% more marketplace listing engagement

Because CTERA focuses on large enterprises and government agencies, Microsoft Marketplace significantly streamlines procurement, legal review, and vendor onboarding processes, which are often complex. Offering a solution through the marketplace enables customers with a Microsoft Azure Consumption Commitment to apply eligible purchases toward their cloud budget while benefiting from Microsoft's established trust and account relationships. The marketplace also increases product visibility and provides valuable insight into market engagement, allowing CTERA to better support customers and refine its strategy.

Continue reading here

Driving AI‑Powered Healthcare: A Data & AI Webinar and Workshop Series
Across these sessions, you'll learn how healthcare organizations are using Microsoft Fabric, advanced analytics, and AI to unify fragmented data, modernize analytics, and enable intelligent, scalable solutions, from enterprise reporting to AI‑powered use cases. Whether you're just getting started or looking to accelerate adoption, these sessions offer practical guidance, real‑world examples, and hands‑on learning to help you build a strong data foundation for AI in healthcare.

| Date | Topic | Details | Location | Registration Link |
| --- | --- | --- | --- | --- |
| May 6 | Webinar: Microsoft Fabric Foundations - A Simple Path to Modern Analytics and AI | Discover how Microsoft Fabric consolidates fragmented analytics into a single integrated data platform, making it easier to deliver trusted insights and adopt AI without added complexity. | Virtual | Register |
| May 13 | Webinar: Reduce BI Sprawl, Cut Cost and Build an AI-Ready Analytics Foundation | Learn how Power BI enables enterprise BI consolidation, consistent metrics, and secure, scalable analytics that support both operational reporting and emerging AI use cases. | Virtual | Register |
| May 19-20 | In Person Workshop: Driving AI‑Powered Healthcare: Advanced Analytics, AI, and Real‑World Impact | Attend this two‑day, in‑person event to learn how healthcare organizations use Microsoft Fabric to unify data, accelerate AI adoption, and deliver measurable clinical and operational value. Day 1 focuses on strategy, architecture, and real‑world healthcare use cases, while Day 2 offers hands‑on workshops to apply those concepts through guided labs and agent‑powered solutions. | Chicago | Register |
| May 27 | Webinar: Unified Data Foundation for AI & Analytics - Leveraging OneLake and Microsoft Fabric | This session shows how organizations can simplify fragmented data architectures by using Microsoft Fabric and OneLake as a single, governed foundation for analytics and AI. | Virtual | Register |
| June 3-4 | In Person Workshop: Driving AI‑Powered Healthcare: Advanced Analytics, AI, and Real‑World Impact | Attend this two‑day, in‑person event to learn how healthcare organizations use Microsoft Fabric to unify data, accelerate AI adoption, and deliver measurable clinical and operational value. Day 1 focuses on strategy, architecture, and real‑world healthcare use cases, while Day 2 offers hands‑on workshops to apply those concepts through guided labs and agent‑powered solutions. | New York | Register |
| June 10 | Webinar: From Data to Decisions: How AI Data Agents in Microsoft Fabric Redefine Analytics | Join us to learn how Fabric Data Agents enable users to interact with enterprise data through AI‑powered, governed agents that understand both data and business context. | Virtual | Register |
| June 17 | Webinar: Building the Intelligent Factory: A Unified Data and AI Approach to Life Science Manufacturing | Discover how life science & MedTech manufacturers use Microsoft Fabric to integrate operational, quality, and enterprise data and apply AI‑powered analytics for smarter, faster manufacturing decisions. | Virtual | Register |
| June 23-24 | In Person Workshop: Driving AI‑Powered Healthcare: Advanced Analytics, AI, and Real‑World Impact | Attend this two‑day, in‑person event to learn how healthcare organizations use Microsoft Fabric to unify data, accelerate AI adoption, and deliver measurable clinical and operational value. Day 1 focuses on strategy, architecture, and real‑world healthcare use cases, while Day 2 offers hands‑on workshops to apply those concepts through guided labs and agent‑powered solutions. | Dallas | Register |

Understanding Compliance Between Commercial, Government, DoD & Secret Offerings - May 2026 Update
Understanding compliance between Commercial, Government, DoD & Secret offerings: there remains much confusion about which service best supports which standards. If you have CMMC, DFARS, ITAR, FedRAMP, CJIS, IRS, or other regulatory requirements and are trying to understand which service is the best fit for your organization, this article is for you.

When cloud apps become a weak link: How FortiAppSec Cloud in Microsoft Marketplace bridges the gap
In this guest blog post, Srija Reddy Allam, Cloud Security/DevOps Architect at Fortinet, discusses the rise in attacks targeting web applications and APIs, and how FortiAppSec Cloud in Microsoft Marketplace provides a layer of adaptive security to address the challenge.

Giving the Copilot SDK Agent a "hardware-level helmet" using Kata microVM on AKS
A Moment That Made Me Pause

I was recently building an Agent service with the GitHub Copilot SDK. After getting it up and running, I went back through the execution logs and something jumped out at me: in a single conversation turn, the Agent had executed a shell command, read several files, and pulled down a third-party MCP server from npm via npx — all on its own. I didn't hard-code any of that. The model decided at runtime to run those commands, read those files, and install that package. That's when it hit me: a significant chunk of the code running inside this container was written on the fly — by the model, not by me.

This is fundamentally different from a traditional web service. With a regular app, every line of code is written by a human, reviewed, and tested before it reaches production. But an AI Agent? Part of its behavior is generated at runtime. You don't know in advance what it's going to execute. So the question becomes: is the container we put it in actually strong enough?

How Container Isolation Actually Works (And Where It Falls Short)

Let me use an analogy. Think of a traditional container as an apartment in a building. Each apartment has its own walls — namespaces and cgroups keep things separated. From the inside, it feels like you have your own place. But every apartment shares the same roof — the host Linux kernel. Most of the time, this is fine. But if someone finds a crack in the roof — a kernel vulnerability — they can climb up from their apartment, walk across the roof, and drop into any other apartment in the building. That's a container escape.

For a standard web service, this risk is manageable — the code inside your container is predictable. But an AI Agent is different. The code running inside the container is inherently unpredictable — it's not an external attacker you're worried about, it's the tenant itself. Docker laid this out clearly in Comparing Sandboxing Approaches for AI Agents: AI Agents are a class of workload that inherently requires stronger sandboxing. The shared-kernel model of traditional containers isn't enough. So what is enough?

Meet the microVM: A Private Roof for Every Apartment

Sticking with the building analogy — if the problem is a shared roof, the fix is obvious: give every apartment its own roof. You still live in an apartment (container). The building is still managed the same way (Kubernetes). But the ceiling above your head is now yours alone. Even if you punch through it, you only reach your own roof — not your neighbor's. That's the core idea behind a microVM.

Koyeb published a great explainer called What Is a microVM. Here's the essence:
- It's a virtual machine — with its own independent guest kernel, fully isolated from the host kernel. This is where the security comes from.
- But it's a stripped-down VM — only the bare essentials: CPU, memory, network, block storage. No USB controllers, no sound cards, no GPU passthrough.
- So it's fast and light — millisecond boot times, small memory footprint, close to the container experience.

One line summary: microVM = VM-grade isolation + near-container-grade lightness.

How Does Kubernetes Use microVMs? Enter Kata Containers

Knowing microVMs are great is one thing — but Kubernetes schedules Pods and containers, not VMs. How do you bridge these two worlds? That's exactly what Kata Containers does. Their tagline nails it: "The speed of containers, the security of VMs."
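Before we look at how Kata bridges that gap, here's a quick way to see the shared-roof problem for yourself. A minimal sketch, assuming a Linux host with Docker installed; any image with a shell would do:

```bash
# Kernel release on the host: the "roof" every traditional container shares
uname -r

# The same check inside a standard container. It prints the host's kernel,
# because namespaces and cgroups isolate processes, not the kernel itself.
docker run --rm alpine uname -r
```

Both commands print the same kernel release. With a Kata Pod, as the verification step later in this article shows, the two values differ.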
Kata acts as a translation layer between Kubernetes and microVMs:
- From Kubernetes' perspective, it's still a standard Pod — scheduled, managed, and monitored normally.
- Under the hood, that Pod is actually running inside a lightweight VM with its own kernel.
- You don't change your application code. You don't change your CI/CD pipeline. You just tell Kubernetes: "Run this Pod with Kata's RuntimeClass." Kata handles the rest.

On AKS, Microsoft has integrated Kata out of the box under the name Pod Sandboxing. The hypervisor is Microsoft Hyper-V (not QEMU), and the RuntimeClass is called kata-vm-isolation. You create a special node pool, and AKS sets everything up automatically.

Now Let's Look at a Real Example

Enough theory — let me walk you through something concrete. I built a sample called AKS_MicroVM that does one thing: run a GitHub Copilot SDK Agent service on AKS, enforced to run inside kata-vm-isolation — a microVM sandbox. Here's the architecture:

```
HTTPS request comes in
 └─ AKS Node Pool (KataVmIsolation enabled)
     └─ Pod (runtimeClassName: kata-vm-isolation)
         └─ Dedicated Hyper-V microVM
             └─ FastAPI service (Python / uvicorn)
                 └─ GitHubCopilotAgent
                     └─ Copilot CLI (Node.js)
                         └─ MCP servers / tools

Isolated guest kernel + seccomp + cgroup
Egress restricted by NetworkPolicy
```

From the outside, it's just an ordinary AKS Pod. On the inside, the app runs in its own micro virtual machine with a dedicated kernel.

Project Structure

The entire sample is just these files:

```
app/                    ← Agent service (Python)
  main.py               ← FastAPI endpoints
  agent.py              ← Copilot Agent wrapper
  tools.py              ← Example function tools
  requirements.txt
  Dockerfile            ← Python 3.12 + Node 20 + Copilot CLI
k8s/                    ← Kubernetes manifests
  namespace.yaml
  runtimeclass.yaml     ← Reference (AKS auto-creates this)
  secret.example.yaml   ← Token placeholder
  deployment.yaml       ← The key file: enforces kata-vm-isolation
  service.yaml
  networkpolicy.yaml    ← Locks down ingress/egress
infra/                  ← Infrastructure scripts
  01-create-aks.sh      ← Create the cluster
  02-build-push.sh      ← Build image, push to ACR
  03-deploy.sh          ← Deploy everything
```

Three shell scripts to set up infrastructure, six YAML files to deploy the service. That's it.

Not Just a microVM: Five Layers of Defense

I want to emphasize this: the sample doesn't just slap on a microVM and call it a day. It stacks five layers of protection:

| What you're worried about | How this layer addresses it |
| --- | --- |
| Malicious code escaping the container | kata-vm-isolation → dedicated microVM with its own kernel |
| Privilege escalation inside the container | runAsNonRoot + drop ALL caps + read-only filesystem + seccomp |
| Agent phoning home to unauthorized endpoints | NetworkPolicy allowlist — only Copilot/GitHub/MCP egress permitted |
| Token leakage | K8s Secret injection (upgradeable to Key Vault via CSI) |
| Model instructing the Agent to do something dangerous | on_permission_request defaults to deny; only allowlisted operations proceed |

The microVM is the outermost wall — hardware-grade isolation. But inside that wall, there are still guards, access controls, and surveillance cameras. You need all of them.

Six Steps to Deploy

```bash
# ① Create an AKS cluster with Kata support
bash infra/01-create-aks.sh

# ② Verify the RuntimeClass is ready
kubectl get runtimeclass kata-vm-isolation

# ③ Build the image and push to ACR (script auto-detects your ACR)
bash infra/02-build-push.sh

# ④ Add your GitHub Copilot token
# Edit k8s/secret.example.yaml → rename to secret.yaml (don't commit it!)
```
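```bash
# (Optional sanity check between steps ④ and ⑤; an addition of this write-up,
#  not one of the repo's scripts: validate the freshly renamed secret.yaml
#  parses cleanly before deploying anything. Client-side dry run, so nothing
#  is applied to the cluster.)
kubectl apply --dry-run=client -f k8s/secret.yaml
```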
```bash
# ⑤ Deploy everything
bash infra/03-deploy.sh

# ⑥ Access via API server proxy
kubectl proxy --port=8001
```

Then chat with the Agent:

```bash
curl -s -X POST \
  http://localhost:8001/api/v1/namespaces/copilot-agent/services/copilot-agent:80/proxy/chat \
  -H 'content-type: application/json' \
  -d '{"message":"Briefly introduce Kata Containers."}'
```

Want streaming output? Use the stream endpoint:

```bash
curl -N -X POST \
  http://localhost:8001/api/v1/namespaces/copilot-agent/services/copilot-agent:80/proxy/chat/stream \
  -H 'content-type: application/json' \
  -d '{"message":"List 3 Linux kernel hardening tips","stream":true}'
```

How to Verify It's Actually Running in a microVM

One command:

```bash
kubectl -n copilot-agent exec deploy/copilot-agent -- uname -r
```

If the kernel version differs from the node's kernel — your Pod is running in its own guest kernel, not sharing the host's. Proof done.

Gotchas I Hit So You Don't Have To

kubectl port-forward doesn't work with Kata Pods. This is the easiest trap to fall into. The app listener runs inside the microVM, but port-forward connects to the empty sandbox netns on the host — you'll get connection refused. Use kubectl proxy instead.

Token environment variable names. The Copilot CLI expects GH_TOKEN or GITHUB_TOKEN — not a custom name. The Deployment already injects both from the same Secret.

Read-only filesystem needs emptyDir mounts. The container runs with readOnlyRootFilesystem: true, but the Copilot CLI needs to write to /home/agent/.cache at startup. The Deployment mounts emptyDir volumes at .cache, .copilot, and /tmp — miss one and the CLI won't start.

Keep on_permission_request on deny-by-default. The Agent's tool calls go through a permission gate that defaults to deny, with an allowlist for approved operations. Don't switch this to approve-all in production — ever.

Wrapping Up: The Thread That Ties It All Together

Let me trace the logic one more time:
① Scenario: AI Agents inherently run model-generated, untrusted code inside containers
② Problem: Traditional containers share the host kernel — one escape compromises the entire node
③ Insight: We need hardware-grade isolation, stronger than namespaces alone
④ Solution: microVMs — a dedicated guest kernel for every Pod
⑤ Integration: Kata Containers brings microVM support to Kubernetes natively; AKS Pod Sandboxing makes it turnkey
⑥ Practice: The AKS_MicroVM sample — six steps to deploy, five layers of defense

In the age of AI Agents, a container isn't just a box for your application — it's a box for uncertainty. It needs a stronger shell. The microVM is that shell.

Full source code: https://github.com/kinfey/Multi-AI-Agents-Cloud-Native/tree/main/code/AKS_MicroVM

Further reading:
- What is a microVM? — Koyeb
- Comparing Sandboxing Approaches for AI Agents — Docker
- Kata Containers
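To make the hardening described above concrete, here is a minimal sketch of what such a Pod spec can look like. This is not the repo's deployment.yaml: the image reference and Secret name are placeholders, while the RuntimeClass, namespace, and writable mount paths follow what the article states.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: copilot-agent-sketch
  namespace: copilot-agent
spec:
  runtimeClassName: kata-vm-isolation        # layer 1: dedicated guest kernel
  containers:
    - name: agent
      image: myregistry.azurecr.io/copilot-agent:latest   # placeholder image
      securityContext:                        # layer 2: in-container hardening
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
      envFrom:
        - secretRef:
            name: copilot-agent-token         # placeholder Secret name (layer 4)
      volumeMounts:                           # writable paths the Copilot CLI needs
        - { name: cache,   mountPath: /home/agent/.cache }
        - { name: copilot, mountPath: /home/agent/.copilot }
        - { name: tmp,     mountPath: /tmp }
  volumes:
    - { name: cache,   emptyDir: {} }
    - { name: copilot, emptyDir: {} }
    - { name: tmp,     emptyDir: {} }
EOF
```

Layers 3 and 5 (the NetworkPolicy allowlist and the deny-by-default permission gate) live in networkpolicy.yaml and the Agent code respectively, so they don't appear here; note that the runtimeClassName line is the only Kata-specific part, and everything else is standard Pod hardening.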
Action required: Kerberos RC4 hardening may affect Azure Files Active Directory Domain Services

A Windows security hardening change beginning in April 2026 updates default Kerberos encryption behavior and may impact customers using Azure Files with Active Directory Domain Services (AD DS) authentication over SMB. If you created Azure Files shares prior to 2023, or chose RC4 encryption for your file shares, you will need to reconfigure them to use AES-256 to avoid disruption to file share access. This is in accordance with the updated security posture and recommendation from Windows CVE-2026-20833.

Background

Azure Files uses Kerberos authentication for identity-based access when integrated with on-premises Active Directory Domain Services (AD DS). AES-256 Kerberos encryption has been supported since AzFilesHybrid module v0.2.2, and it has been the default since v0.2.5. Historically, RC4 was the only supported option until AES-256 support was added. This is a Windows platform security hardening change; Azure Files service behavior is not being modified.

You may be impacted if:
- You use Kerberos-based SMB access to Azure Files with AD DS authentication, and
- Kerberos encryption settings are RC4-only or unset (null) for relevant AD objects, service accounts, or computer accounts associated with Azure Files authentication.

When will this happen:
- April 2026 – July 2026: Install the Windows security update and validate access. Domain controllers will default to issuing AES-256 tickets when msDS-SupportedEncryptionTypes is not explicitly set.
- After July 2026: Manual rollback is removed. If you have not migrated to AES-256 by then, Kerberos-based SMB access to your Azure Files shares may fail.

What you should do now:

1. Find out if you are impacted. Run the following PowerShell command on a domain-joined machine with read access to AD; it identifies storage accounts that use Azure Files with AD DS authentication but have not been upgraded to AES-256. Alternatively, follow the detection steps in aka.ms/rc4azurefiles:

```powershell
Get-ADObject `
    -LDAPFilter "(&(servicePrincipalName=*.file.core.windows.net)(!(msDS-SupportedEncryptionTypes=*)))" `
    -Properties servicePrincipalName, msDS-SupportedEncryptionTypes |
    Select-Object Name, ObjectClass, servicePrincipalName, msDS-SupportedEncryptionTypes
```

2. Update configurations to support and prefer AES-256-based Kerberos ticket encryption.
3. Validate end-to-end SMB authentication and application access to Azure Files shares.
4. Run klist purge from an elevated command prompt to clear any cached Kerberos tickets that still use RC4.
5. Remount the Azure file share.

For any questions, please reach out to the team at azurefiles@microsoft.com

Resources:
- Azure Files documentation on this change: aka.ms/rc4azurefiles
- Read the full Windows hardening guidance: How to manage Kerberos KDC usage of RC4 for service account ticket issuance changes related to CVE-2026-20833.
- Learn about RC4 usage in Windows and its risks: Detect and remediate RC4 usage in Kerberos.
- Learn more about the related vulnerability: CVE-2026-20833.
- Windows Server Blog: Beyond RC4 for Windows authentication
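To illustrate step 2 (updating configurations), here is a hedged sketch of the AES-256 migration using the AzFilesHybrid module's helper, which is the documented path because it also rotates the storage account's Kerberos key so AES keys actually get derived. Cmdlet and parameter names follow that module's documentation; the resource names are placeholders to fill in:

```powershell
# Sketch only: assumes the AzFilesHybrid module is imported and you are
# signed in with Connect-AzAccount. Placeholders: <resource-group>,
# <storage-account>.
Update-AzStorageAccountAuthForAES256 `
    -ResourceGroupName "<resource-group>" `
    -StorageAccountName "<storage-account>"

# Verify the storage account's AD object now advertises AES-256. The AD
# object typically shares the storage account's name; adjust if yours differs.
Get-ADComputer -Identity "<storage-account>" -Properties KerberosEncryptionType |
    Select-Object Name, KerberosEncryptionType
```

After the update, purge cached tickets and remount the share, as described in steps 4 and 5 above.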