<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Linux and Open Source Blog articles</title>
    <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/bg-p/LinuxandOpenSourceBlog</link>
    <description>Linux and Open Source Blog articles</description>
    <pubDate>Fri, 17 Apr 2026 12:57:47 GMT</pubDate>
    <dc:creator>LinuxandOpenSourceBlog</dc:creator>
    <dc:date>2026-04-17T12:57:47Z</dc:date>
    <item>
      <title>Dissecting LLM Container Cold-Start: Where the Time Actually Goes</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/dissecting-llm-container-cold-start-where-the-time-actually-goes/ba-p/4508831</link>
      <description>&lt;P&gt;Cold-start latency determines whether GPU clusters can scale to zero, how fast they can autoscale, and whether bursty or low-QPS workloads are economically viable. Most optimization effort targets the container pull path – faster registries, lazy-pull snapshotters, different compression formats. But “cold-start” is actually a composite of pull, runtime startup, and model initialization, and the dominant phase varies dramatically by inference engine. An optimization that cuts time-to-first-token for one engine can be irrelevant for another, even on identical infrastructure.&lt;/P&gt;
&lt;H2&gt;What we measured&lt;/H2&gt;
&lt;P&gt;We decomposed cold-start for two architecturally different engines – vLLM (Python/CUDA, heavy JIT compilation) and llama.cpp (C++, minimal runtime) – running Llama 3.1 8B on A100 GPUs. Every run starts from a completely clean slate: containerd stopped, all state wiped, kernel page caches dropped. No warm starts, no pre-pulling, no caching.&lt;/P&gt;
&lt;P&gt;We break TTFT into three phases: &lt;STRONG&gt;pull&lt;/STRONG&gt; (download + decompression + snapshot creation), &lt;STRONG&gt;startup&lt;/STRONG&gt; (container start → server ready), and &lt;STRONG&gt;first inference&lt;/STRONG&gt; (first API response, including model weight loading for engines that defer it). We tested across three snapshotters (overlayfs, EROFS, Nydus) with gzip and uncompressed images.&lt;/P&gt;
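&lt;P&gt;As a sketch of how such a decomposition can be measured, the illustrative Python below brackets each phase with wall-clock timestamps. The phase functions are stand-ins invented for this example (in a real harness they would wrap the registry pull, the container start, and the first API call); they are not code from the benchmark scripts.&lt;/P&gt;

```python
import time

def timed(fn):
    """Run fn and return (result, elapsed seconds)."""
    start = time.monotonic()
    result = fn()
    return result, time.monotonic() - start

# Hypothetical stand-ins for the real phase work: image pull,
# container start until the server reports ready, first request.
def pull():            time.sleep(0.02)
def startup():         time.sleep(0.01)
def first_inference(): time.sleep(0.005)

_, t_pull = timed(pull)
_, t_start = timed(startup)
_, t_first = timed(first_inference)
ttft = t_pull + t_start + t_first

print(f"pull={t_pull:.3f}s startup={t_start:.3f}s "
      f"first={t_first:.3f}s ttft={ttft:.3f}s")
```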
&lt;H2&gt;Setup&lt;/H2&gt;
&lt;P&gt;All experiments ran on an NVIDIA A100 80GB (Azure NC24ads_A100_v4), pulling from same-region Azure Container Registry. Images were built with &lt;A href="https://github.com/kaito-project/aikit" target="_blank" rel="noopener"&gt;AIKit&lt;/A&gt;, which produces &lt;A href="https://github.com/modelpack/model-spec" target="_blank" rel="noopener"&gt;ModelPack&lt;/A&gt;-compliant OCI artifacts with uncompressed model weight layers, Cosign signatures, SBOMs, and provenance attestations. These are supply chain properties you lose when model weights live on a shared drive.&lt;/P&gt;
&lt;H2&gt;vLLM: startup dominates, pull barely matters&lt;/H2&gt;
&lt;P&gt;vLLM loads model weights, runs torch.compile, captures CUDA graphs for multiple batch shapes, allocates KV cache, and warms up, all before serving the first request. This takes ~176 seconds regardless of how fast the image arrived.&lt;/P&gt;
&lt;P&gt;The breakdown makes the bottleneck obvious: the green bar (startup) is nearly constant across all four variants, swamping any pull-time differences.&lt;/P&gt;
&lt;P&gt;Figure 1: vLLM cold-start breakdown. Startup (green, ~176s) dominates regardless of snapshotter.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Method&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Pull&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Startup&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1st Inference&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;TTFT&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;overlayfs (gzip)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;140.8s ±5.5&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;176.0s ±3.2&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.16s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;317.2s ±2.2&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;overlayfs (uncomp.)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;129.9s ±3.3&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;180.8s ±12.2&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.16s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;310.9s ±8.9&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;EROFS (gzip)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;158.9s ±8.8&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;175.3s ±0.8&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.16s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;334.4s ±8.7&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;EROFS (uncomp.)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;166.3s ±21.1&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;177.3s ±12.8&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.16s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;343.8s ±8.2&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 20.00%" /&gt;&lt;col style="width: 20.00%" /&gt;&lt;col style="width: 20.00%" /&gt;&lt;col style="width: 20.00%" /&gt;&lt;col style="width: 20.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;EM&gt;Llama 3.1 8B Q4_K_M, ~14 GB image, n=2–3 per variant. ± = sample standard deviation. Three of twelve runs hit intermittent NVIDIA container runtime crashes (exit code 120, unrelated to snapshotters) and were excluded. We excluded Nydus because FUSE-streaming the 14 GB Python/CUDA stack caused startup to exceed 900s. Steady-state inference: ~0.134s across all snapshotters.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;44% pull, 56% startup.&lt;/STRONG&gt; Dropping gzip saves ~11 seconds of pull time on a 317-second cold start (&lt;STRONG&gt;1.02x&lt;/STRONG&gt; end-to-end). If your engine is vLLM, optimizing the pull pipeline is the wrong lever.&lt;/P&gt;
&lt;H2&gt;llama.cpp: pull dominates, compression is the bottleneck&lt;/H2&gt;
&lt;P&gt;llama.cpp has the opposite profile. Its C++ runtime starts in 2–5 seconds, so the pull becomes the majority of cold-start. This is where filesystem and compression choices actually matter.&lt;/P&gt;
&lt;P&gt;Here the picture flips. Pull (blue) is the widest bar, and the gzip-to-uncompressed difference is visible at a glance:&lt;/P&gt;
&lt;P&gt;Figure 2: llama.cpp cold-start breakdown. Pull time (blue) dominates for gzip variants.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Method&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Pull&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Startup&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1st Inference&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;TTFT&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;overlayfs (gzip)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;88.3s ±0.2&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;5.3s ±0.5&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;45.1s ±1.4&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;138.8s ±0.8&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;overlayfs (uncomp.)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;56.3s ±3.1&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;2.0s ±0.0&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;44.2s ±0.1&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;102.4s ±3.1&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;EROFS (gzip)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;92.0s ±2.3&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;6.1s ±0.5&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;44.0s ±0.2&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;142.3s ±1.9&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;EROFS (uncomp.)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;58.8s ±0.6&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;2.0s ±0.0&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;44.0s ±0.1&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;104.8s ±0.5&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 20.00%" /&gt;&lt;col style="width: 20.00%" /&gt;&lt;col style="width: 20.00%" /&gt;&lt;col style="width: 20.00%" /&gt;&lt;col style="width: 20.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;EM&gt;Llama 3.1 8B Q4_K_M, ~8 GB image, n=3 per variant, 12/12 runs succeeded. First inference includes model weight loading into GPU VRAM (~43s) plus token generation (~1.5s). Steady-state inference: ~1.5s across all snapshotters.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;64% pull, 4% startup, 33% model loading.&lt;/STRONG&gt; Dropping gzip saves 32 seconds of pull time, cutting TTFT from 139s to 102s (&lt;STRONG&gt;1.35x&lt;/STRONG&gt;), with zero infrastructure changes.&lt;/P&gt;
&lt;H2&gt;Engine comparison&lt;/H2&gt;
&lt;P&gt;Placed side by side, the two engines tell opposite stories about the same infrastructure:&lt;/P&gt;
&lt;P&gt;Figure 3: Where cold-start time goes. vLLM is compute-bound; llama.cpp is pull-bound.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;vLLM&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;llama.cpp&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Time saved by dropping gzip&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;11s (3% of TTFT)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;32s (23% of TTFT)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Startup time&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;176–181s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;2–5s&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Speedup from dropping gzip&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;1.02x&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;1.35x&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;Same optimization, completely different impact. Before investing in pull optimization (compression changes, lazy-pull infrastructure, registry tuning), profile your engine’s startup. If startup dominates, the pull isn’t where the time goes.&lt;/P&gt;
&lt;H2&gt;Why gzip hurts: model weights are incompressible&lt;/H2&gt;
&lt;P&gt;The AIKit image is 8.7 GB uncompressed, 6.6 GB with gzip (a modest 0.76x ratio). But this ratio hides what’s really happening:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Layer type&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Size&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;% of image&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Gzip ratio&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Model weights (GGUF)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;4.9 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;56%&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~1.00x (quantized binary, no redundancy)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;CUDA + system layers&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~3.8 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;44%&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~0.46x (compresses well)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;The GGUF file is already quantized to 4-bit precision. Gzip reads every byte, burns CPU, and produces output the same size as the input. You’re paying full decompression cost on 56% of the image for zero size reduction.&lt;/P&gt;
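&lt;P&gt;The asymmetry is easy to demonstrate in a few lines of Python: high-entropy random bytes stand in for quantized weights, and repetitive text stands in for the system layers. The ratios below are illustrative, not measurements of the real image.&lt;/P&gt;

```python
import gzip
import os

# High-entropy data as a proxy for quantized GGUF weights;
# repetitive text as a proxy for the Python/CUDA system layers.
weights_like = os.urandom(1_000_000)
system_like = b"import torch\n" * 80_000

ratio_weights = len(gzip.compress(weights_like)) / len(weights_like)
ratio_system = len(gzip.compress(system_like)) / len(system_like)

# Random data does not shrink at all; repetitive data collapses.
print(f"weights-like: {ratio_weights:.2f}x, system-like: {ratio_system:.2f}x")
```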
&lt;P&gt;&lt;STRONG&gt;Bottom line:&lt;/STRONG&gt; gzip is doing real work on less than half your image and producing zero savings on the rest. Dropping it costs nothing and removes a bottleneck from every cold start.&lt;/P&gt;
&lt;H2&gt;The Nydus prefetch finding&lt;/H2&gt;
&lt;P&gt;If decompression is the bottleneck, what about skipping the full pull entirely?&lt;/P&gt;
&lt;P&gt;Nydus lazy-pull takes a fundamentally different approach: it fetches only manifest metadata during “pull” (~0.7s), then streams model data on-demand via FUSE as the container reads it. Nydus TTFT isn’t directly comparable to the full-pull methods above because the download cost shifts from the pull column to the inference column.&lt;/P&gt;
&lt;P&gt;With prefetch enabled, Nydus achieved 77.8s TTFT for llama.cpp vs 139.1s for overlayfs gzip. The critical detail is the prefetch_all flag:&lt;/P&gt;
&lt;P&gt;Figure 4: Nydus prefetch ON vs OFF. One config flag, 2.87x difference. Overlayfs gzip shown as baseline.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Configuration&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1st Inference&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;TTFT&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Nydus, prefetch ON&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;72.4s ±0.6&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;77.8s ±0.5&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Nydus, prefetch OFF&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;218.6s ±2.9&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;223.4s ±2.9&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;overlayfs gzip (baseline)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;44.0s ±0.4&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;139.1s ±1.9&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;EM&gt;n=3 per config, 9/9 runs succeeded. Data: &lt;/EM&gt;&lt;A href="https://github.com/robert-cronin/erofs-repro-repo/blob/main/results/03-prefetch-config-20260401-030725.csv" target="_blank" rel="noopener"&gt;&lt;EM&gt;03-prefetch-config-20260401-030725.csv&lt;/EM&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;One flag in nydusd-config.json makes a &lt;STRONG&gt;2.87x difference&lt;/STRONG&gt;. Without prefetch, every model weight page fault fires an individual HTTP range request to the registry. With prefetch_all=true, Nydus streams the full blob in the background while the container starts, so chunks arrive ahead of the GPU’s read pattern.&lt;/P&gt;
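&lt;P&gt;For orientation, prefetch lives in the nydusd RAFS configuration; the fragment below is an illustrative sketch (field names and defaults vary across Nydus versions, so verify it against your deployment's documentation rather than copying it verbatim):&lt;/P&gt;

```json
{
  "fs_prefetch": {
    "enable": true,
    "threads_count": 4
  }
}
```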
&lt;P&gt;Even with prefetch, Nydus first inference is ~28s slower than overlayfs (72s vs 44s) due to FUSE kernel-user roundtrips during model mmap. Nydus wins on total TTFT because it eliminates the blocking pull, but this overhead means its advantage shrinks on faster networks.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Bottom line:&lt;/STRONG&gt; Nydus lazy-pull can halve cold-start for pull-bound engines, but only if prefetch is on. Treat prefetch_all=true as a hard requirement, not a tuning knob.&lt;/P&gt;
&lt;H2&gt;How to apply these findings&lt;/H2&gt;
&lt;H3&gt;Pick your optimization by engine type&lt;/H3&gt;
&lt;P&gt;The right optimization depends on where your engine spends its cold-start time. This table summarizes the tradeoffs:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Engine type&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Dominant phase&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Speedup from dropping gzip&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Nydus viable?&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Best optimization&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;What NOT to optimize&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;vLLM / TensorRT-LLM&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Startup (56%)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;1.02x&lt;/STRONG&gt; — negligible&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No — FUSE + Python/CUDA stack exceeded 900s in our tests&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Cache torch.compile artifacts and CUDA graphs&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Pull pipeline (it’s &amp;lt;44% of TTFT and already fast enough)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;llama.cpp / ONNX Runtime&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Pull (64%)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;1.35x&lt;/STRONG&gt; — 32s saved&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes, with prefetch_all=true (77.8s TTFT vs 139s gzip baseline)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Drop gzip on weight layers; consider lazy-pull on slow links&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Startup (already 2–5s; no room to improve)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Large dense models (70B+)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Pull (projected)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;gt;1.35x&lt;/STRONG&gt; — scales with image size&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes, strongest case for lazy-pull&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Uncompressed or zstd; Nydus prefetch on bandwidth-constrained links&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;—&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;col style="width: 16.67%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H3&gt;Recommendations&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Profile your engine’s startup before touching the pull pipeline.&lt;/STRONG&gt; If CUDA compilation dominates (vLLM, TensorRT-LLM), no amount of pull optimization will help. Cache torch.compile artifacts and CUDA graphs instead — production clusters that do this reduce vLLM restarts to ~45–60s.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Drop gzip on model weight layers.&lt;/STRONG&gt; For pull-bound engines (llama.cpp, ONNX Runtime), this is the single highest-ROI change: build with --output=type=image,compression=uncompressed, or use &lt;A href="https://github.com/kaito-project/aikit" target="_blank" rel="noopener"&gt;AIKit&lt;/A&gt;, which defaults to uncompressed weight layers. Quantized model weights (GGUF, safetensors) are already dense binary — gzip burns CPU for a ~1.00x compression ratio on 56% of the image.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;If using Nydus, set &lt;/STRONG&gt;&lt;STRONG&gt;prefetch_all=true&lt;/STRONG&gt;&lt;STRONG&gt;.&lt;/STRONG&gt; Without it, every weight page fault triggers an individual HTTP range request and cold-start is &lt;STRONG&gt;2.87x slower&lt;/STRONG&gt;. This is a single flag in nydusd-config.json.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Package models as signed OCI artifacts, not volume mounts.&lt;/STRONG&gt; Three CNCF projects implement this pipeline end-to-end: &lt;A href="https://github.com/modelpack/model-spec" target="_blank" rel="noopener"&gt;ModelPack&lt;/A&gt; defines the OCI artifact spec (model metadata, architecture, quantization format). &lt;A href="https://github.com/kaito-project/aikit" target="_blank" rel="noopener"&gt;AIKit&lt;/A&gt; builds ModelPack-compliant images with Cosign signatures, SBOMs, and provenance attestations — supply chain guarantees you lose when weights live on a shared drive. &lt;A href="https://github.com/kaito-project/kaito" target="_blank" rel="noopener"&gt;KAITO&lt;/A&gt; handles the Kubernetes deployment: GPU node provisioning, inference engine setup, and API exposure. Together they cover packaging → build → deploy, and they produce the exact image layout these benchmarks measured.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H3&gt;Why this matters: the cost of cold-start&lt;/H3&gt;
&lt;P&gt;On an A100 node (~$3–4/hr on major clouds), a 5-minute vLLM cold start burns ~$0.30 in idle GPU time per pod. That sounds small until you multiply it: a cluster that scales 50 pods to zero overnight and restarts them each morning wastes ~$15/day — over $5,000/year — on GPUs sitting idle during pull and CUDA compilation. More critically, cold-start latency determines whether scale-to-zero is feasible at all. If cold-start exceeds your SLO (say, 30s for an interactive app), you’re forced to keep warm replicas running 24/7, which can 2–3x your GPU spend. Cutting llama.cpp cold-start from 139s to 103s by dropping gzip doesn’t just save 36 seconds — it moves the needle on whether autoscaling is viable for your workload.&lt;/P&gt;
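&lt;P&gt;The arithmetic behind those figures is straightforward; this sketch assumes a $3.50/hr A100 (the midpoint of the quoted range) and a 5-minute cold start:&lt;/P&gt;

```python
# Assumptions: mid-range A100 price and a ~5-minute vLLM cold start.
gpu_hourly = 3.50      # USD per hour (assumed)
cold_start_s = 300     # seconds of idle GPU per cold start
pods = 50              # pods scaled to zero and restarted each morning

per_pod = gpu_hourly * cold_start_s / 3600
per_day = per_pod * pods
per_year = per_day * 365
print(f"per pod: ${per_pod:.2f}  per day: ${per_day:.2f}  per year: ${per_year:,.0f}")
```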
&lt;H2&gt;What this doesn’t cover&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;zstd compression:&lt;/STRONG&gt; decompresses 5–10x faster than gzip; containerd supports it natively. The most obvious gap in this analysis.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Pre-pulling and caching:&lt;/STRONG&gt; production clusters pre-pull images and cache CUDA graphs, reducing vLLM restarts to ~45–60s. We measure the cold case: scale-from-zero events and first-time deployments.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Volume-mounted weights:&lt;/STRONG&gt; skips the pull entirely, but loses supply chain properties (signing, scanning, provenance).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Larger models (70B+):&lt;/STRONG&gt; pull would dominate more, increasing the gzip penalty.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Sample size:&lt;/STRONG&gt; n=3 per AIKit variant, n=2–3 per vLLM variant. The gzip finding for llama.cpp is statistically significant (Welch’s t-test, p=0.0014, Cohen’s d=16.3; &lt;A href="https://github.com/robert-cronin/erofs-repro-repo/blob/main/results/verify-significance.py" target="_blank" rel="noopener"&gt;verification script&lt;/A&gt;). Other comparisons are directional.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Reproduce it&lt;/H2&gt;
&lt;P&gt;Scripts and raw data: &lt;A href="https://github.com/robert-cronin/erofs-repro-repo" target="_blank" rel="noopener"&gt;erofs-repro-repo&lt;/A&gt;. Data for this post: &lt;A href="https://github.com/robert-cronin/erofs-repro-repo/blob/main/results/02-aikit-five-way-20260401-004716.csv" target="_blank" rel="noopener"&gt;02-aikit-five-way-20260401-004716.csv&lt;/A&gt; and &lt;A href="https://github.com/robert-cronin/erofs-repro-repo/blob/main/results/01-vllm-four-way-20260331-113848.csv" target="_blank" rel="noopener"&gt;01-vllm-four-way-20260331-113848.csv&lt;/A&gt;. Full analysis: &lt;A href="https://github.com/robert-cronin/erofs-benchmarks/blob/main/docs/report/README.md" target="_blank" rel="noopener"&gt;technical report&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 22:38:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/dissecting-llm-container-cold-start-where-the-time-actually-goes/ba-p/4508831</guid>
      <dc:creator>robcronin</dc:creator>
      <dc:date>2026-04-15T22:38:11Z</dc:date>
    </item>
    <item>
      <title>Agent Governance Toolkit: Architecture Deep Dive, Policy Engines, Trust, and SRE for AI Agents</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/agent-governance-toolkit-architecture-deep-dive-policy-engines/ba-p/4510105</link>
      <description>&lt;P&gt;Last week we announced the &lt;A class="lia-external-url" href="https://aka.ms/agt-opensource-blog" target="_blank"&gt;Agent Governance Toolkit&lt;/A&gt; on the Microsoft Open Source Blog, an open-source project that brings runtime security governance to autonomous AI agents. In that announcement, we covered the&amp;nbsp;&lt;STRONG&gt;why&lt;/STRONG&gt;: AI agents are making autonomous decisions in production, and the security patterns that kept systems safe for decades need to be applied to this new class of workload.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this post, we'll go deeper into the&amp;nbsp;&lt;STRONG&gt;how&lt;/STRONG&gt;: the architecture, the implementation details, and what it takes to run governed agents in production.&lt;/P&gt;
&lt;H2&gt;The Problem: Production Infrastructure Meets Autonomous Agents&lt;/H2&gt;
&lt;P&gt;If you manage production infrastructure, you already know the playbook: least privilege, mandatory access controls, process isolation, audit logging, and circuit breakers for cascading failures. These patterns have kept production systems safe for decades.&lt;/P&gt;
&lt;P&gt;Now imagine a new class of workload arriving on your infrastructure: AI agents that autonomously execute code, call APIs, read databases, and spawn sub-processes. They reason about what to do, select tools, and act in loops. And in many current deployments, they do all of this without the security controls you'd demand of any other production workload.&lt;/P&gt;
&lt;P&gt;That gap is what led us to build the &lt;A class="lia-external-url" href="https://aka.ms/agent-governance-toolkit" target="_blank"&gt;Agent Governance Toolkit&lt;/A&gt;: an open-source project that applies proven security concepts from operating systems, service meshes, and SRE to the emerging world of autonomous AI agents.&lt;/P&gt;
&lt;P&gt;To frame this in familiar terms: most AI agent frameworks today are like running every process as root, with no access controls, no isolation, and no audit trail. The Agent Governance Toolkit is the kernel, the service mesh, and the SRE platform for AI agents.&lt;/P&gt;
&lt;P&gt;When an agent calls a tool, say, `DELETE FROM users WHERE created_at &amp;lt; NOW()`, there is typically no policy layer checking whether that action is within scope. There is no identity verification when one agent communicates with another. There is no resource limit preventing an agent from making 10,000 API calls in a minute. And there is no circuit breaker to contain cascading failures when things go wrong.&lt;/P&gt;
&lt;H2&gt;OWASP Agentic Security Initiative&lt;/H2&gt;
&lt;P&gt;In December 2025, &lt;A class="lia-external-url" href="https://aka.ms/agt-owasp" target="_blank"&gt;OWASP published the Agentic AI Top 10:&lt;/A&gt;&amp;nbsp;the first formal taxonomy of risks specific to autonomous AI agents. The list reads like a security engineer's nightmare: goal hijacking, tool misuse, identity abuse, memory poisoning, cascading failures, rogue agents, and more.&lt;/P&gt;
&lt;P&gt;If you've ever hardened a production server, these risks will feel both familiar and urgent. The Agent Governance Toolkit is designed to help address all 10 of these risks through deterministic policy enforcement, cryptographic identity, execution isolation, and reliability engineering patterns.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;: The OWASP Agentic Security Initiative has since adopted the ASI 2026 taxonomy (ASI01–ASI10). The toolkit's copilot-governance package now uses these identifiers with backward compatibility for the original AT numbering.&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;Architecture: Nine Packages, One Governance Stack&lt;/H2&gt;
&lt;P&gt;The toolkit is structured as a v3.0.0 Public Preview monorepo with nine independently &lt;A class="lia-external-url" href="https://aka.ms/agt-install" target="_blank"&gt;installable packages:&lt;/A&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Package&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;What It Does&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Agent OS&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Stateless policy engine, intercepts agent actions before execution with configurable pattern matching and semantic intent classification&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Agent Mesh&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Cryptographic identity (DIDs with Ed25519), Inter-Agent Trust Protocol (IATP), and trust-gated communication between agents&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Agent Hypervisor&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Execution rings inspired by CPU privilege levels, saga orchestration for multi-step transactions, and shared session management&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Agent Runtime&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Runtime supervision with kill switches, dynamic resource allocation, and execution lifecycle management&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Agent SRE&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;SLOs, error budgets, circuit breakers, chaos engineering, and progressive delivery: production reliability practices adapted for AI agents&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Agent Compliance&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Automated governance verification with compliance grading and regulatory framework mapping (EU AI Act, NIST AI RMF, HIPAA, SOC 2)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Agent Lightning&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Reinforcement learning training governance with policy-enforced runners and reward shaping&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Agent Marketplace&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Plugin lifecycle management with Ed25519 signing, trust-tiered capability gating, and SBOM generation&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Integrations&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;20+ framework adapters for LangChain, CrewAI, AutoGen, Semantic Kernel, Google ADK, Microsoft Agent Framework, OpenAI Agents SDK, and more&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2&gt;Agent OS: The Policy Engine&lt;/H2&gt;
&lt;P&gt;Agent OS intercepts agent tool calls before they execute:&lt;/P&gt;
&lt;P&gt;from agent_os import StatelessKernel, ExecutionContext, Policy&lt;BR /&gt;&lt;BR /&gt;kernel = StatelessKernel()&lt;BR /&gt;ctx = ExecutionContext(&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; agent_id="analyst-1",&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; policies=[&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Policy.read_only(),&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; # No write operations&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Policy.rate_limit(100, "1m"),&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; # Max 100 calls/minute&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Policy.require_approval(&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; actions=["delete_*", "write_production_*"],&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; min_approvals=2,&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; approval_timeout_minutes=30,&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ),&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; ],&lt;BR /&gt;)&lt;BR /&gt;&lt;BR /&gt;result = await kernel.execute(&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; action="delete_user_record",&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; params={"user_id": 12345},&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; context=ctx,&lt;BR /&gt;)&lt;/P&gt;
&lt;P&gt;The policy engine works in two layers: configurable pattern matching (with sample rule sets for SQL injection, privilege escalation, and prompt injection that users customize for their environment) and a semantic intent classifier that helps detect dangerous goals regardless of phrasing. When an action is classified as `DESTRUCTIVE_DATA`, `DATA_EXFILTRATION`, or `PRIVILEGE_ESCALATION`, the engine blocks it, routes it for human approval, or downgrades the agent's trust level, depending on the configured policy.&lt;/P&gt;
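&lt;P&gt;As a rough illustration of the two layers, the standalone sketch below pairs a pattern pass with a toy intent classifier. The patterns, category names, and `evaluate` function are hypothetical stand-ins for exposition, not the toolkit's actual API:&lt;/P&gt;

```python
import re
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    REQUIRE_APPROVAL = "require_approval"

# Layer 1: pattern rules (in the toolkit these would live in YAML config).
PATTERN_RULES = [
    (re.compile(r"drop\s+table|;\s*--", re.I), Verdict.BLOCK),              # crude SQL-injection tell
    (re.compile(r"\bsudo\b|chmod\s+777", re.I), Verdict.REQUIRE_APPROVAL),  # privilege escalation
]

# Layer 2: a toy stand-in for the semantic intent classifier.
DANGEROUS = {"DESTRUCTIVE_DATA", "DATA_EXFILTRATION", "PRIVILEGE_ESCALATION"}

def classify_intent(action: str) -> str:
    """Map an action name to a coarse intent category (stand-in for a model)."""
    if action.startswith(("delete_", "drop_")):
        return "DESTRUCTIVE_DATA"
    if "export" in action or "exfiltrate" in action:
        return "DATA_EXFILTRATION"
    return "BENIGN"

def evaluate(action: str, payload: str) -> Verdict:
    # Patterns run first; the intent check catches dangerous goals the patterns miss.
    for pattern, verdict in PATTERN_RULES:
        if pattern.search(payload):
            return verdict
    if classify_intent(action) in DANGEROUS:
        return Verdict.REQUIRE_APPROVAL
    return Verdict.ALLOW
```

&lt;P&gt;In this sketch, `evaluate("delete_user_record", "user_id=12345")` routes to approval even though no pattern matches, because the intent layer classifies the goal as destructive regardless of phrasing.&lt;/P&gt;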
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Important&lt;/STRONG&gt;: All policy rules, detection patterns, and sensitivity thresholds are externalized to YAML configuration files. The toolkit ships with sample configurations in `examples/policies/` that must be reviewed and customized before production deployment. No built-in rule set should be considered exhaustive. Policy languages supported: YAML, OPA Rego, and Cedar.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The kernel is stateless by design: each request carries its own context. This means you can deploy it behind a load balancer, as a sidecar container in Kubernetes, or in a serverless function, with no shared state to manage. On AKS or any Kubernetes cluster, it fits naturally into existing deployment patterns. Helm charts are available for agent-os, agent-mesh, and agent-sre.&lt;/P&gt;
&lt;H2&gt;Agent Mesh: Zero-Trust Identity for Agents&lt;/H2&gt;
&lt;P&gt;In service mesh architectures, services prove their identity via mTLS certificates before communicating. Agent Mesh applies the same principle to AI agents using decentralized identifiers (DIDs) with Ed25519 cryptography and the Inter-Agent Trust Protocol (IATP):&lt;/P&gt;
&lt;P&gt;from agentmesh import AgentIdentity, TrustBridge&lt;BR /&gt;&lt;BR /&gt;identity = AgentIdentity.create(&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; name="data-analyst",&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; sponsor="alice@company.com",&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; # Human accountability&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; capabilities=["read:data", "write:reports"],&lt;BR /&gt;)&lt;BR /&gt;# identity.did -&amp;gt; "did:mesh:data-analyst:a7f3b2..."&lt;BR /&gt;&lt;BR /&gt;bridge = TrustBridge()&lt;BR /&gt;verification = await bridge.verify_peer(&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; peer_id="did:mesh:other-agent",&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; required_trust_score=700,&amp;nbsp; # Must score &amp;gt;= 700/1000&lt;BR /&gt;)&lt;/P&gt;
&lt;P&gt;A critical feature is&amp;nbsp;&lt;STRONG&gt;trust decay&lt;/STRONG&gt;: an agent's trust score decreases over time without positive signals. An agent trusted last week but silent since then gradually becomes untrusted, modeling the reality that trust requires ongoing demonstration, not a one-time grant.&lt;/P&gt;
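&lt;P&gt;A minimal way to model trust decay is exponential decay with a configurable half-life. The sketch below is illustrative only; the half-life value and function name are assumptions, not AgentMesh's actual algorithm:&lt;/P&gt;

```python
def decayed_trust(score: float, days_since_last_signal: float,
                  half_life_days: float = 14.0) -> float:
    """Decay a 0-1000 trust score exponentially while no positive signals arrive.

    half_life_days is an assumed tuning parameter: after each half-life
    with no activity, the effective score halves.
    """
    return score * 0.5 ** (days_since_last_signal / half_life_days)
```

&lt;P&gt;Under these assumed parameters, an agent at 800 that has been silent for two weeks decays to 400 and would fail the 700-point peer verification shown above.&lt;/P&gt;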
&lt;P&gt;Delegation chains enforce &lt;STRONG&gt;scope narrowing&lt;/STRONG&gt;: a parent agent with read+write permissions can delegate only read access to a child agent, never escalate.&lt;/P&gt;
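&lt;P&gt;Scope narrowing reduces to a subset check at delegation time. A minimal sketch of the idea (the function name and error type are illustrative, not the AgentMesh API):&lt;/P&gt;

```python
def delegate(parent_scopes: set[str], requested_scopes: set[str]) -> set[str]:
    """Grant a child agent only scopes the parent itself holds.

    Any attempt to escalate (requesting a scope outside the parent's set)
    is rejected outright rather than silently trimmed.
    """
    escalation = requested_scopes - parent_scopes
    if escalation:
        raise PermissionError(f"cannot delegate beyond parent scope: {sorted(escalation)}")
    return set(requested_scopes)
```

&lt;P&gt;A parent holding `read:data` and `write:reports` can hand down `read:data` alone, but a request for `admin:users` raises rather than widening the chain.&lt;/P&gt;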
&lt;H2&gt;Agent Hypervisor: Execution Rings&lt;/H2&gt;
&lt;P&gt;CPU architectures use privilege rings (Ring 0 for kernel, Ring 3 for userspace) to isolate workloads. The Agent Hypervisor applies this model to AI agents:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Ring&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Trust Level&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Capabilities&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Ring 0 (Kernel)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Score ≥ 900&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Full system access, can modify policies&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Ring 1 (Supervisor)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Score ≥ 700&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Cross-agent coordination, elevated tool access&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Ring 2 (User)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Score ≥ 400&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Standard tool access within assigned scope&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Ring 3 (Untrusted)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Score &amp;lt; 400&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Read-only, sandboxed execution only&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;New and untrusted agents start in Ring 3 and earn their way up: exactly the principle of least privilege that production engineers apply to every other workload.&lt;/P&gt;
&lt;P&gt;Each ring enforces per-agent resource limits: maximum execution time, memory caps, CPU throttling, and request rate limits. If a Ring 2 agent attempts a Ring 1 operation, it gets blocked, just like a userspace process trying to access kernel memory.&lt;/P&gt;
&lt;P&gt;These ring definitions and their associated trust score thresholds are fully configurable via policy. Organizations can define custom ring structures, adjust the number of rings, set different trust score thresholds for transitions, and configure per-ring resource limits to match their security requirements.&lt;/P&gt;
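&lt;P&gt;Mechanically, ring assignment is a threshold lookup over the configured score boundaries. A sketch using the defaults from the table above (the data layout is illustrative, not the hypervisor's configuration schema):&lt;/P&gt;

```python
# (min_score, ring) pairs mirroring the table; any score below 400 is Ring 3.
DEFAULT_RING_THRESHOLDS = [(900, 0), (700, 1), (400, 2)]

def ring_for(score: int, thresholds=DEFAULT_RING_THRESHOLDS) -> int:
    """Return the most privileged ring whose minimum score the agent meets."""
    for min_score, ring in sorted(thresholds, reverse=True):
        if score >= min_score:
            return ring
    return 3  # untrusted by default
```

&lt;P&gt;Because the thresholds are plain data, an organization can swap in its own list (more rings, different boundaries) without touching the lookup logic.&lt;/P&gt;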
&lt;P&gt;The hypervisor also provides&amp;nbsp;&lt;STRONG&gt;saga orchestration&lt;/STRONG&gt;&amp;nbsp;for multi-step operations. When an agent executes a sequence such as draft email → send → update CRM and the final step fails, compensating actions fire in reverse. Borrowed from distributed transaction patterns, this ensures multi-agent workflows maintain consistency even when individual steps fail.&lt;/P&gt;
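&lt;P&gt;The compensation logic can be sketched in a few lines: run each step, remember its undo action, and on failure replay the undo actions in reverse. This is a generic saga skeleton, not the hypervisor's orchestration API:&lt;/P&gt;

```python
def run_saga(steps):
    """steps: a list of (action, compensate) callables.

    Executes actions in order; if one raises, fires the compensating
    actions for the already-completed steps in reverse order, then re-raises.
    """
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()
        raise
```

&lt;P&gt;For the draft email → send → update CRM example, a CRM failure would trigger the send step's compensation first (for example, a recall notice) and then the draft's cleanup, in that order.&lt;/P&gt;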
&lt;H2&gt;Agent SRE: SLOs and Circuit Breakers for Agents&lt;/H2&gt;
&lt;P&gt;If you practice SRE, you measure services by SLOs and manage risk through error budgets. Agent SRE extends this to AI agents:&lt;/P&gt;
&lt;P&gt;When an agent's safety SLI drops below 99 percent, meaning more than 1 percent of its actions violate policy, the system automatically restricts the agent's capabilities until it recovers. This is the same error-budget model that SRE teams use for production services, applied to agent behavior.&lt;/P&gt;
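&lt;P&gt;The error-budget arithmetic is simple enough to show directly. This sketch assumes a 99 percent safety SLO; the function names are illustrative, not the Agent SRE API:&lt;/P&gt;

```python
def remaining_error_budget(total_actions: int, violations: int, slo: float = 0.99) -> float:
    """Fraction of the violation budget still unspent (negative means overspent)."""
    budget = total_actions * (1.0 - slo)  # violations the SLO tolerates
    if budget == 0:
        return 0.0
    return 1.0 - violations / budget

def should_restrict(total_actions: int, violations: int, slo: float = 0.99) -> bool:
    """Restrict the agent's capabilities once the budget is exhausted."""
    return remaining_error_budget(total_actions, violations, slo) <= 0
```

&lt;P&gt;Out of 1,000 actions, a 99 percent SLO tolerates 10 policy violations; at 12 violations the budget is overspent and the agent is restricted until its SLI recovers.&lt;/P&gt;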
&lt;P&gt;We also built nine chaos engineering fault injection templates, including network delays, LLM provider failures, tool timeouts, trust score manipulation, memory corruption, and concurrent access races, because the only way to know whether your agent system is resilient is to break it intentionally.&lt;/P&gt;
&lt;P&gt;Agent SRE integrates with your existing observability stack through adapters for Datadog, PagerDuty, Prometheus, OpenTelemetry, Langfuse, LangSmith, Arize, MLflow, and more. Message broker adapters support Kafka, Redis, NATS, Azure Service Bus, AWS SQS, and RabbitMQ.&lt;/P&gt;
&lt;H2&gt;Compliance and Observability&lt;/H2&gt;
&lt;P&gt;If your organization already maps to CIS Benchmarks, NIST AI RMF, or other frameworks for infrastructure compliance, the OWASP Agentic Top 10 is the equivalent standard for AI agent workloads. The toolkit's agent-compliance package provides automated governance grading against these frameworks.&lt;/P&gt;
&lt;P&gt;The toolkit is framework-agnostic, with 20+ adapters that hook into each framework's native extension points, so adding governance to an existing agent is typically a few lines of configuration, not a rewrite.&lt;/P&gt;
&lt;P&gt;The toolkit exports metrics to any OpenTelemetry-compatible platform, Prometheus, Grafana, Datadog, Arize, or Langfuse. If you're already running an observability stack for your infrastructure, agent governance metrics flow through the same pipeline.&lt;/P&gt;
&lt;P&gt;Key metrics include: policy decisions per second, trust score distributions, ring transitions, SLO burn rates, circuit breaker state, and governance workflow latency.&lt;/P&gt;
&lt;H2&gt;Getting Started&lt;/H2&gt;
&lt;P&gt;# Install all packages&lt;BR /&gt;pip install agent-governance-toolkit[full]&lt;BR /&gt;&lt;BR /&gt;# Or individual packages&lt;BR /&gt;pip install agent-os-kernel agent-mesh agent-sre&lt;/P&gt;
&lt;P&gt;The toolkit is available across language ecosystems: Python, TypeScript (`@microsoft/agentmesh-sdk` on npm), Rust, Go, and .NET (`Microsoft.AgentGovernance` on NuGet).&lt;/P&gt;
&lt;H2&gt;Azure Integrations&lt;/H2&gt;
&lt;P&gt;While the toolkit is platform-agnostic, we've included integrations that offer the fastest path to production on Azure:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Kubernetes Service (AKS):&lt;/STRONG&gt; Deploy the policy engine as a sidecar container alongside your agents. Helm charts provide production-ready manifests for agent-os, agent-mesh, and agent-sre.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure AI Foundry Agent Service:&lt;/STRONG&gt; Use the built-in middleware integration for agents deployed through Azure AI Foundry.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;OpenClaw Sidecar:&lt;/STRONG&gt; One compelling deployment scenario is running&amp;nbsp;&lt;A class="lia-external-url" href="https://github.com/openclaw" target="_blank"&gt;OpenClaw&lt;/A&gt;, the open-source autonomous agent, inside a container with the Agent Governance Toolkit deployed as a sidecar. This gives you policy enforcement, identity verification, and SLO monitoring over OpenClaw's autonomous operations. On Azure Kubernetes Service (AKS), the deployment is a standard pod with two containers: OpenClaw as the primary workload and the governance toolkit as the sidecar, communicating over localhost. We have a reference architecture and&amp;nbsp;&lt;A class="lia-external-url" href="https://aka.ms/agt-helm" target="_blank"&gt;Helm chart available in the repository&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;The same sidecar pattern works with any containerized agent; OpenClaw is a particularly compelling example because of the interest in autonomous agent safety.&lt;/P&gt;
&lt;H2&gt;Tutorials and Resources&lt;/H2&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://aka.ms/agt-tutorials" target="_blank"&gt;34+ step-by-step tutorials&lt;/A&gt; covering policy engines, trust, compliance, MCP security, observability, and cross-platform SDK usage are available in the repository.&lt;/P&gt;
&lt;P&gt;git clone https://github.com/microsoft/agent-governance-toolkit&lt;BR /&gt;cd agent-governance-toolkit&lt;BR /&gt;pip install -e "packages/agent-os[dev]" -e "packages/agent-mesh[dev]" -e "packages/agent-sre[dev]"&lt;BR /&gt;&lt;BR /&gt;# Run the demo&lt;BR /&gt;python -m agent_os.demo&lt;/P&gt;
&lt;H2&gt;What's Next&lt;/H2&gt;
&lt;P&gt;AI agents are becoming autonomous decision-makers in production infrastructure, executing code, managing databases, and orchestrating services. The security patterns that have kept production systems safe for decades (least privilege, mandatory access controls, process isolation, audit logging) are exactly what these new workloads need. We built them. They're open source.&lt;/P&gt;
&lt;P&gt;We're building this in the open because agent security is too important for any single organization to solve alone:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Security research&lt;/STRONG&gt;: Adversarial testing, red-team results, and vulnerability reports strengthen the toolkit for everyone.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Community contributions&lt;/STRONG&gt;: Framework adapters, detection rules, and compliance mappings from the community expand coverage across ecosystems.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We are committed to open governance. We're releasing this project under Microsoft today, and we aspire to move it into a foundation home, such as the AI and Data Foundation (AAIF), where it can benefit from cross-industry stewardship. We're actively engaging with foundation partners on this path.&lt;/P&gt;
&lt;P&gt;The Agent Governance Toolkit is open source under the MIT license. Contributions welcome at&amp;nbsp;&lt;A class="lia-external-url" href="https://aka.ms/agent-governance-toolkit" target="_blank"&gt;github.com/microsoft/agent-governance-toolkit&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2026 04:55:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/agent-governance-toolkit-architecture-deep-dive-policy-engines/ba-p/4510105</guid>
      <dc:creator>mosiddi</dc:creator>
      <dc:date>2026-04-10T04:55:22Z</dc:date>
    </item>
    <item>
      <title>DPDK 25.11 Performance on Azure for High-Speed Packet Workloads</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/dpdk-25-11-performance-on-azure-for-high-speed-packet-workloads/ba-p/4424905</link>
      <description>&lt;P&gt;At Microsoft Azure, performance is treated as an ongoing discipline grounded in careful engineering and real-world validation. As cloud workloads grow in scale and variety, customers depend on consistent, high-throughput networking. Technologies such as the Data Plane Development Kit (DPDK) play a key role in meeting these expectations&lt;/P&gt;
&lt;P&gt;To support customers running advanced network functions, we’ve released our latest performance report based on DPDK 25.11. It is now available in the DPDK performance catalog (&lt;A href="https://fast.dpdk.org/doc/perf/DPDK_25_11_Microsoft_NIC_performance_report.pdf" target="_blank" rel="noopener"&gt;Microsoft Azure DPDK Performance Report&lt;/A&gt;). The report provides a clear view of how DPDK performs on Microsoft-developed Azure Boost within Azure infrastructure, with detailed insights into packet processing across a range of scenarios, from small packet sizes to multi-core scaling.&lt;/P&gt;
&lt;H4&gt;Why We Test DPDK on Azure&lt;/H4&gt;
&lt;P&gt;DPDK is widely used for high-performance packet processing in virtualized environments. It powers a range of workloads from customer-deployed virtual network functions to internal Azure network appliances.&lt;/P&gt;
&lt;P&gt;But simply enabling DPDK is not enough. To ensure optimal performance, we validate it under realistic conditions, including:&lt;/P&gt;
&lt;UL data-spread="false"&gt;
&lt;LI&gt;Azure VM configurations with Accelerated Networking&lt;/LI&gt;
&lt;LI&gt;NUMA-aware memory and CPU alignment&lt;/LI&gt;
&lt;LI&gt;Hugepage-backed memory allocation&lt;/LI&gt;
&lt;LI&gt;Multi-core PMD thread scaling&lt;/LI&gt;
&lt;LI&gt;Packet forwarding using real traffic generators&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This helps us understand how DPDK performs in actual cloud environments, not just idealized lab setups.&lt;/P&gt;
&lt;H4&gt;What the Report Covers&lt;/H4&gt;
&lt;P&gt;The DPDK 25.11 report includes performance benchmarks across different frame sizes, ranging from 64 bytes to 1518 bytes. It also evaluates CPU usage, queue configuration, and latency stability across various test conditions.&lt;/P&gt;
&lt;P&gt;Key Report Highlights:&lt;/P&gt;
&lt;UL data-spread="false"&gt;
&lt;LI&gt;Line-rate throughput is achievable at common frame sizes when vCPUs are pinned correctly and memory is properly configured&lt;/LI&gt;
&lt;LI&gt;Low jitter and consistent latency are observed across multi-queue and multi-core tests&lt;/LI&gt;
&lt;LI&gt;Performance scales nearly linearly with additional cores, especially for smaller packet sizes&lt;/LI&gt;
&lt;LI&gt;Queue and PMD thread alignment with the NUMA layout plays a critical role in maximizing efficiency&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;All tests were performed using Azure VM SKUs equipped with Microsoft NICs and configured for optimal isolation and performance.&lt;/P&gt;
&lt;H4&gt;Why We Shared This with the Community&lt;/H4&gt;
&lt;P&gt;Publishing this report reflects our commitment to open engineering and ecosystem collaboration. We believe performance transparency benefits everyone in the ecosystem, including developers, operators, and customers.&lt;/P&gt;
&lt;P&gt;Here are a few reasons why we share:&lt;/P&gt;
&lt;UL data-spread="false"&gt;
&lt;LI&gt;It helps customers plan and tune their workloads using validated performance envelopes&lt;/LI&gt;
&lt;LI&gt;It enables vendors and contributors to optimize drivers, firmware, and applications based on real-world data&lt;/LI&gt;
&lt;LI&gt;It encourages reproducibility and standardization in cloud DPDK benchmarking&lt;/LI&gt;
&lt;LI&gt;It creates a feedback loop between Azure, the DPDK community, and our partners&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Our goal is not just to test internally but to foster open dialogue and measurable improvement across platforms.&lt;/P&gt;
&lt;H4&gt;Recommendations for Running DPDK on Azure&lt;/H4&gt;
&lt;P&gt;Based on the test results, we offer the following best practices for customers deploying DPDK-based applications:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 73.3333%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;th&gt;Area&lt;/th&gt;&lt;th&gt;Recommendation&lt;/th&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;VM Selection&lt;/td&gt;&lt;td&gt;Choose Accelerated Networking-enabled SKUs like D, Fsv2, or Eav4&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;CPU Pinning&lt;/td&gt;&lt;td&gt;Use dedicated cores for PMD threads and align with NUMA topology&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Memory&lt;/td&gt;&lt;td&gt;Configure hugepages and allocate memory from the local NUMA node&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Queue Mapping&lt;/td&gt;&lt;td&gt;Match RX and TX queues to available vCPUs to avoid contention&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Packet Generator&lt;/td&gt;&lt;td&gt;Use pktgen-dpdk or testpmd with controlled traffic profiles&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 24.7602%" /&gt;&lt;col style="width: 75.2594%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;These settings can significantly improve consistency and peak throughput across many DPDK scenarios.&lt;/P&gt;
&lt;H4&gt;Get Involved and Reproduce the Results&lt;/H4&gt;
&lt;P&gt;We invite you to read the full report and try the configurations in your own environment. Whether you are running a firewall, a router, or a telemetry appliance, DPDK on Azure offers scalable performance with the right tuning.&lt;/P&gt;
&lt;P&gt;You can:&lt;/P&gt;
&lt;UL data-spread="false"&gt;
&lt;LI&gt;Download the report at &lt;A href="https://fast.dpdk.org/doc/perf/DPDK_25_11_Microsoft_NIC_performance_report.pdf" target="_blank" rel="noopener"&gt;Microsoft Azure DPDK Performance Report&amp;nbsp;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Replicate the test setup using Azure VMs and your preferred packet generator &lt;A href="https://github.com/mcgov/dpdk-perf" target="_blank" rel="noopener" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}"&gt;github.com/mcgov/dpdk-perf&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Share your feedback with us through GitHub or community channels, or send feedback to&amp;nbsp;&lt;A href="mailto:dpdk@microsoft.com" target="_blank" rel="noopener" data-tabster="{&amp;quot;restorer&amp;quot;:{&amp;quot;type&amp;quot;:1}}"&gt;dpdk@microsoft.com&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Suggest improvements or contribute new scenarios to future performance reports&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Conclusion&lt;/H3&gt;
&lt;P&gt;DPDK is a powerful enabler of high-performance networking in the cloud. With this report, we aim to make Azure performance data open, useful, and actionable. It reflects our ongoing investment in validating and improving the underlying infrastructure that supports mission-critical workloads.&lt;/P&gt;
&lt;P&gt;We thank the DPDK community for ongoing collaboration. We look forward to continued engagement as we scale performance transparency in cloud-native environments.&lt;/P&gt;</description>
      <pubDate>Wed, 01 Apr 2026 18:37:04 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/dpdk-25-11-performance-on-azure-for-high-speed-packet-workloads/ba-p/4424905</guid>
      <dc:creator>KashanK</dc:creator>
      <dc:date>2026-04-01T18:37:04Z</dc:date>
    </item>
    <item>
      <title>Run OpenClaw Agents on Azure Linux VMs (with Secure Defaults)</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/run-openclaw-agents-on-azure-linux-vms-with-secure-defaults/ba-p/4502944</link>
      <description>&lt;P data-source-line="7"&gt;Many teams want an enterprise-ready personal AI assistant, but they need it on infrastructure they control, with security boundaries they can explain to IT. That is exactly where OpenClaw fits on Azure.&lt;/P&gt;
&lt;P data-source-line="9"&gt;&lt;STRONG&gt;OpenClaw is a self-hosted, always-on personal agent runtime you run in your enterprise environment and Azure infrastructure.&lt;/STRONG&gt; Instead of relying only on a hosted chat app from a third-party provider, you can deploy, operate, and experiment with &lt;STRONG&gt;an agent on an Azure Linux VM you control&lt;/STRONG&gt; — &lt;STRONG&gt;using your existing GitHub Copilot licenses, Azure OpenAI deployments, or API plans from OpenAI, Anthropic Claude, Google Gemini, and other model providers you already subscribe to&lt;/STRONG&gt;. Once deployed on Azure, you can interact with an OpenClaw agent through familiar channels like Microsoft Teams, Slack, Telegram, WhatsApp, and many more!&lt;/P&gt;
&lt;P data-source-line="11"&gt;For Azure users, this gives you a practical middle ground: modern personal-agent workflows on familiar Azure infrastructure.&lt;/P&gt;
&lt;H2 data-source-line="13"&gt;What is OpenClaw, and how is it different from ChatGPT/Claude/chat apps?&lt;/H2&gt;
&lt;P data-source-line="15"&gt;OpenClaw is a self-hosted personal agent runtime that can be hosted on Azure compute infrastructure.&lt;/P&gt;
&lt;P data-source-line="17"&gt;How it differs:&lt;/P&gt;
&lt;UL data-source-line="19"&gt;
&lt;LI data-source-line="19"&gt;&lt;STRONG&gt;ChatGPT/Claude apps&lt;/STRONG&gt;&amp;nbsp;are primarily hosted chat experiences tied to one provider's models&lt;/LI&gt;
&lt;LI data-source-line="20"&gt;&lt;STRONG&gt;OpenClaw&lt;/STRONG&gt;&amp;nbsp;is an always-on runtime you operate yourself, backed by&amp;nbsp;&lt;STRONG&gt;your choice of model provider&lt;/STRONG&gt;&amp;nbsp;— GitHub Copilot, Azure OpenAI, OpenAI, Anthropic Claude, Google Gemini, and others&lt;/LI&gt;
&lt;LI data-source-line="21"&gt;&lt;STRONG&gt;OpenClaw&lt;/STRONG&gt; lets you keep the runtime boundary in your own Azure VM environment within your Azure enterprise subscription&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-source-line="23"&gt;In practice, OpenClaw is useful when you want a persistent assistant for operational and workflow tasks, with your own infrastructure as the control point. You bring whatever model provider and API plan you already have — OpenClaw connects to it.&lt;/P&gt;
&lt;H2 data-source-line="25"&gt;Why Azure Linux VMs?&lt;/H2&gt;
&lt;P data-source-line="27"&gt;Azure Linux VMs are a strong fit because they provide:&lt;/P&gt;
&lt;UL data-source-line="29"&gt;
&lt;LI data-source-line="29"&gt;&lt;STRONG&gt;A suitable host machine for the OpenClaw agent to run on&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI data-source-line="29"&gt;Enterprise-friendly infrastructure and identity workflows&lt;/LI&gt;
&lt;LI data-source-line="30"&gt;Repeatable provisioning via the Azure CLI&lt;/LI&gt;
&lt;LI data-source-line="31"&gt;Network hardening with NSG rules&lt;/LI&gt;
&lt;LI data-source-line="32"&gt;Managed SSH access through Azure Bastion instead of public SSH exposure&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 data-source-line="34"&gt;How to Set Up OpenClaw on an Azure Linux VM&lt;/H2&gt;
&lt;P data-source-line="36"&gt;This guide sets up an Azure Linux VM, applies NSG (Network Security Group) hardening, configures Azure Bastion for managed SSH access, and &lt;STRONG&gt;installs an always-on OpenClaw agent within the VM&lt;/STRONG&gt; that you can interact with through various messaging channels.&lt;/P&gt;
&lt;H3 data-source-line="38"&gt;What you'll do&lt;/H3&gt;
&lt;UL data-source-line="40"&gt;
&lt;LI data-source-line="40"&gt;Create Azure networking (VNet, subnets, NSG) and compute resources with the Azure CLI&lt;/LI&gt;
&lt;LI data-source-line="41"&gt;Apply Network Security Group rules so VM SSH is allowed only from Azure Bastion&lt;/LI&gt;
&lt;LI data-source-line="42"&gt;Use Azure Bastion for SSH access (no public IP on the VM)&lt;/LI&gt;
&lt;LI data-source-line="43"&gt;Install OpenClaw on the Azure VM&lt;/LI&gt;
&lt;LI data-source-line="44"&gt;Verify OpenClaw installation and configuration on the VM&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 data-source-line="46"&gt;What you need&lt;/H3&gt;
&lt;UL data-source-line="48"&gt;
&lt;LI data-source-line="48"&gt;An Azure subscription with permission to create compute and network resources&lt;/LI&gt;
&lt;LI data-source-line="49"&gt;Azure CLI installed (&lt;A href="https://learn.microsoft.com/cli/azure/install-azure-cli" target="_blank"&gt;install steps&lt;/A&gt;)&lt;/LI&gt;
&lt;LI data-source-line="50"&gt;An SSH key pair (the guide covers generating one if needed)&lt;/LI&gt;
&lt;LI data-source-line="51"&gt;~20–30 minutes&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 data-source-line="53"&gt;Configure deployment&lt;/H2&gt;
&lt;H3 data-source-line="55"&gt;Step 1: Sign in to Azure CLI&lt;/H3&gt;
&lt;LI-CODE lang="bash"&gt;az login                     # Select a suitable Azure subscription during Azure login
az extension add -n ssh      # SSH extension is required for Azure Bastion SSH&lt;/LI-CODE&gt;
&lt;P data-source-line="62"&gt;The ssh extension is required for Azure Bastion native SSH tunneling.&lt;/P&gt;
&lt;H3 data-source-line="64"&gt;Step 2: Register required resource providers (one-time)&lt;/H3&gt;
&lt;P&gt;Register the required Azure resource providers (a one-time registration):&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az provider register --namespace Microsoft.Compute
az provider register --namespace Microsoft.Network&lt;/LI-CODE&gt;
&lt;P data-source-line="71"&gt;Verify registration. Wait until both show &lt;EM&gt;Registered&lt;/EM&gt;.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az provider show --namespace Microsoft.Compute --query registrationState -o tsv
az provider show --namespace Microsoft.Network --query registrationState -o tsv&lt;/LI-CODE&gt;
&lt;H3 data-source-line="78"&gt;Step 3: Set deployment variables&lt;/H3&gt;
&lt;P&gt;Set the deployment environment variables that will be needed throughout this guide.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;RG="rg-openclaw"
LOCATION="westus2"
VNET_NAME="vnet-openclaw"
VNET_PREFIX="10.40.0.0/16"
VM_SUBNET_NAME="snet-openclaw-vm"
VM_SUBNET_PREFIX="10.40.2.0/24"
BASTION_SUBNET_PREFIX="10.40.1.0/26"
NSG_NAME="nsg-openclaw-vm"
VM_NAME="vm-openclaw"
ADMIN_USERNAME="openclaw"
BASTION_NAME="bas-openclaw"
BASTION_PIP_NAME="pip-openclaw-bastion"&lt;/LI-CODE&gt;
&lt;P data-source-line="95"&gt;Adjust names and CIDR ranges to fit your environment. The Bastion subnet must be at least &lt;EM&gt;/26&lt;/EM&gt;.&lt;/P&gt;
&lt;H3 data-source-line="97"&gt;Step 4: Select SSH key&lt;/H3&gt;
&lt;P data-source-line="99"&gt;Use your existing public key if you have one:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;SSH_PUB_KEY="$(cat ~/.ssh/id_ed25519.pub)"&lt;/LI-CODE&gt;
&lt;P data-source-line="105"&gt;If you don't have an SSH key yet, generate one:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519 -C "you@example.com"
SSH_PUB_KEY="$(cat ~/.ssh/id_ed25519.pub)"&lt;/LI-CODE&gt;
&lt;H3 data-source-line="112"&gt;Step 5: Select VM size and OS disk size&lt;/H3&gt;
&lt;LI-CODE lang="bash"&gt;VM_SIZE="Standard_B2as_v2"
OS_DISK_SIZE_GB=64&lt;/LI-CODE&gt;
&lt;P data-source-line="119"&gt;Choose a VM size and OS disk size available in your subscription and region:&lt;/P&gt;
&lt;UL data-source-line="121"&gt;
&lt;LI data-source-line="121"&gt;Start smaller for light usage and scale up later&lt;/LI&gt;
&lt;LI data-source-line="122"&gt;Use more vCPU/RAM/disk for heavier automation, more channels, or larger model/tool workloads&lt;/LI&gt;
&lt;LI data-source-line="123"&gt;If a VM size is unavailable in your region or subscription quota, pick the closest available SKU&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-source-line="125"&gt;List VM sizes available in your target region:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az vm list-skus --location "${LOCATION}" --resource-type virtualMachines -o table&lt;/LI-CODE&gt;
&lt;P data-source-line="131"&gt;Check your current vCPU and disk usage/quota:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az vm list-usage --location "${LOCATION}" -o table&lt;/LI-CODE&gt;
&lt;H2 data-source-line="137"&gt;Deploy Azure resources&lt;/H2&gt;
&lt;H3 data-source-line="139"&gt;Step 1: Create the resource group&lt;/H3&gt;
&lt;P&gt;The Azure resource group will contain all of the Azure resources that the OpenClaw agent needs.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az group create -n "${RG}" -l "${LOCATION}"&lt;/LI-CODE&gt;
&lt;H3 data-source-line="145"&gt;Step 2: Create the network security group&lt;/H3&gt;
&lt;P data-source-line="147"&gt;Create the NSG and add rules so only the Bastion subnet can SSH into the VM.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az network nsg create \
  -g "${RG}" -n "${NSG_NAME}" -l "${LOCATION}"

# Allow SSH from the Bastion subnet only
az network nsg rule create \
  -g "${RG}" --nsg-name "${NSG_NAME}" \
  -n AllowSshFromBastionSubnet --priority 100 \
  --access Allow --direction Inbound --protocol Tcp \
  --source-address-prefixes "${BASTION_SUBNET_PREFIX}" \
  --destination-port-ranges 22

# Deny SSH from the public internet
az network nsg rule create \
  -g "${RG}" --nsg-name "${NSG_NAME}" \
  -n DenyInternetSsh --priority 110 \
  --access Deny --direction Inbound --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 22

# Deny SSH from other VNet sources
az network nsg rule create \
  -g "${RG}" --nsg-name "${NSG_NAME}" \
  -n DenyVnetSsh --priority 120 \
  --access Deny --direction Inbound --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --destination-port-ranges 22&lt;/LI-CODE&gt;
&lt;P data-source-line="178"&gt;The rules are evaluated by priority (lowest number first): Bastion traffic is allowed at 100, then all other SSH is blocked at 110 and 120.&lt;/P&gt;
&lt;H3 data-source-line="180"&gt;Step 3: Create the virtual network and subnets&lt;/H3&gt;
&lt;P data-source-line="182"&gt;Create the VNet with the VM subnet (NSG attached), then add the Bastion subnet.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az network vnet create \
  -g "${RG}" -n "${VNET_NAME}" -l "${LOCATION}" \
  --address-prefixes "${VNET_PREFIX}" \
  --subnet-name "${VM_SUBNET_NAME}" \
  --subnet-prefixes "${VM_SUBNET_PREFIX}"

# Attach the NSG to the VM subnet
az network vnet subnet update \
  -g "${RG}" --vnet-name "${VNET_NAME}" \
  -n "${VM_SUBNET_NAME}" --nsg "${NSG_NAME}"

# AzureBastionSubnet — name is required by Azure
az network vnet subnet create \
  -g "${RG}" --vnet-name "${VNET_NAME}" \
  -n AzureBastionSubnet \
  --address-prefixes "${BASTION_SUBNET_PREFIX}"&lt;/LI-CODE&gt;
&lt;H3 data-source-line="203"&gt;Step 4: Create the Virtual Machine&lt;/H3&gt;
&lt;P data-source-line="205"&gt;Create the VM with no public IP. SSH access for OpenClaw configuration will be exclusively through Azure Bastion.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az vm create \
  -g "${RG}" -n "${VM_NAME}" -l "${LOCATION}" \
  --image "Canonical:ubuntu-24_04-lts:server:latest" \
  --size "${VM_SIZE}" \
  --os-disk-size-gb "${OS_DISK_SIZE_GB}" \
  --storage-sku StandardSSD_LRS \
  --admin-username "${ADMIN_USERNAME}" \
  --ssh-key-values "${SSH_PUB_KEY}" \
  --vnet-name "${VNET_NAME}" \
  --subnet "${VM_SUBNET_NAME}" \
  --public-ip-address "" \
  --nsg ""&lt;/LI-CODE&gt;
&lt;P data-source-line="222"&gt;&lt;EM&gt;--public-ip-address "" &lt;/EM&gt;prevents a public IP from being assigned.&lt;/P&gt;
&lt;P data-source-line="222"&gt;&lt;EM&gt;--nsg "" &lt;/EM&gt;skips creating a per-NIC NSG (the subnet-level NSG created earlier handles security).&lt;/P&gt;
&lt;P data-source-line="248"&gt;&lt;STRONG&gt;Reproducibility:&lt;/STRONG&gt; The command above uses latest for the &lt;EM&gt;Ubuntu image&lt;/EM&gt;. To pin a specific version, list available versions and replace &lt;EM&gt;latest&lt;/EM&gt;:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az vm image list \
  --publisher Canonical --offer ubuntu-24_04-lts \
  --sku server --all -o table&lt;/LI-CODE&gt;
&lt;H3 data-source-line="232"&gt;Step 5: Create Azure Bastion&lt;/H3&gt;
&lt;P data-source-line="234"&gt;Azure Bastion provides secure-managed SSH access to the VM without exposing a public IP.&lt;/P&gt;
&lt;P data-source-line="234"&gt;Bastion Standard SKU with tunneling is required for CLI-based "az network bastion ssh" command.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az network public-ip create \
  -g "${RG}" -n "${BASTION_PIP_NAME}" -l "${LOCATION}" \
  --sku Standard --allocation-method Static

az network bastion create \
  -g "${RG}" -n "${BASTION_NAME}" -l "${LOCATION}" \
  --vnet-name "${VNET_NAME}" \
  --public-ip-address "${BASTION_PIP_NAME}" \
  --sku Standard --enable-tunneling true&lt;/LI-CODE&gt;
&lt;P data-source-line="248"&gt;Bastion provisioning typically takes 5–10 minutes but can take up to 15–30 minutes in some regions.&lt;/P&gt;
&lt;H3 data-source-line="248"&gt;Step 6: Verify Deployments&lt;/H3&gt;
&lt;P data-source-line="248"&gt;After all resources are deployed, your resource group should look like the following:&lt;/P&gt;
&lt;img /&gt;
&lt;H2 data-source-line="250"&gt;Install OpenClaw&lt;/H2&gt;
&lt;H3 data-source-line="252"&gt;Step 1: SSH into the VM through Azure Bastion&lt;/H3&gt;
&lt;LI-CODE lang="bash"&gt;VM_ID="$(az vm show -g "${RG}" -n "${VM_NAME}" --query id -o tsv)"

az network bastion ssh \
  --name "${BASTION_NAME}" \
  --resource-group "${RG}" \
  --target-resource-id "${VM_ID}" \
  --auth-type ssh-key \
  --username "${ADMIN_USERNAME}" \
  --ssh-key ~/.ssh/id_ed25519&lt;/LI-CODE&gt;
&lt;H3 data-source-line="266"&gt;Step 2: Install OpenClaw (in the Bastion SSH shell)&lt;/H3&gt;
&lt;LI-CODE lang="bash"&gt;curl -fsSL https://openclaw.ai/install.sh | bash&lt;/LI-CODE&gt;
&lt;P data-source-line="274"&gt;The installer installs Node LTS and dependencies if not already present, installs OpenClaw, and launches the &lt;STRONG&gt;OpenClaw onboarding wizard&lt;/STRONG&gt;. For more information, see the &lt;A class="lia-external-url" href="https://docs.openclaw.ai/install" target="_blank"&gt;open source OpenClaw install docs&lt;/A&gt;.&lt;/P&gt;
&lt;H4 data-source-line="274"&gt;OpenClaw Onboarding: Choosing an AI Model Provider&lt;/H4&gt;
&lt;P data-source-line="274"&gt;During OpenClaw onboarding, you'll choose the &lt;STRONG&gt;AI &lt;/STRONG&gt;&lt;STRONG&gt;model provider for the OpenClaw agent&lt;/STRONG&gt;. This can be&amp;nbsp;GitHub Copilot, Azure OpenAI, OpenAI, Anthropic Claude, Google Gemini, or another supported provider. See the &lt;A class="lia-external-url" href="https://docs.openclaw.ai/install" target="_blank"&gt;open source OpenClaw install docs&lt;/A&gt; for details on choosing an AI model provider when going through the onboarding wizard.&lt;/P&gt;
&lt;P data-source-line="284"&gt;Most enterprise Azure teams already have GitHub Copilot licenses. If that is your case, we recommend choosing the GitHub Copilot provider in the OpenClaw onboarding wizard. See the &lt;A class="lia-external-url" href="https://docs.openclaw.ai/providers/github-copilot" target="_blank"&gt;open source OpenClaw docs on configuring &lt;STRONG&gt;GitHub Copilot as the AI model provider&lt;/STRONG&gt;&lt;/A&gt;.&lt;/P&gt;
&lt;H4 data-source-line="284"&gt;OpenClaw Onboarding: Setting up Messaging Channels&lt;/H4&gt;
&lt;P&gt;During OpenClaw onboarding, there will be an optional step where you can set up various messaging channels to interact with your OpenClaw agent.&lt;/P&gt;
&lt;P&gt;For first-time users, we recommend setting up Telegram because of its ease of setup. Other messaging channels, such as Microsoft Teams, Slack, and WhatsApp, can also be configured.&lt;/P&gt;
&lt;P&gt;To configure &lt;STRONG&gt;OpenClaw for messaging through chat channels&lt;/STRONG&gt;, see the &lt;A class="lia-external-url" href="https://docs.openclaw.ai/channels" target="_blank"&gt;open source OpenClaw chat channels docs&lt;/A&gt;.&lt;/P&gt;
&lt;H3 data-source-line="276"&gt;Step 3: Verify OpenClaw Configuration&lt;/H3&gt;
&lt;P data-source-line="278"&gt;To validate that everything was set up correctly, run the following commands within the same Bastion SSH session:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;openclaw status
openclaw gateway status&lt;/LI-CODE&gt;
&lt;P&gt;If there are any issues reported, you can run the onboarding wizard again with the steps above. Alternatively, you can run the following command:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;openclaw doctor&lt;/LI-CODE&gt;
&lt;H2&gt;Message OpenClaw&lt;/H2&gt;
&lt;P&gt;Once the OpenClaw agent is reachable through your chosen messaging channels, verify that it is responsive by sending it a message.&lt;/P&gt;
&lt;H2&gt;Enhancing OpenClaw for Use Cases&lt;/H2&gt;
&lt;P&gt;There you go! You now have a 24/7, always-on personal AI agent running on its own Azure VM.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;For awesome OpenClaw use cases, check out the &lt;A class="lia-external-url" href="https://github.com/hesamsheikh/awesome-openclaw-usecases" target="_blank"&gt;awesome-openclaw-usecases repository&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;To enhance your OpenClaw agent with additional AI skills so that it can autonomously perform multi-step operations on any domain, check out the&amp;nbsp;&lt;A class="lia-external-url" href="https://github.com/VoltAgent/awesome-openclaw-skills" target="_blank"&gt;awesome-openclaw-skills repository&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;You can also check out &lt;A class="lia-external-url" href="https://clawhub.ai/" target="_blank"&gt;ClawHub&lt;/A&gt;&amp;nbsp;and &lt;A class="lia-external-url" href="https://clawskills.sh/" target="_blank"&gt;ClawSkills&lt;/A&gt;, two popular open source skills directories that can enhance your OpenClaw agent.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Cleanup&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-as="p"&gt;To delete all resources created by this guide:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az group delete -n "${RG}" --yes --no-wait&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN data-as="p"&gt;This removes the resource group and everything inside it (VM, VNet, NSG, Bastion, public IP). &lt;/SPAN&gt;&lt;SPAN data-as="p"&gt;This also deletes the OpenClaw agent running within the VM.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;If you'd like to dive deeper about deploying OpenClaw on Azure, please check out the &lt;A class="lia-external-url" href="https://docs.openclaw.ai/install/azure" target="_blank"&gt;open source OpenClaw on Azure docs&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Sun, 22 Mar 2026 16:34:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/run-openclaw-agents-on-azure-linux-vms-with-secure-defaults/ba-p/4502944</guid>
      <dc:creator>johnsonshi_msft</dc:creator>
      <dc:date>2026-03-22T16:34:46Z</dc:date>
    </item>
    <item>
      <title>How Netstar Streamlined Fleet Monitoring and Reduced Custom Integrations with Drasi</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/how-netstar-streamlined-fleet-monitoring-and-reduced-custom/ba-p/4499592</link>
      <description>&lt;P&gt;When a high-value container goes silent between waystations, logistics teams lose critical visibility, risking delays that can cascade into port congestion and missed connections. &lt;A href="https://www.netstar.co.za/" target="_blank"&gt;Netstar&lt;/A&gt;, a connected fleet solutions provider supporting customers like Maersk, faced this challenge as its operations scaled. Timely notifications of delays, arrivals, and status changes became critical to keeping cargo moving efficiently through port systems.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;To address growing integration complexity and the need for real-time responsiveness, Netstar adopted &lt;A href="https://drasi.io/" target="_blank"&gt;Drasi&lt;/A&gt;. Drasi, built for change-driven solutions, provides continuously updated query results and automated reactions to data changes, enabling systems to detect and respond to critical changes as they happen. This shift to Drasi became foundational to how Netstar unified its fleet data, reduced engineering overhead, and improved monitoring workflows.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;The Fragmentation Challenge&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Growing operational complexity made an underlying challenge increasingly apparent. Tracking a container's journey from pickup to port terminal required reconciling data such as vehicle identifiers, waypoints, GPS location feeds, and IoT telemetry signals from siloed systems. With each new operational or business requirement, whether monitoring vehicle health or detecting route deviations, development teams found themselves repeatedly rebuilding similar patterns.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;"We were essentially rebuilding the same integration architecture for every use case," &lt;/EM&gt;explains Daniel Joubert, General Manager and technical lead at Netstar&lt;EM&gt;. "One week we'd build a dashboard for location tracking. The next week, we'd build another one for breakdown detection. The engineering overhead was unsustainable."&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Batch-based processing compounded the issue. Critical signals such as missed health reports or route delays could surface long after they occurred, limiting Netstar’s ability to take timely action.&lt;/P&gt;
&lt;H4&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;Introducing Drasi for Change-driven Architecture&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;Rather than continue building point solutions, Netstar adopted Drasi as the backbone of its real-time data architecture. Drasi simplifies systems that must detect, evaluate, and react to data changes quickly and efficiently at scale, aligning directly with Netstar’s needs.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;A Unified, Continuously Updated View &lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Drasi connected directly to Netstar's existing data sources: Azure SQL databases for information such as vehicle identifiers and waypoints, and Azure Event Hubs for GPS location data and IoT telemetry. Drasi Continuous Queries joined this information into a single, always-current operational picture. Instead of multiple custom-built pipelines, Netstar gained a single source of truth for its fleet.&lt;/P&gt;
&lt;P&gt;Using Drasi Reactions, Netstar defined actions that trigger when specific events occur. When a truck fails to send a health signal within its expected window, or when a delay notification indicates potential supply chain disruption, the system responds immediately without human intervention, reducing the likelihood of missed events.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Improvements Enabled by Drasi&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Using the Drasi plugin for Grafana, Netstar consolidated results from Continuous Queries into one monitoring interface. Operators no longer reconciled conflicting views across separate tools; they now track vehicle health, location, alerts, and route deviations in real time from a single dashboard.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;"The transformation was remarkable," &lt;/EM&gt;says Dustyn Lightfoot, Solution Architect. &lt;EM&gt;"We were able to use a single Drasi instance to support multiple business use cases without building new infrastructure or writing additional code, for example, to stand up Blazor websites. More importantly, it eliminated the ongoing maintenance burden of managing dozens of custom pipelines."&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Drasi’s flexibility also extended beyond fleet tracking. By attaching an additional data source and defining new Continuous Queries, the same Drasi instance now surfaces changes in customer billing status and legal contracts. This work required no new infrastructure, just connecting the source and writing queries (&lt;A href="https://drasi.io/reference/query-language/drasi-custom-functions/#drasi-delta-functions" target="_blank"&gt;leveraging Drasi’s custom Delta functions&lt;/A&gt;), providing business teams with up-to-date information without a separate integration effort.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Measurable Impact&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Netstar reports tangible improvements across engineering operations and real-time responsiveness:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Faster incident response&lt;/STRONG&gt;: Missing health signals now trigger alerts immediately rather than being discovered later through manual checks, improving the speed and reliability of operational response.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Improved logistics coordination&lt;/STRONG&gt;: Real time visibility into container movement through waystations and toward port terminals has enabled Netstar and partners like Maersk to coordinate shipments more efficiently, with automated alerts keeping all stakeholders informed as conditions change.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Reduced development overhead&lt;/STRONG&gt;: Using Drasi has reduced the amount of custom development previously needed to support fleet monitoring capabilities. The same Drasi-driven architecture now supports multiple business cases, from tracking and health monitoring to route optimization.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Streamlined operator experience&lt;/STRONG&gt;: Teams moved from several monitoring tools to a single Drasi-powered Grafana interface, simplifying daily operations and eliminating time spent reconciling conflicting data from different systems.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;Industry Context and What’s Next&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Demand for real-time supply chain visibility has intensified as global logistics disruptions highlight the risks of delayed reporting.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;"Our customers don't just want historical reports anymore. They need to know what's happening right now and be alerted the moment something changes,"&lt;/EM&gt; Daniel Joubert explains. &lt;EM&gt;"That shift from batch processing to continuous monitoring is becoming table stakes in fleet management."&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Building on this foundation, Netstar is now investigating how Drasi can support predictive maintenance: spotting patterns in vehicle health data early enough to prevent failures altogether. The same change-driven architecture could also streamline coordination across broader supply chain workflows.&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;The Broader Architectural Shift&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;Netstar’s implementation reflects a wider architectural move emerging across operational solutions: from systems that store and query data to platforms that detect and react to changes as they happen. In fleet logistics, financial systems, and industrial operations, the competitive advantage increasingly lies in eliminating the lag between event and response.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;"Building custom integrations for every use case was slowing us down and limiting what we could deliver to customers," &lt;/EM&gt;Dustyn Lightfoot reflects&lt;EM&gt;. "Drasi gave us a reusable foundation that handles the hard parts, integrating disparate data sources and detecting meaningful changes, so we can focus on solving business problems rather than rebuilding infrastructure."&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The collaboration between Drasi and Netstar demonstrates how open source change-driven platforms can simplify complex operational challenges whilst providing actionable insights across distributed systems. As logistics operations evolve, architectures like Drasi’s may define the next era of competitive advantage: one where actionable insight arrives the moment conditions change.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To learn more about Drasi visit &lt;A class="lia-external-url" href="https://drasi.io/" target="_blank"&gt;Drasi.io&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Thu, 05 Mar 2026 19:23:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/how-netstar-streamlined-fleet-monitoring-and-reduced-custom/ba-p/4499592</guid>
      <dc:creator>CollinBrian</dc:creator>
      <dc:date>2026-03-05T19:23:46Z</dc:date>
    </item>
    <item>
      <title>Retina 1.0 Is Now Available</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/retina-1-0-is-now-available/ba-p/4489003</link>
      <description>&lt;P&gt;We are excited to announce the first major release of &lt;A class="lia-external-url" href="https://retina.sh/" target="_blank" rel="noopener"&gt;Retina&lt;/A&gt; - a significant milestone for the project. This version brings along many new features, enhancements and bug fixes.&lt;/P&gt;
&lt;P&gt;The Retina maintainer team would like to thank all contributors, community members, and early adopters who helped make this 1.0 release possible.&lt;/P&gt;
&lt;H1&gt;What is Retina?&lt;/H1&gt;
&lt;P&gt;Retina is an open-source, Kubernetes network observability platform. It enables you to continuously observe and measure network health, and investigate network issues on-demand with integrated Kubernetes-native workflows.&lt;/P&gt;
&lt;H1&gt;Why Retina?&lt;/H1&gt;
&lt;P&gt;Kubernetes networking failures are rarely isolated or easy to reproduce. Pods are ephemeral, services span multiple nodes, and network traffic crosses multiple layers (CNI, kube-proxy, node networking, policies), making crucial evidence difficult to capture. Manually connecting to nodes and stitching together logs or packet captures simply does not scale as clusters grow in size and complexity.&lt;/P&gt;
&lt;P&gt;A modern approach to observability must automate and centralize data collection while exposing rich, actionable insights.&lt;/P&gt;
&lt;P&gt;Retina represents a major step forward in solving the complexities of Kubernetes observability by leveraging the power of&amp;nbsp;&lt;A class="lia-external-url" href="https://ebpf.io/" target="_blank" rel="noopener"&gt;eBPF&lt;/A&gt;. Its cloud-agnostic design, deep integration with &lt;A class="lia-external-url" href="https://github.com/cilium/hubble?tab=readme-ov-file#what-is-hubble" target="_blank" rel="noopener"&gt;Hubble&lt;/A&gt;, and support for both real-time metrics and on-demand packet captures make it an invaluable tool for DevOps, SecOps, and compliance teams across diverse environments.&lt;/P&gt;
&lt;H1&gt;What Does It Do?&lt;/H1&gt;
&lt;P&gt;Retina can collect two types of telemetry: metrics and packet captures.&lt;/P&gt;
&lt;P&gt;The Retina shell enables ad-hoc troubleshooting via pre-installed networking tools.&lt;/P&gt;
&lt;H2&gt;Metrics&lt;/H2&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://retina.sh/docs/Metrics/metrics-intro" target="_blank" rel="noopener"&gt;Metrics&lt;/A&gt;&amp;nbsp;provide continuous observability. They can be exported to multiple storage options such as Prometheus or Azure Monitor, and visualized in a variety of ways, including Grafana or Azure Log Analytics.&lt;/P&gt;
&lt;P&gt;Retina supports two control planes: Hubble and Standard. Both work regardless of the underlying CNI, and the choice of control plane determines which metrics are collected.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://retina.sh/docs/Metrics/hubble_metrics" target="_blank" rel="noopener"&gt;Hubble metrics&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://retina.sh/docs/Metrics/modes/" target="_blank" rel="noopener"&gt;Standard metrics&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;You can customize which metrics are collected by enabling/disabling their corresponding &lt;A class="lia-external-url" href="https://retina.sh/docs/Metrics/plugins/" target="_blank" rel="noopener"&gt;plugins&lt;/A&gt;. Examples of collected metrics include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Incoming/outgoing traffic&lt;/LI&gt;
&lt;LI&gt;Dropped packets&lt;/LI&gt;
&lt;LI&gt;TCP/UDP&lt;/LI&gt;
&lt;LI&gt;DNS&lt;/LI&gt;
&lt;LI&gt;API Server latency&lt;/LI&gt;
&lt;LI&gt;Node/interface statistics&lt;/LI&gt;
&lt;/UL&gt;
&lt;img&gt;Grafana dashboard visualizing metrics from Retina - showing packets being dropped on the cluster.&lt;/img&gt;
&lt;H2&gt;Packet Captures&lt;/H2&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://retina.sh/docs/Captures/overview" target="_blank" rel="noopener"&gt;Captures&lt;/A&gt; provide on-demand observability. They allow users to perform distributed packet captures across the cluster, based on specified Nodes/Pods and other supported filters. They can be triggered&amp;nbsp;&lt;A class="lia-external-url" href="https://retina.sh/docs/Captures/cli" target="_blank" rel="noopener"&gt;via the CLI&lt;/A&gt; or &lt;A class="lia-external-url" href="https://retina.sh/docs/Captures/crd" target="_blank" rel="noopener"&gt;through the capture CRD&lt;/A&gt;, and may be output to persistent storage options such as the host filesystem, a PVC, or a storage blob.&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The &lt;A class="lia-external-url" href="https://retina.sh/docs/Captures/cli#file-and-directory-structure-inside-the-tarball" target="_blank" rel="noopener"&gt;result of the capture&lt;/A&gt;&amp;nbsp;&lt;/SPAN&gt;contains more than just a &lt;EM&gt;.pcap&lt;/EM&gt; file&lt;SPAN data-contrast="auto"&gt;. Retina also captures a number of networking metadata such as iptables rules, socket statistics, kernel network information from&amp;nbsp;&lt;EM&gt;/proc/net&lt;/EM&gt;, and more.&lt;/SPAN&gt;&lt;/P&gt;
&lt;img&gt;Retina packet capture performed through the CLI.&lt;/img&gt;
&lt;H2&gt;Shell&lt;/H2&gt;
&lt;P&gt;The &lt;A class="lia-external-url" href="https://retina.sh/docs/Troubleshooting/shell" target="_blank" rel="noopener"&gt;Retina shell&lt;/A&gt; enables deep ad-hoc troubleshooting by providing a suite of networking tools. The CLI command starts an interactive shell on a Kubernetes node that runs a container image which includes standard tools such as ping or curl, as well as specialized tools like&amp;nbsp;&lt;A class="lia-external-url" href="https://retina.sh/docs/Troubleshooting/shell#bpftool" target="_blank" rel="noopener"&gt;bpftool&lt;/A&gt;, &lt;A class="lia-external-url" href="https://retina.sh/docs/Troubleshooting/shell#pwru" target="_blank" rel="noopener"&gt;pwru&lt;/A&gt;, &lt;A class="lia-external-url" href="https://retina.sh/docs/Troubleshooting/shell#inspektor-gadget-ig" target="_blank" rel="noopener"&gt;Inspektor Gadget&lt;/A&gt; and more.&lt;/P&gt;
&lt;P&gt;The Retina shell is currently only available on Linux. Note that some tools require particular capabilities to execute. These can be passed as&amp;nbsp;&lt;A class="lia-external-url" href="https://retina.sh/docs/Troubleshooting/shell#getting-started" target="_blank" rel="noopener"&gt;parameters through the CLI&lt;/A&gt;.&lt;/P&gt;
&lt;img&gt;Retina shell CLI - showcasing some of the available tools, including ping, dig, bpftool and pwru.&lt;/img&gt;
&lt;H2&gt;Use Cases&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Debugging Pod Connectivity Issues&lt;/STRONG&gt;: When services can’t communicate, Retina enables rapid, automated distributed packet capture and drop metrics, drastically reducing troubleshooting time. The Retina shell also brings specialized tools for deep manual investigations.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Continuous Monitoring of Network Health&lt;/STRONG&gt;: Operators can set up alerts and dashboards for DNS failures, API server latency, or packet drops, gaining ongoing visibility into cluster networking.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Security Auditing and Compliance&lt;/STRONG&gt;: Flow logs (in Hubble mode) and metrics support security investigations and compliance reporting, enabling quick identification of unexpected connections or data transfers.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Multi-Cluster / Multi-Cloud Visibility&lt;/STRONG&gt;: Retina standardizes network observability across clouds, supporting unified dashboards and processes for SRE teams.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;Where Does It Run?&lt;/H1&gt;
&lt;P&gt;Retina is designed for broad compatibility across Kubernetes distributions, cloud providers, and operating systems. There are no Azure-specific dependencies - Retina runs anywhere Kubernetes does.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Operating Systems&lt;/STRONG&gt;: Both Linux and Windows nodes are supported.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Kubernetes Distributions&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;: Retina is distribution-agnostic, deployable on managed services (AKS, EKS, GKE) or self-managed clusters.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;CNI / Network Stack&lt;/STRONG&gt;: Retina works with any CNI, focusing on kernel-level events rather than CNI-specific logs.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cloud Integration&lt;/STRONG&gt;: Retina exports metrics to Azure Monitor and Log Analytics, with pre-built Grafana dashboards for AKS. Integration with AWS CloudWatch or GCP Stackdriver is possible via Prometheus.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Observability Stacks&lt;/STRONG&gt;: Retina integrates with Prometheus &amp;amp; Grafana, Cilium Hubble (for flow logs and UI), and can be extended to other exporters.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img&gt;High-level overview of where Retina runs.&lt;/img&gt;
&lt;H1 class="lia-clear-both"&gt;Design Overview&lt;/H1&gt;
&lt;P&gt;Retina’s architecture consists of two layers: a data collection layer in kernel space, and a processing layer in user space that converts low-level signals into Kubernetes-aware telemetry.&lt;/P&gt;
&lt;P&gt;When Retina is installed, each node in the cluster runs a Retina agent which collects raw network telemetry from the host kernel - backed by eBPF on Linux, and HNS/VFP on Windows. The agent &lt;SPAN style="color: rgb(30, 30, 30);"&gt;processes the raw network data and enriches it with Kubernetes metadata, which is then exported&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; for consumption by monitoring tools such as Prometheus, Grafana, or Hubble UI.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Modularity and extensibility are central to the design philosophy. Retina's plugin model lets you enable only the telemetry you need, and add new sources by implementing a common plugin interface.&amp;nbsp;Built-in plugins include Drop Reason, DNS, Packet Forward, and more.&lt;/P&gt;
&lt;P&gt;Check out our &lt;A class="lia-external-url" href="https://retina.sh/docs/Introduction/architecture" target="_blank" rel="noopener"&gt;architecture docs&lt;/A&gt; for a deeper dive into Retina's design.&lt;/P&gt;
&lt;H1&gt;Get Started&lt;/H1&gt;
&lt;P&gt;Thanks to &lt;A class="lia-external-url" href="https://helm.sh/" target="_blank" rel="noopener"&gt;Helm charts&lt;/A&gt;, deploying Retina is streamlined across all environments and can be done with one configurable command. For complete documentation, visit our &lt;A href="https://retina.sh/docs/Installation/Setup" target="_blank" rel="noopener"&gt;installation docs&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To &lt;A class="lia-external-url" href="https://retina.sh/docs/Installation/Setup" target="_blank" rel="noopener"&gt;install Retina&lt;/A&gt; with the Standard control plane and Basic metrics mode:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;VERSION=$( curl -sL https://api.github.com/repos/microsoft/retina/releases/latest | jq -r .name)
helm upgrade --install retina oci://ghcr.io/microsoft/retina/charts/retina \
    --version $VERSION \
    --namespace kube-system \
    --set image.tag=$VERSION \
    --set operator.tag=$VERSION \
    --set logLevel=info \
    --set operator.enabled=true \
    --set enabledPlugin_linux="\[dropreason\,packetforward\,linuxutil\,dns\]"&lt;/LI-CODE&gt;
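&lt;P&gt;To confirm the agents are up before configuring dashboards, check the pods in &lt;EM&gt;kube-system&lt;/EM&gt; (the pod names shown assume the chart's default naming):&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Retina agent pods should be Running on every node
kubectl get pods -n kube-system | grep retina&lt;/LI-CODE&gt;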
&lt;P&gt;Once Retina is running in your cluster, you can then configure &lt;A class="lia-external-url" href="https://retina.sh/docs/Installation/prometheus" target="_blank" rel="noopener"&gt;Prometheus&lt;/A&gt;&amp;nbsp;and &lt;A class="lia-external-url" href="https://retina.sh/docs/Installation/grafana" target="_blank" rel="noopener"&gt;Grafana&lt;/A&gt; to scrape and visualize your metrics.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://retina.sh/docs/Installation/CLI" target="_blank" rel="noopener"&gt;Install the Retina CLI&lt;/A&gt; with &lt;A class="lia-external-url" href="https://krew.sigs.k8s.io/" target="_blank"&gt;Krew&lt;/A&gt;:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;kubectl krew install retina&lt;/LI-CODE&gt;
&lt;H1&gt;Get Involved&lt;/H1&gt;
&lt;P&gt;Retina is open-source under the &lt;A class="lia-external-url" href="https://retina.sh/docs/Contributing/overview#licensing" target="_blank" rel="noopener"&gt;MIT License&lt;/A&gt; and welcomes community contributions. Since its announcement in early 2024, the project has gained significant traction, with contributors from multiple organizations helping to expand its capabilities.&lt;/P&gt;
&lt;P&gt;The project is hosted on &lt;A class="lia-external-url" href="https://github.com/microsoft/retina" target="_blank" rel="noopener"&gt;GitHub · microsoft/retina&lt;/A&gt; and documentation is available at &lt;A href="https://retina.sh" target="_blank" rel="noopener"&gt;retina.sh&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;If you would like to contribute to Retina you can follow our &lt;A class="lia-external-url" href="https://retina.sh/docs/Contributing/overview" target="_blank" rel="noopener"&gt;contributor guide&lt;/A&gt;.&lt;/P&gt;
&lt;H1&gt;What's Next?&lt;/H1&gt;
&lt;P&gt;Retina 1.1 of course!&lt;/P&gt;
&lt;P&gt;We are also discussing the future roadmap, and exploring the possibility of moving the project to community ownership. Stay tuned!&lt;/P&gt;
&lt;P&gt;In the meantime, we welcome you to &lt;A class="lia-external-url" href="https://github.com/microsoft/retina/issues" target="_blank" rel="noopener"&gt;raise an issue&lt;/A&gt; if you find any bugs, or start a &lt;A class="lia-external-url" href="https://github.com/microsoft/retina/discussions" target="_blank" rel="noopener"&gt;discussion&lt;/A&gt; if you have any questions or suggestions.&lt;/P&gt;
&lt;P&gt;You can also &lt;A class="lia-external-url" href="mailto:retina@microsoft.com" target="_blank" rel="noopener"&gt;reach out to the Retina team via email&lt;/A&gt;, we would love to hear from you!&lt;/P&gt;
&lt;H1&gt;References&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;EM&gt;&lt;A href="https://retina.sh/" target="_blank" rel="noopener"&gt;Retina&lt;/A&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;&lt;A href="https://www.srodi.com/posts/kubernetes-ebpf-observability-retina-deepdive/" target="_blank" rel="noopener"&gt;Deep Dive into Retina Open-Source Kubernetes Network Observability&lt;/A&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;&lt;A href="https://techcommunity.microsoft.com/blog/linuxandopensourceblog/troubleshooting-network-issues-with-retina/4446071" target="_blank" rel="noopener"&gt;Troubleshooting Network Issues with Retina&lt;/A&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;&lt;A href="https://techcommunity.microsoft.com/blog/linuxandopensourceblog/ebpf-powered-observability-beyond-azure-a-multi-cloud-perspective-with-retina/4403361" target="_blank" rel="noopener"&gt;Retina: Bridging Kubernetes Observability and eBPF Across the Clouds&lt;/A&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 03 Feb 2026 14:36:06 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/retina-1-0-is-now-available/ba-p/4489003</guid>
      <dc:creator>kamilp</dc:creator>
      <dc:date>2026-02-03T14:36:06Z</dc:date>
    </item>
    <item>
      <title>Scaling DNS on AKS with Cilium: NodeLocal DNSCache, LRP, and FQDN Policies</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/scaling-dns-on-aks-with-cilium-nodelocal-dnscache-lrp-and-fqdn/ba-p/4486323</link>
      <description>&lt;H2&gt;Why Adopt NodeLocal DNSCache?&lt;/H2&gt;
&lt;P&gt;The primary drivers for adoption are usually:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Eliminating Conntrack Pressure:&lt;/STRONG&gt; In high-QPS UDP DNS scenarios, conntrack contention and UDP tracking can cause intermittent DNS response loss and retries; depending on resolver retry/timeouts, this can appear as multi-second lookup delays and sometimes much longer tails.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Reducing Latency&lt;/STRONG&gt;: By placing a cache on every node, you remove the network hop to the CoreDNS service. Responses are practically instantaneous for cached records.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Offloading CoreDNS&lt;/STRONG&gt;: A DaemonSet architecture effectively shards the DNS query load across the entire cluster, preventing the central CoreDNS deployment from becoming a single point of congestion during bursty scaling events.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H4&gt;Who needs this?&lt;/H4&gt;
&lt;P&gt;You should prioritize this architecture if you run:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Large-scale clusters&lt;/STRONG&gt; (hundreds of nodes or thousands of pods), where CoreDNS scaling becomes difficult to manage.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;High-churn endpoints&lt;/STRONG&gt;, such as spot instances or frequent auto-scaling jobs that trigger massive waves of DNS queries.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Real-time applications&lt;/STRONG&gt; where multi-second (and occasionally longer) DNS lookup delays are unacceptable.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;The Challenge with Cilium&lt;/H2&gt;
&lt;P&gt;Deploying NodeLocal DNSCache on a cluster managed by &lt;STRONG&gt;Cilium&lt;/STRONG&gt; (CNI) requires a specific approach. Standard NodeLocal DNSCache relies on node-level &lt;EM&gt;interface&lt;/EM&gt;/&lt;EM&gt;iptables &lt;/EM&gt;setup. In Cilium environments, you can instead implement the interception via &lt;STRONG&gt;Cilium Local Redirect Policy (LRP)&lt;/STRONG&gt;, which redirects traffic destined to the &lt;EM&gt;kube-dns&lt;/EM&gt; ClusterIP service to a node-local backend pod.&lt;/P&gt;
&lt;P&gt;This post details a production-ready deployment strategy aligned with Cilium’s Local Redirect Policy model. It covers necessary configuration tweaks to avoid conflicts and explains how to maintain security filtering.&lt;/P&gt;
&lt;H2&gt;Architecture Overview&lt;/H2&gt;
&lt;P&gt;In a standard Kubernetes deployment, NodeLocal DNSCache creates a dummy network interface and uses extensive iptables rules to hijack traffic destined for the Cluster DNS IP.&lt;/P&gt;
&lt;P&gt;When using Cilium, we can achieve this more elegantly and efficiently using &lt;STRONG&gt;Local Redirect Policies&lt;/STRONG&gt;.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;DaemonSet&lt;/STRONG&gt;: Runs &lt;EM&gt;node-local-dns&lt;/EM&gt; on every node.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Configuration&lt;/STRONG&gt;: Configured to &lt;U&gt;skip&lt;/U&gt; interface creation and iptables manipulation.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Redirection&lt;/STRONG&gt;: Cilium LRP intercepts traffic to the &lt;EM&gt;kube-dns&lt;/EM&gt; Service IP and redirects it to the local pod on the same node.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H3&gt;1. The NodeLocal DNSCache DaemonSet&lt;/H3&gt;
&lt;P&gt;The critical difference in this manifest is the arguments passed to the&amp;nbsp;&lt;EM&gt;node-local-dns&lt;/EM&gt; binary. We must explicitly disable its networking setup functions to let Cilium handle the traffic.&lt;/P&gt;
&lt;P&gt;The NodeLocal DNSCache deployment also requires the &lt;EM&gt;node-local-dns&lt;/EM&gt; ConfigMap and the &lt;EM&gt;kube-dns-upstream&lt;/EM&gt; Service (plus RBAC/ServiceAccount). For brevity, the snippet below shows only the DaemonSet arguments that differ in the Cilium/LRP approach. The &lt;EM&gt;node-cache&lt;/EM&gt; binary reads the template &lt;EM&gt;Corefile&lt;/EM&gt; (&lt;EM&gt;/etc/coredns/Corefile.base&lt;/EM&gt;) and generates the active &lt;EM&gt;Corefile&lt;/EM&gt; (&lt;EM&gt;/etc/Corefile&lt;/EM&gt;). The &lt;EM&gt;-conf&lt;/EM&gt; flag points CoreDNS at the active &lt;EM&gt;Corefile&lt;/EM&gt; it should load.&lt;/P&gt;
&lt;P&gt;The node-cache binary accepts &lt;EM&gt;-localip&lt;/EM&gt; as an IP list; &lt;EM&gt;0.0.0.0&lt;/EM&gt; is a valid value and makes it listen on all interfaces, appropriate for the LRP-based redirection model.&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
    k8s-app: node-local-dns
spec:
  selector:
    matchLabels:
      k8s-app: node-local-dns
  template:
    metadata:
      labels:
        k8s-app: node-local-dns
      annotations:
        # Optional: policy.cilium.io/no-track-port can be used to bypass conntrack for DNS.
        # Validate the impact on your Cilium version and your observability/troubleshooting needs.
        policy.cilium.io/no-track-port: "53"
    spec:
      # IMPORTANT for the "LRP + listen broadly" approach:
      # keep hostNetwork off so you don't hijack node-wide :53
      hostNetwork: false
      # Don't use cluster DNS
      dnsPolicy: Default
      containers:
      - name: node-cache
        image: registry.k8s.io/dns/k8s-dns-node-cache:1.15.16
        args: 
        - "-localip"
        # Use a bind-all approach. Ensure server blocks bind broadly in your Corefile.
        - "0.0.0.0" 
        - "-conf"
        - "/etc/Corefile"
        - "-upstreamsvc"
        - "kube-dns-upstream"
        # CRITICAL: Disable internal setup
        - "-skipteardown=true"
        - "-setupinterface=false"
        - "-setupiptables=false"
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # Ensure your Corefile includes health :8080 so the liveness probe works
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        - name: kube-dns-config
          mountPath: /etc/kube-dns
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      - name: config-volume
        configMap:
          name: node-local-dns
          items:
            - key: Corefile
              path: Corefile.base&lt;/LI-CODE&gt;
&lt;H3&gt;2. The Cilium Local Redirect Policy (LRP)&lt;/H3&gt;
&lt;P&gt;Instead of iptables, we apply a custom resource that tells Cilium: "When you see traffic for &lt;EM&gt;kube-dns&lt;/EM&gt;, send it to the &lt;EM&gt;node-local-dns&lt;/EM&gt; pod on this same node."&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;apiVersion: "cilium.io/v2"
kind: CiliumLocalRedirectPolicy
metadata:
  name: "nodelocaldns"
  namespace: kube-system
spec:
  redirectFrontend:
    # ServiceMatcher mode is for ClusterIP services
    serviceMatcher:
      serviceName: kube-dns
      namespace: kube-system
  redirectBackend:
    # The backend pods selected by localEndpointSelector must be in the same namespace as the LRP
    localEndpointSelector:
      matchLabels:
        k8s-app: node-local-dns
    toPorts:
      - port: "53"
        name: dns
        protocol: UDP
      - port: "53"
        name: dns-tcp
        protocol: TCP&lt;/LI-CODE&gt;
&lt;P&gt;This is an&amp;nbsp;&lt;STRONG&gt;LRP-based NodeLocal DNSCache deployment&lt;/STRONG&gt;: we disable node-cache’s &lt;EM&gt;iptables&lt;/EM&gt;/&lt;EM&gt;interface &lt;/EM&gt;setup and let &lt;STRONG&gt;Cilium LRP&lt;/STRONG&gt; handle local redirection. This differs from the upstream NodeLocal DNSCache manifest, which uses &lt;EM&gt;hostNetwork &lt;/EM&gt;+ &lt;EM&gt;dummy interface&lt;/EM&gt; + &lt;EM&gt;iptables&lt;/EM&gt;.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;LRP must be enabled in Cilium (e.g., the Helm value &lt;EM&gt;localRedirectPolicy=true&lt;/EM&gt;) before applying the CRD. &lt;A class="lia-external-url" href="https://docs.cilium.io/en/stable/network/kubernetes/local-redirect-policy/#prerequisites" target="_blank" rel="noopener"&gt;Official Cilium LRP doc&lt;/A&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
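&lt;P&gt;With a Helm-managed Cilium install, enabling the feature and confirming the CRD is registered might look like the following. Release name, namespace, and chart values depend on your environment; AKS/ACNS-managed installs control this setting differently:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Enable Local Redirect Policy on a Helm-managed Cilium (values vary by distribution)
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set localRedirectPolicy=true

# Restart the agents and verify the CRD is registered
kubectl -n kube-system rollout restart daemonset/cilium
kubectl get crd ciliumlocalredirectpolicies.cilium.io&lt;/LI-CODE&gt;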
&lt;H2&gt;DNS-Based FQDN Policy Enforcement Flow&lt;/H2&gt;
&lt;P&gt;The diagram below illustrates how Cilium enforces FQDN-based egress policies using DNS observation and datapath programming. During the DNS resolution phase, queries are redirected to NodeLocal DNS (or CoreDNS), where responses are observed and used to populate Cilium’s FQDN-to-IP cache. Cilium then programs these mappings into eBPF maps in the datapath. In the connection phase, when the client initiates an HTTPS connection to the resolved IP, the datapath checks the IP against the learned FQDN map and applies the policy decision before allowing or denying the connection.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Figure: End-to-end flow of DNS resolution, FQDN learning, and eBPF-based policy enforcement in Cilium.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;The Network Policy "Gotcha"&lt;/H2&gt;
&lt;P&gt;If you use &lt;STRONG&gt;CiliumNetworkPolicy &lt;/STRONG&gt;to restrict egress traffic, specifically for &lt;STRONG&gt;FQDN filtering, &lt;/STRONG&gt;you typically allow access to CoreDNS like this:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY&lt;/LI-CODE&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;This will break with local redirection.&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Why? Because LRP redirects the DNS request to the&amp;nbsp;&lt;STRONG&gt;node-local-dns backend endpoint&lt;/STRONG&gt;; strict egress policies must therefore allow both &lt;EM&gt;kube-dns&lt;/EM&gt; (upstream) &lt;STRONG&gt;and&lt;/STRONG&gt; &lt;EM&gt;node-local-dns&lt;/EM&gt; (the redirected destination).&lt;/P&gt;
&lt;H3&gt;The Repro Setup&lt;/H3&gt;
&lt;P&gt;To demonstrate this failure, the cluster is configured with:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;NodeLocal DNSCache&lt;/STRONG&gt;: Deployed as a DaemonSet (&lt;EM&gt;node-local-dns&lt;/EM&gt;) to cache DNS requests locally on every node.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Local Redirect Policy (LRP)&lt;/STRONG&gt;: An active LRP intercepts traffic destined for the &lt;EM&gt;kube-dns&lt;/EM&gt; Service IP and redirects it to the local &lt;EM&gt;node-local-dns&lt;/EM&gt; pod.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Incomplete Network Policy&lt;/STRONG&gt;: A strict &lt;EM&gt;CiliumNetworkPolicy &lt;/EM&gt;(CNP) is enforced on the client pod. While it explicitly allows egress to &lt;EM&gt;kube-dns&lt;/EM&gt;, it &lt;STRONG&gt;misses &lt;/STRONG&gt;the corresponding rule for &lt;EM&gt;node-local-dns&lt;/EM&gt;.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H4&gt;Reveal the issue using Hubble:&lt;/H4&gt;
&lt;P&gt;In this scenario, the client pod &lt;EM&gt;dns-client&lt;/EM&gt; is attempting to resolve the external domain &lt;EM&gt;github.com&lt;/EM&gt;.&lt;/P&gt;
&lt;P&gt;When inspecting the traffic flows, you will see &lt;EM&gt;EGRESS DENIED&lt;/EM&gt; verdicts. Crucially, notice the destination pod in the logs below:&lt;EM&gt; kube-system/node-local-dns&lt;/EM&gt;, not &lt;EM&gt;kube-dns&lt;/EM&gt;.&lt;/P&gt;
&lt;P&gt;Although the application originally sent the packet to the Cluster IP of CoreDNS, Cilium's Local Redirect Policy modified the destination to the local node cache. Since strictly defined Network Policies assume traffic is going to the &lt;EM&gt;kube-dns&lt;/EM&gt; identity, this redirected traffic falls outside the allowed rules and is dropped by the default deny stance.&lt;/P&gt;
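&lt;P&gt;With Hubble available, one way to surface these drops is to filter on verdict and port (pod and namespace names here follow the repro setup; adjust for your cluster):&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;# Follow dropped flows on port 53 originating from the client pod
hubble observe --from-pod default/dns-client --port 53 --verdict DROPPED -f

# The destination shown for the denied flows is the redirected backend,
# kube-system/node-local-dns, not the kube-dns service identity.&lt;/LI-CODE&gt;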
&lt;P&gt;&lt;EM&gt;[Hubble flow logs: EGRESS DENIED verdicts with destination kube-system/node-local-dns]&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;The Fix: You must allow egress to &lt;U&gt;both&lt;/U&gt; labels.&lt;/H3&gt;
&lt;LI-CODE lang="yaml"&gt;  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    # Add this selector for the local cache
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: node-local-dns 
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY&lt;/LI-CODE&gt;
&lt;P&gt;Without this addition, pods protected by strict egress policies will time out resolving DNS, even though the cache is running.&lt;/P&gt;
&lt;H4&gt;Use Hubble to observe the network flows:&lt;/H4&gt;
&lt;P&gt;After adding &lt;EM&gt;matchLabels: k8s:k8s-app: node-local-dns&lt;/EM&gt;, the traffic is now allowed. Hubble confirms a policy verdict of &lt;EM&gt;EGRESS ALLOWED&lt;/EM&gt; for UDP traffic on port 53. Because DNS resolution now succeeds, the response populates the Cilium FQDN cache, subsequently allowing the TCP traffic to &lt;EM&gt;github.com&lt;/EM&gt; on port 443 as intended.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Hubble flow logs: EGRESS ALLOWED for UDP/53, followed by allowed TCP/443 to github.com]&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;Real-World Example: Restricting Egress with FQDN Policies&lt;/H3&gt;
&lt;P&gt;Here is a complete &lt;EM&gt;CiliumNetworkPolicy&lt;/EM&gt; that locks down a workload to only access &lt;EM&gt;api.example.com&lt;/EM&gt;. Note how the DNS rule explicitly allows traffic to both &lt;EM&gt;kube-dns&lt;/EM&gt; (for upstream) and &lt;EM&gt;node-local-dns&lt;/EM&gt; (for the local cache).&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: secure-workload-policy
spec:
  endpointSelector:
    matchLabels:
      app: critical-workload
  egress:
  # 1. Allow DNS Resolution (REQUIRED for FQDN policies)
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    # Allow traffic to the local cache redirection target
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: node-local-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"

  # 2. Allow specific FQDN traffic (populated via DNS lookups)
  - toFQDNs:
    - matchName: "api.example.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP&lt;/LI-CODE&gt;
&lt;H2&gt;Configuration &amp;amp; Upstream Loops&lt;/H2&gt;
&lt;P&gt;When configuring the &lt;EM&gt;ConfigMap &lt;/EM&gt;for &lt;EM&gt;node-local-dns&lt;/EM&gt;, use the standard placeholders provided by the image. The binary replaces them at runtime:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;__PILLAR__CLUSTER__DNS__: The Upstream Service IP (&lt;EM&gt;kube-dns-upstream&lt;/EM&gt;).&lt;/LI&gt;
&lt;LI&gt;__PILLAR__UPSTREAM__SERVERS__: The system resolvers (usually &lt;EM&gt;/etc/resolv.conf&lt;/EM&gt;).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Ensure &lt;EM&gt;kube-dns-upstream&lt;/EM&gt; exists as a Service selecting the CoreDNS pods so cache misses are forwarded to the actual CoreDNS backends.&lt;/P&gt;
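&lt;P&gt;For reference, here is an abridged sketch of these two companion resources, modeled on the upstream NodeLocal DNSCache manifest but binding broadly as the LRP approach requires. Zones, cache sizes, and extra plugins should be adapted to your cluster:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: node-local-dns
  namespace: kube-system
data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 0.0.0.0
        forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
        }
        health :8080
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 0.0.0.0
        forward . __PILLAR__UPSTREAM__SERVERS__
    }
---
# Upstream Service so cache misses reach the real CoreDNS backends
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-upstream
  namespace: kube-system
  labels:
    k8s-app: kube-dns
spec:
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53&lt;/LI-CODE&gt;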
&lt;H2&gt;Alternative: AKS LocalDNS&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;LocalDNS&lt;/STRONG&gt; is an Azure Kubernetes Service (AKS)-managed node-local DNS proxy/cache.&lt;/P&gt;
&lt;H4&gt;Pros:&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Managed lifecycle at the node pool level.&lt;/LI&gt;
&lt;LI&gt;Support for custom configuration via &lt;EM&gt;localdnsconfig.json&lt;/EM&gt; (e.g., custom server blocks, cache tuning).&lt;/LI&gt;
&lt;LI&gt;No manual DaemonSet management required.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;Cons &amp;amp; Limitations:&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Incompatibility with FQDN Policies&lt;/STRONG&gt;: As noted in the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/aks/localdns-custom" target="_blank" rel="noopener"&gt;official documentation&lt;/A&gt;, LocalDNS isn’t compatible with applied FQDN filter policies in ACNS/Cilium; if you rely on FQDN enforcement, prefer a DNS path that preserves FQDN learning/enforcement.&lt;/LI&gt;
&lt;LI&gt;Updating configuration requires reimaging the node pool.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;For environments heavily relying on strict Cilium Network Policies and FQDN filtering, the manual deployment method described above (using LRP) can be more reliable and transparent.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;AKS recommends not enabling both upstream NodeLocal DNSCache and LocalDNS in the same node pool, as DNS traffic is routed through LocalDNS and results may be unexpected.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;References&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/" target="_blank" rel="noopener"&gt;Kubernetes Documentation: NodeLocal DNSCache&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://docs.cilium.io/en/stable/network/kubernetes/local-redirect-policy/" target="_blank" rel="noopener"&gt;Cilium Documentation: Local Redirect Policy&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/aks/localdns-custom" target="_blank" rel="noopener"&gt;AKS Documentation: Configure LocalDNS&lt;/A&gt;&lt;/LI&gt;
&lt;/OL&gt;</description>
      <pubDate>Tue, 10 Mar 2026 09:23:51 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/scaling-dns-on-aks-with-cilium-nodelocal-dnscache-lrp-and-fqdn/ba-p/4486323</guid>
      <dc:creator>Simone_Rodigari</dc:creator>
      <dc:date>2026-03-10T09:23:51Z</dc:date>
    </item>
    <item>
      <title>Event-Driven to Change-Driven: Low-cost dependency inversion</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/event-driven-to-change-driven-low-cost-dependency-inversion/ba-p/4478948</link>
      <description>&lt;P&gt;Event-driven architectures tout scalability, loose coupling, and eventual consistency. The architectural patterns are sound, the theory is compelling, and the blog posts make it look straightforward.&lt;/P&gt;
&lt;P&gt;Then you implement it.&lt;/P&gt;
&lt;P&gt;Suddenly you're maintaining separate event stores, implementing transactional outboxes, debugging projection rebuilds, versioning events across a dozen microservices, and writing mountains of boilerplate to handle what should be simple queries.&lt;/P&gt;
&lt;P&gt;Your domain events that were supposed to capture rich business meaning have devolved into glorified database change notifications. Downstream services diff field values to extract intent from "&lt;EM&gt;OrderUpdated&lt;/EM&gt;" events because developers just don't get what constitutes a proper domain event.&lt;/P&gt;
&lt;P&gt;The complexity tax is real. Don't get me wrong, it's very elegant, but for many systems it's unjustified.&lt;/P&gt;
&lt;P&gt;Drasi offers an alternative: &lt;EM&gt;change-driven architecture&lt;/EM&gt; that delivers reactive, real-time capabilities across multiple data sources without requiring you to rewrite your application or overcomplicate your architecture.&lt;/P&gt;
&lt;H3&gt;What do we mean by “event-driven” architecture?&lt;/H3&gt;
&lt;P&gt;As &lt;A href="https://www.youtube.com/watch?v=STKCRSUsyP0" data-test-app-aware-link="" target="_blank"&gt;Martin Fowler notes&lt;/A&gt;, event-driven architecture isn't a single pattern; it's at least four distinct patterns that are often confused, each with its own benefits and traps.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Event Notification&lt;/STRONG&gt; is the simplest form. Here, events act as signals that something has happened, but carry minimal data, often just an identifier. The recipient must query the source system for more details if needed. For example, a service emits an &lt;EM&gt;OrderPlaced &lt;/EM&gt;event with just the order ID. Downstream consumers must query the order service to retrieve full order details.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Event Carried State Transfer&lt;/STRONG&gt; broadcasts full state changes through events. When an order ships, you publish an &lt;EM&gt;OrderShipped&lt;/EM&gt; event containing all the order details. Downstream services maintain their own materialized views or projections by consuming these events.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Event Sourcing&lt;/STRONG&gt; goes further: events become your source of truth. Instead of storing current state, you store the sequence of events that led to that state. Your order isn't a row in a database; it's the sum of &lt;EM&gt;OrderPlaced&lt;/EM&gt;, &lt;EM&gt;ItemAdded&lt;/EM&gt;, &lt;EM&gt;PaymentProcessed&lt;/EM&gt;, and &lt;EM&gt;OrderShipped&lt;/EM&gt; events.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;CQRS (Command Query Responsibility Segregation)&lt;/STRONG&gt; separates write operations (commands) from read operations (queries). While not inherently event-driven, CQRS is often paired with event sourcing or event-carried state transfer to optimize for scalability and maintainability. Originally derived from Bertrand Meyer's Command-Query Separation principle and popularized by Greg Young, CQRS addresses a specific architectural challenge: the tension between optimizing for writes versus optimizing for reads.&lt;/P&gt;
&lt;P&gt;The pattern promises several benefits:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Optimized data models&lt;/STRONG&gt;: Your write model can focus on transactional consistency while read models optimize for query performance&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Scalability&lt;/STRONG&gt;: Read and write sides can scale independently&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Temporal queries&lt;/STRONG&gt;: With event sourcing, you get time travel for free—reconstruct state at any point in history&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Audit trail&lt;/STRONG&gt;: Every change is captured as an immutable event&lt;/P&gt;
&lt;P&gt;While CQRS isn't inherently tied to Domain-Driven Design (DDD), the pattern complements DDD well. In DDD contexts, CQRS enables different bounded contexts to maintain their own read models tailored to their specific ubiquitous language, while the write model protects domain invariants. This is why you'll often see them discussed together, though each can be applied independently.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;The core motivation for these patterns is often to invert the dependency between systems, so that your downstream services do not need to know about your upstream services.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H3&gt;The Developer's Struggle: When Domain Events Become Database Events&lt;/H3&gt;
&lt;P&gt;Chris Kiehl puts it bluntly in his article "&lt;A href="https://chriskiehl.com/article/event-sourcing-is-hard" data-test-app-aware-link="" target="_blank"&gt;Don't Let the Internet Dupe You, Event Sourcing is Hard&lt;/A&gt;": &lt;EM&gt;"The sheer volume of plumbing code involved is staggering—instead of a friendly N-tier setup, you now have classes for commands, command handlers, command validators, events, aggregates, and then projections, model classes, access classes, custom materialization code, and so on."&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;But the real tragedy isn't the boilerplate; it's what happens to those carefully crafted domain events. When developers are disconnected from the real-world business, they struggle to understand the nuances of domain events, and a dangerous pattern emerges. Instead of modeling meaningful business processes, teams default to what they know: CRUD.&lt;/P&gt;
&lt;P&gt;Your event stream starts looking like this:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;OrderCreated&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;OrderUpdated&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;OrderUpdated&lt;/EM&gt;&lt;/STRONG&gt; (again)&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;OrderUpdated&lt;/EM&gt;&lt;/STRONG&gt; (wait, what changed?)&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;OrderDeleted&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;As one developer &lt;A href="https://www.linkedin.com/pulse/anti-patterns-event-driven-architecture-arpit-jain" data-test-app-aware-link="" target="_blank"&gt;noted on LinkedIn&lt;/A&gt;, these "CRUD events" are really just &lt;EM&gt;"leaky events that lack clarity and should not be used to replicate databases as this leaks implementation details and couples services to a shared data model."&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Dennis Doomen, reflecting on &lt;A href="https://www.dennisdoomen.com/2017/11/the-ugly-of-event-sourcingreal-world.html" data-test-app-aware-link="" target="_blank"&gt;real-world production issues&lt;/A&gt;, observes: &lt;EM&gt;"It's only once you have a living, breathing machine, users which depend on you, consumers which you can't break, and all the other real-world complexities that plague software projects that the hard problems in event sourcing will rear their heads."&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The result? Your elegant event-driven architecture devolves into an expensive, brittle form of self-maintained Change Data Capture (CDC). You're not modeling business processes; you're just broadcasting database mutations with extra steps.&lt;/P&gt;
&lt;H3&gt;The Anti-Corruption Layer: Your Defense Against the Outside World&lt;/H3&gt;
&lt;P&gt;In DDD, an Anti-Corruption Layer (ACL) protects your bounded context from external models that would corrupt your domain. Think of it as a translator that speaks both languages, the messy external model and your clean internal model.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Diagram: an Anti-Corruption Layer translating between the external model and the internal domain model]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The ACL ensures that changes to the external system don't ripple through your domain. If the legacy system changes its schema, you update the translator, not your entire domain model.&lt;/P&gt;
&lt;H3&gt;When Event Taxonomies Become Your ACL (And Why They Fail)&lt;/H3&gt;
&lt;P&gt;In most event-driven architectures, your event taxonomy is supposed to serve as the shared contract between services. Each service publishes events using its own ubiquitous language, and consumers translate these into their own models; this translation is the ACL.&lt;/P&gt;
&lt;P&gt;The theory looks beautiful:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Diagram: the theory: each consumer translates well-defined domain events through its own ACL]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;But reality? Most teams end up with this:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Diagram: the reality: CRUD-style events coupling every consumer to the producer's internal data model]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Instead of &lt;STRONG&gt;&lt;EM&gt;OrderPaid&lt;/EM&gt;&lt;/STRONG&gt; events that carry business meaning, we get &lt;STRONG&gt;&lt;EM&gt;OrderUpdated&lt;/EM&gt;&lt;/STRONG&gt; events that force every consumer to reconstruct intent by diffing fields. When you change your database schema, say splitting the orders table or switching from SQL to NoSQL, every downstream service breaks because they're all coupled to your internal data model.&lt;/P&gt;
&lt;P&gt;You haven't built an anti-corruption layer. You've built a corruption pipeline that efficiently distributes your internal implementation details across the entire system, forcing you to deploy all services in lock step and eroding the decoupling benefits you were supposed to get.&lt;/P&gt;
&lt;H2&gt;Enter Drasi: Continuous Queries&lt;/H2&gt;
&lt;P&gt;This is where Drasi changes the game. Instead of publishing events and hoping downstream services can make sense of them, Drasi tails the changelog of the data source itself and derives meaning through continuous queries.&lt;/P&gt;
&lt;P&gt;A continuous query in Drasi isn't just a query that runs repeatedly; it's a living, breathing projection that reacts to changes in real time. Here's the key insight: instead of imperative code that processes events ("when this happens, do that"), you write declarative queries that describe the state you care about ("I want to know about orders that are ready and have drivers waiting").&lt;/P&gt;
&lt;P&gt;Let's break down what makes this powerful:&lt;/P&gt;
&lt;H3&gt;Declarative vs. Imperative&lt;/H3&gt;
&lt;P&gt;Traditional event processing:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Code: imperative event-handler implementation]&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Drasi continuous query:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Code: declarative Drasi continuous query]&lt;/EM&gt;&lt;/P&gt;
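&lt;P&gt;To make the contrast concrete, here is a sketch of what such a declarative query can look like when declared for Drasi. The source ids, labels, and property names are invented for illustration, not taken from a real deployment; see the Drasi docs for the exact schema:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;# Illustrative Drasi continuous query (ids, labels, and properties are made up)
apiVersion: v1
kind: ContinuousQuery
name: ready-orders-with-waiting-drivers
spec:
  sources:
    subscriptions:
      - id: retail-ops       # PostgreSQL source: orders
      - id: physical-ops     # MySQL source: vehicles
    # a join definition relating Order to Vehicle is omitted here for brevity
  query: >
    MATCH (o:Order)-[:PICKUP_BY]->(v:Vehicle)
    WHERE o.status = 'ready' AND v.location = 'Curbside'
    RETURN o.id AS orderId, v.plate AS vehicle&lt;/LI-CODE&gt;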
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Semantic Mapping from Low-Level Changes&lt;/H3&gt;
&lt;P&gt;Drasi excels at transforming database-level changes into business-meaningful events. You're not reacting to "row updated in orders table", you're reacting to "order ready for curbside pickup."&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;This enables the same core benefits of dependency inversion we get from event-driven architectures but at a fraction of the effort.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H3&gt;Advanced Temporal Features&lt;/H3&gt;
&lt;P&gt;Remember those developers struggling with "&lt;EM&gt;OrderUpdated&lt;/EM&gt;" events, trying to figure out if something just happened or has been true for a while? Drasi handles this elegantly:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Code: continuous query using Drasi's temporal functions]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This query only fires when a driver has been waiting for more than 10 minutes: no timestamp tracking, no state machines, no complex event correlation logic. Imagine trying to implement this manually in a downstream event consumer. 😱&lt;/P&gt;
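&lt;P&gt;As a rough sketch, the condition can be expressed directly in the query using one of Drasi's temporal ("future") functions. The function name and the labels/properties below are illustrative; check the Drasi query-language docs for the exact signatures:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;  # Fragment of a continuous query spec (illustrative names)
  query: >
    MATCH (o:Order)-[:PICKUP_BY]->(v:Vehicle)
    WHERE o.status != 'ready'
      AND drasi.trueFor(v.location = 'Curbside', duration({ minutes: 10 }))
    RETURN o.id AS orderId, v.plate AS vehicle&lt;/LI-CODE&gt;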
&lt;H3&gt;Cross-Source Aggregation Without Code&lt;/H3&gt;
&lt;P&gt;With Drasi, you can have live projections across PostgreSQL, MySQL, SQL Server, and Cosmos DB as if they were a single graph:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Code: continuous query spanning multiple data sources]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;No custom aggregation service. No event stitching logic. No custom downstream datastore to track the sum or keep a materialized projection. Just a query.&lt;/P&gt;
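&lt;P&gt;Schematically, the cross-source wiring is just configuration: each database is a subscribed source, and a join definition tells Drasi how to stitch their rows into one graph. The ids, labels, and key properties here are invented for illustration:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;# Illustrative multi-source continuous query (all names invented)
apiVersion: v1
kind: ContinuousQuery
name: units-ordered-per-product
spec:
  sources:
    subscriptions:
      - id: postgres-orders      # PostgreSQL source
      - id: mysql-catalog        # MySQL source
    joins:
      - id: FOR_PRODUCT
        keys:
          - label: OrderLine     # from postgres-orders
            property: sku
          - label: Product       # from mysql-catalog
            property: sku
  query: >
    MATCH (l:OrderLine)-[:FOR_PRODUCT]->(p:Product)
    RETURN p.sku AS sku, sum(l.quantity) AS unitsOrdered&lt;/LI-CODE&gt;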
&lt;H3&gt;Continuous Queries as Your Shared Contract&lt;/H3&gt;
&lt;P&gt;Drasi's continuous queries, combined with pre-processing middleware, can form the shared contract that your anti-corruption layer can depend on.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Diagram: continuous queries as the stable contract between data sources and consumers]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The continuous query becomes your contract. Downstream systems don't know or care whether orders come from PostgreSQL, MongoDB, or a CSV file. They don't know if you normalized your database, denormalized it, or moved to event sourcing. They just consume the query results. Clean, semantic, and stable.&lt;/P&gt;
&lt;H3&gt;Reactions as your Declarative Consumers&lt;/H3&gt;
&lt;P&gt;Drasi does not simply output a stream of raw change diffs; instead, it offers a library of interchangeable &lt;A href="https://drasi.io/concepts/reactions/" data-test-app-aware-link="" target="_blank"&gt;Reactions&lt;/A&gt; that can act on the output of continuous queries. These are declared in YAML and can do anything from hosting a WebSocket endpoint that provides a live projection to your UI, to calling an HTTP endpoint or publishing a message on a queue.&lt;/P&gt;
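&lt;P&gt;A Reaction is declared much like a query. As a sketch (the reaction kind and the query name are illustrative, not from a real deployment), subscribing a SignalR-style reaction to a continuous query looks roughly like:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;# Illustrative Drasi Reaction definition (names invented)
apiVersion: v1
kind: Reaction
name: dashboard-updates
spec:
  kind: SignalR            # other kinds cover HTTP calls, queues, debug views
  queries:
    ready-orders-with-waiting-drivers:   # subscribe to a continuous query by name&lt;/LI-CODE&gt;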
&lt;H2&gt;Example: The Curbside Pickup System&lt;/H2&gt;
&lt;P&gt;Let's see how this works in Drasi's &lt;A href="https://drasi.io/tutorials/curbside-pickup/" data-test-app-aware-link="" target="_blank"&gt;curbside pickup tutorial&lt;/A&gt;. This example has two independent databases and serves as a great illustration of a real-time projection built from multiple upstream services.&lt;/P&gt;
&lt;H3&gt;The Business Problem&lt;/H3&gt;
&lt;P&gt;A retail system needs to:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Match ready orders with drivers who've arrived at pickup zones&lt;/LI&gt;
&lt;LI&gt;Alert staff when drivers wait more than 10 minutes without their order being ready&lt;/LI&gt;
&lt;LI&gt;Coordinate data from two different systems (retail ops in PostgreSQL, physical ops in MySQL)&lt;/LI&gt;
&lt;/OL&gt;
&lt;H3&gt;Traditional Event-Driven Approach&lt;/H3&gt;
&lt;P&gt;In this architecture, you'd need something like:&lt;/P&gt;
&lt;img /&gt;&lt;img /&gt;&lt;img /&gt;&lt;img /&gt;
&lt;P&gt;That's just the happy path. We haven't handled:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Event ordering issues&lt;/LI&gt;
&lt;LI&gt;Partial failures&lt;/LI&gt;
&lt;LI&gt;Cache invalidation&lt;/LI&gt;
&lt;LI&gt;Service restarts and replay&lt;/LI&gt;
&lt;LI&gt;Duplicate events&lt;/LI&gt;
&lt;LI&gt;Transactional outboxing&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;The Drasi Approach&lt;/H3&gt;
&lt;P&gt;With Drasi, the entire aggregation service above becomes two queries:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Delivery Dashboard Query:&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Wait Detection Query:&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;That's it. No event handlers. No caching. No timers. No state management. Drasi handles:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Change detection across both databases&lt;/LI&gt;
&lt;LI&gt;Correlation between orders and vehicles&lt;/LI&gt;
&lt;LI&gt;Temporal logic for wait detection&lt;/LI&gt;
&lt;LI&gt;Pushing updates to dashboards via SignalR&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The queries define your business logic declaratively. When data changes in either database, Drasi automatically re-evaluates the queries and triggers reactions for any changes in the result set.&lt;/P&gt;
&lt;H2&gt;Drasi: The Non-Invasive Alternative to Legacy System Rewrites&lt;/H2&gt;
&lt;P&gt;Here's perhaps the most compelling argument for Drasi: it doesn't require you to rewrite anything.&lt;/P&gt;
&lt;P&gt;Traditional event sourcing means:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Redesigning your application around events&lt;/LI&gt;
&lt;LI&gt;Rewriting your persistence layer&lt;/LI&gt;
&lt;LI&gt;Implementing transactional outboxes&lt;/LI&gt;
&lt;LI&gt;Managing snapshots and replays&lt;/LI&gt;
&lt;LI&gt;Training your team on new patterns with a steep learning curve&lt;/LI&gt;
&lt;LI&gt;Migrating existing data to event streams&lt;/LI&gt;
&lt;LI&gt;Building projection infrastructure&lt;/LI&gt;
&lt;LI&gt;Updating all consumers to handle events&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;As one developer noted about their &lt;A href="https://www.infoq.com/news/2019/09/cqrs-event-sourcing-production/" data-test-app-aware-link="" target="_blank"&gt;event sourcing journey&lt;/A&gt;: &lt;EM&gt;"Event Sourcing is a beautiful solution for high-performance or complex business systems, but you need to be aware that this also introduces challenges most people don't tell you about."&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Drasi's approach:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Keep your existing databases&lt;/LI&gt;
&lt;LI&gt;Keep your existing services&lt;/LI&gt;
&lt;LI&gt;Keep your existing deployment model&lt;/LI&gt;
&lt;LI&gt;Add continuous queries where you need reactive behavior&lt;/LI&gt;
&lt;LI&gt;Get the benefits of dependency inversion&lt;/LI&gt;
&lt;LI&gt;Gradually migrate complexity from code to queries&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;You can start with a single query on a single table and expand from there. No big bang. No feature freeze. No three-month architecture sprint or risky multi-year investment.&lt;/P&gt;
&lt;H3&gt;Migration Example: From Polling to Reactive&lt;/H3&gt;
&lt;P&gt;Let's say you have a legacy order system where a scheduled job polls for ready orders every 30 seconds:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With Drasi, you:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Point Drasi at your existing database&lt;/LI&gt;
&lt;LI&gt;Write the continuous query&lt;/LI&gt;
&lt;LI&gt;Update your dashboard to receive pushes instead of polls&lt;/LI&gt;
&lt;LI&gt;Turn off the polling job&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Your database hasn't changed. Your order service hasn't changed. You've just added a reactive layer on top that eliminates polling overhead and reduces notification latency from 30 seconds to milliseconds.&lt;/P&gt;
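&lt;P&gt;&lt;EM&gt;Step 2 above, the continuous query itself, can be as small as this hedged sketch (the source ID, labels, and properties are illustrative, not your schema):&lt;/EM&gt;&lt;/P&gt;
&lt;PRE&gt;apiVersion: v1
kind: ContinuousQuery
name: ready-orders
spec:
  sources:
    subscriptions:
      - id: orders-db      # the existing, unmodified database
  query: &gt;
    MATCH (o:Order)
    WHERE o.status = 'ready'
    RETURN o.id, o.customerName&lt;/PRE&gt;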
&lt;P&gt;The intellectually satisfying complexity of event sourcing often obscures a simple truth: most systems don't need it. They need to know when interesting things change in their data and react accordingly. They need to combine data from multiple sources without writing bespoke aggregation services. They need to transform low-level changes into business-meaningful events.&lt;/P&gt;
&lt;P&gt;Drasi delivers these capabilities without the ceremony.&lt;/P&gt;
&lt;H2&gt;Where Do We Go from Here?&lt;/H2&gt;
&lt;P&gt;If you're building a new system and your team has deep event sourcing experience, embrace the pattern. Event sourcing shines for certain domains.&lt;/P&gt;
&lt;P&gt;But if you're like many teams, trying to add reactive capabilities to existing systems, struggling with data synchronization across services, or finding that your "events" are just CRUD operations in disguise, consider the change-driven approach.&lt;/P&gt;
&lt;P&gt;Start small:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Identify one painful polling loop or batch job&lt;/LI&gt;
&lt;LI&gt;Set up Drasi to monitor those same data sources&lt;/LI&gt;
&lt;LI&gt;Write a continuous query that captures the business condition&lt;/LI&gt;
&lt;LI&gt;Replace the polling with push-based reactions&lt;/LI&gt;
&lt;LI&gt;Measure the reduction in latency, overhead, and code complexity&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;The best architecture isn't the most sophisticated one; it's the one your team can understand, maintain, and evolve. Sometimes that means acknowledging that we've been mid-curving it with overly complex event-driven architectures.&lt;/P&gt;
&lt;P&gt;Drasi and change-driven architecture offer the power of reactive systems without the complexity tax. Your data changes. Your queries notice. Your systems react.&lt;/P&gt;
&lt;P&gt;It makes it a non-event.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Want to explore Drasi further? Check out the &lt;/EM&gt;&lt;A href="https://drasi.io/" data-test-app-aware-link="" target="_blank"&gt;&lt;EM&gt;official documentation&lt;/EM&gt;&lt;/A&gt;&lt;EM&gt; and try the &lt;/EM&gt;&lt;A href="https://drasi.io/tutorials/curbside-pickup/" data-test-app-aware-link="" target="_blank"&gt;&lt;EM&gt;curbside pickup tutorial&lt;/EM&gt;&lt;/A&gt;&lt;EM&gt; to see change-driven architecture in action.&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 17 Dec 2025 22:22:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/event-driven-to-change-driven-low-cost-dependency-inversion/ba-p/4478948</guid>
      <dc:creator>CollinBrian</dc:creator>
      <dc:date>2025-12-17T22:22:37Z</dc:date>
    </item>
    <item>
      <title>Building Bridges: Microsoft’s Participation in the Fedora Linux Community</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/building-bridges-microsoft-s-participation-in-the-fedora-linux/ba-p/4478461</link>
      <description>&lt;P&gt;At Microsoft, we believe that meaningful open source participation is driven by people, not corporations. But companies can - and should - create the conditions that empower individuals to contribute. Over the past year, our Community Linux Engineering team has been doing just that, focusing on Fedora Linux and working closely with the community to improve infrastructure, tooling, and collaboration. This post shares some of the highlights of that work and outlines where we’re headed next.&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Modernizing Fedora Cloud Image Delivery&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;One of our most impactful contributions this year has been expanding the availability of Fedora Cloud images across major cloud platforms. We introduced support for publishing images to both the Azure Community Gallery and Google Cloud Platform—capabilities that didn’t exist before. At the same time, we modernized the existing AWS image publishing process by migrating it to a new, OpenShift-hosted automation framework. This new system, developed by our team and led by engineer Jeremy Cline, streamlines image delivery across all three platforms and positions the project to scale and adapt more easily in the future.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We partnered with Adam Williamson in Fedora QE to extend this tooling to support container image uploads, replacing fragile shell scripts with a robust, maintainable system. Nightly Fedora builds are now uploaded to Azure, with one periodically promoted to “latest” after manual validation and basic functionality testing. This ensures cloud users get up-to-date, ready-to-run images - critical for workloads that demand fast boot times and minimal setup. As you’ll see, we have ideas for improving this testing.&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Enabling Secure Boot on ARM with Sigul&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;Secure Boot is essential for trusted cloud workloads across architectures. Our current focus includes enabling it on ARM-based systems. Fedora currently signs most artifacts with Sigul, but UEFI applications are handled separately via a dedicated x86_64 builder with a smart card. We’re working to enable Sigul-based signing for UEFI applications across architectures, but Sigul is a complex project with unmaintained dependencies. We’ve stepped in to help modernize Sigul, starting with a Rust-based client and a roadmap to re-architect the code and structure for easier maintenance and improved performance. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;This work is about more than just Microsoft’s needs - it’s about enabling Secure Boot support on ARM out of the box, just as users expect on x86_64 systems.&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Bringing Inspektor Gadget to Fedora&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;Inspektor Gadget is an eBPF-based toolkit for kernel instrumentation, enabling powerful observability use cases like performance profiling and syscall tracing. &lt;SPAN data-olk-copy-source="MessageBody"&gt;The Community Linux Engineering team consulted with the Inspektor Gadget maintainers at Microsoft about putting the project in Fedora.&amp;nbsp; This led to the maintainers natively packaging it for Fedora and assuming ongoing maintenance of the package.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;We are encouraging teams to become active Fedora participants, to maintain their own packages, and to engage directly with the community. We believe in bi-directional feedback: upstream contributions should benefit both the project and the contributors.&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Azure VM Utils: Simplifying Cloud Enablement&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;To streamline Fedora’s compatibility with Azure, we’ve introduced a package called azure-vm-utils. It consolidates Udev rules and low-level utilities that make Fedora work better on Azure infrastructure, particularly with NVMe devices. This package is a step toward greater transparency and maintainability and could serve as a model for other cloud providers.&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Fedora WSL: A Layer 9 Success&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;Fedora is now officially available in the Windows Subsystem for Linux (WSL) catalog - a milestone that required both technical and organizational effort. While the engineering work was substantial, the real challenge was navigating the legal and governance landscape. This success reflects deep collaboration between Fedora leadership, Red Hat, and Microsoft.&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Looking Ahead: Strategic Participation and Testing&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;We’re not stopping here. Our roadmap includes:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Replacing Sigul&lt;/STRONG&gt; with a modern, maintainable signing infrastructure.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Expanding participation&lt;/STRONG&gt; in Fedora SIGs (Cloud, Go, Rust) where Microsoft has relevant expertise.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Improving automated testing&lt;/STRONG&gt; using Microsoft’s open source LISA framework to validate Fedora images at cloud scale.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enhancing the Fedora-on-Azure experience&lt;/STRONG&gt;, including exploring mirrors within Azure and expanding agent/extension support.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We’re also working closely with the Azure Linux team, which is aligning its development model with Fedora - much like RHEL does. While Azure Linux has used some Fedora sources in the past, its upcoming 4.0 release is intended to align much more closely with Fedora as an upstream.&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;A Call for Collaboration&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;While contributing patches is a good start, we intend to do much more. We aim to be a deeply involved member of the Fedora community - participating in SIGs, maintaining packages, and listening to feedback. If you have ideas for where Microsoft can make strategic investments that benefit Fedora, we want to hear them. &amp;nbsp;You’ll find us alongside you in Fedora meetings, forums, and at conferences like Flock.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Open source thrives when contributors bring their whole selves to the table. At Microsoft, we’re working to ensure our engineers can do just that - by aligning company goals with community value.&lt;/P&gt;
&lt;P&gt;(This post is based on a &lt;A class="lia-external-url" href="https://www.youtube.com/live/YhoFPG7Ack0?si=v5KH_0nRXl_bKtBD&amp;amp;t=4290" target="_blank" rel="noopener"&gt;talk delivered at Flock to Fedora 2025&lt;/A&gt;.)&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 17 Dec 2025 13:47:31 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/building-bridges-microsoft-s-participation-in-the-fedora-linux/ba-p/4478461</guid>
      <dc:creator>bexelbie</dc:creator>
      <dc:date>2025-12-17T13:47:31Z</dc:date>
    </item>
    <item>
      <title>Beyond the Chat Window: How Change-Driven Architecture Enables Ambient AI Agents</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/beyond-the-chat-window-how-change-driven-architecture-enables/ba-p/4475026</link>
      <description>&lt;P&gt;AI agents are everywhere now. Powering chat interfaces, answering questions, helping with code. We've gotten remarkably good at this conversational paradigm. But while the world has been focused on chat experiences, something new is quietly emerging: ambient agents. These aren't replacements for chat, they're an entirely new category of AI system that operates in the background, sensing, processing, and responding to the world in real time. And here's the thing, this is a new frontier. The infrastructure we need to build these systems barely exists yet.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Or at least, it didn't until now.&lt;/P&gt;
&lt;H4 aria-level="2"&gt;&lt;SPAN class="lia-text-color-15"&gt;Two Worlds: Conversational and Ambient&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Let me paint you a picture of the conversational AI paradigm we know well.&amp;nbsp;You open a chat window. You type a question. You wait. The AI responds. Rinse and repeat.&amp;nbsp;It's&amp;nbsp;the digital equivalent of having a brilliant assistant sitting at a desk, ready to help when you tap them on the shoulder.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Now imagine a completely different kind of assistant. One that watches for&amp;nbsp;important changes,&amp;nbsp;anticipates&amp;nbsp;needs, and springs into action without being asked.&amp;nbsp;That's&amp;nbsp;the promise of ambient agents.&amp;nbsp;AI systems that, as&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://blog.langchain.com/introducing-ambient-agents/" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;LangChain puts it&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;"listen to an event stream and act on&amp;nbsp;it&amp;nbsp;accordingly, potentially acting on multiple events at a time."&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This&amp;nbsp;isn't&amp;nbsp;an evolution of&amp;nbsp;chat;&amp;nbsp;it's&amp;nbsp;a fundamentally different interaction paradigm. Both have their place. Chat is great for collaboration and back-and-forth reasoning. Ambient agents excel at continuous monitoring and autonomous response. Instead of human-initiated conversations, ambient agents&amp;nbsp;operate&amp;nbsp;through detecting changes in upstream systems and&amp;nbsp;maintaining&amp;nbsp;context across time without constant prompting.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The use cases are compelling and distinct from chat. Imagine&amp;nbsp;a&amp;nbsp;project management&amp;nbsp;assistant that&amp;nbsp;operates&amp;nbsp;in two modes: you can&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;chat&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;with it to&amp;nbsp;ask,&amp;nbsp;"summarize&amp;nbsp;project status",&amp;nbsp;but it also runs in the background,&amp;nbsp;constantly&amp;nbsp;monitoring&amp;nbsp;new tickets that are created, or deployment pipelines that fail, automatically reassigning tasks. Or consider a DevOps agent that you can query conversationally ("what's our current CPU usage?") but also&amp;nbsp;monitors&amp;nbsp;your infrastructure continuously, detecting anomalies and starting remediation before you even know&amp;nbsp;there's&amp;nbsp;a problem.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="2"&gt;&lt;SPAN class="lia-text-color-15"&gt;The Challenge: Real-Time Change Detection&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Here's&amp;nbsp;where building ambient agents gets tricky. While chat-based agents work perfectly within the request-response paradigm, ambient agents need something entirely different: continuous monitoring and real-time change detection. How do you efficiently detect changes across multiple data sources? How do you avoid the performance nightmare of constant polling? How do you ensure your agent reacts instantly when something critical happens?&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Developers trying to build ambient agents hit the same wall: creating a reliable, scalable change detection system is&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;hard&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. You either end up with:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Polling hell&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; Constantly querying databases, burning through resources, and still missing changes between polls&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Legacy system rewrites&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&amp;nbsp;Massive expensive multi-year projects to re-write legacy systems so that they produce&amp;nbsp;domain events&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Webhook spaghetti&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; Managing dozens of event sources, each with different formats and reliability guarantees&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This is where the story takes an interesting turn.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="2"&gt;&lt;SPAN class="lia-text-color-15"&gt;Enter&amp;nbsp;Drasi: The Change Detection Engine You Didn't Know You Needed&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;A href="https://drasi.io/" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Drasi&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;is not another AI framework. Instead, it solves the problem that ambient agents&amp;nbsp;need&amp;nbsp;solved: intelligent change detection. Think of it as the sensory system for your AI agents,&amp;nbsp;the infrastructure that lets them perceive changes in the world.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Drasi&amp;nbsp;is built around three simple components:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Sources&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&amp;nbsp;Connectivity to the systems that&amp;nbsp;Drasi&amp;nbsp;can&amp;nbsp;observe&amp;nbsp;as sources of change&amp;nbsp;(PostgreSQL, MySQL, Cosmos DB, Kubernetes, EventHub)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Continuous Queries&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; Graph-based queries (using Cypher/GQL) that&amp;nbsp;monitor&amp;nbsp;for specific change patterns&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Reactions&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&amp;nbsp;What happens when a continuous query&amp;nbsp;detects&amp;nbsp;changes, or lack thereof&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;But&amp;nbsp;here's&amp;nbsp;the killer feature:&amp;nbsp;Drasi&amp;nbsp;doesn't&amp;nbsp;just detect that&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;something&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;changed. It understands&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;what&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;changed and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;why it matters&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, and even if something should have changed but did not. Using continuous queries, you can define complex conditions that your agents care about, and&amp;nbsp;Drasi&amp;nbsp;handles all the plumbing to deliver those insights in real time.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
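&lt;P&gt;&lt;EM&gt;As a sketch, a Source is a small declarative YAML document. The field names below follow the general shape of Drasi's source configuration, but treat the specific values and properties as illustrative:&lt;/EM&gt;&lt;/P&gt;
&lt;PRE&gt;apiVersion: v1
kind: Source
name: app-db
spec:
  kind: PostgreSQL
  properties:
    host: postgres.default.svc.cluster.local
    port: 5432
    database: app
    tables:
      - public.orders&lt;/PRE&gt;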
&lt;H4 aria-level="2"&gt;&lt;SPAN class="lia-text-color-15"&gt;The Bridge:&amp;nbsp;langchain-drasi&amp;nbsp;Integration&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Now, detecting changes is only&amp;nbsp;part of the challenge. You need to connect those changes to your AI agents in a way that makes sense.&amp;nbsp;That's&amp;nbsp;where&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/drasi-project/langchain-drasi" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;langchain-drasi&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;comes in,&amp;nbsp;a purpose-built integration that bridges&amp;nbsp;Drasi's&amp;nbsp;change detection with&amp;nbsp;LangChain's&amp;nbsp;agent frameworks. It achieves this by&amp;nbsp;leveraging&amp;nbsp;the&amp;nbsp;Drasi&amp;nbsp;MCP Reaction, which exposes&amp;nbsp;Drasi&amp;nbsp;continuous queries as&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://modelcontextprotocol.io/specification/2025-06-18/server/resources" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;MCP resources&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The integration provides&amp;nbsp;a&amp;nbsp;simple&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Tool&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;that agents can use to:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Discover&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;available queries automatically&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Read&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;current query results on demand&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Subscribe&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;to real-time updates that flow directly into agent memory and workflow&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Here's what this looks like in practice:&lt;/SPAN&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;PRE&gt;&lt;SPAN class="lia-text-color-9"&gt;from &lt;/SPAN&gt;langchain_drasi &lt;SPAN class="lia-text-color-9"&gt;import&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN class="lia-text-color-9"&gt;&amp;nbsp;&lt;/SPAN&gt;create_drasi_tool,&amp;nbsp;MCPConnectionConfig&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:2,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:270,&amp;quot;335572071&amp;quot;:4,&amp;quot;335572072&amp;quot;:1,&amp;quot;335572073&amp;quot;:4278190080,&amp;quot;335572075&amp;quot;:4,&amp;quot;335572076&amp;quot;:4,&amp;quot;335572077&amp;quot;:4278190080,&amp;quot;335572079&amp;quot;:4,&amp;quot;335572080&amp;quot;:1,&amp;quot;335572081&amp;quot;:4278190080,&amp;quot;335572083&amp;quot;:4,&amp;quot;335572084&amp;quot;:4,&amp;quot;335572085&amp;quot;:4278190080,&amp;quot;469789798&amp;quot;:&amp;quot;single&amp;quot;,&amp;quot;469789802&amp;quot;:&amp;quot;single&amp;quot;,&amp;quot;469789806&amp;quot;:&amp;quot;single&amp;quot;,&amp;quot;469789810&amp;quot;:&amp;quot;single&amp;quot;}"&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN class="lia-text-color-11"&gt;# Configure connection to Drasi MCP server&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;mcp_config&amp;nbsp;=&amp;nbsp;MCPConnectionConfig(&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-20"&gt;server_url&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;=&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-12"&gt;"http://localhost:8083"&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;)&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN class="lia-text-color-11"&gt;# Create the tool with notification handlers&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;drasi_tool&amp;nbsp;=&amp;nbsp;create_drasi_tool(&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-20"&gt;mcp_config&lt;/SPAN&gt;&lt;SPAN 
data-contrast="none"&gt;=mcp_config,&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-20"&gt;notification_handlers&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;=[buffer_handler,&amp;nbsp;console_handler]&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:2,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:270,&amp;quot;335572071&amp;quot;:4,&amp;quot;335572072&amp;quot;:1,&amp;quot;335572073&amp;quot;:4278190080,&amp;quot;335572075&amp;quot;:4,&amp;quot;335572076&amp;quot;:4,&amp;quot;335572077&amp;quot;:4278190080,&amp;quot;335572079&amp;quot;:4,&amp;quot;335572080&amp;quot;:1,&amp;quot;335572081&amp;quot;:4278190080,&amp;quot;335572083&amp;quot;:4,&amp;quot;335572084&amp;quot;:4,&amp;quot;335572085&amp;quot;:4278190080,&amp;quot;469789798&amp;quot;:&amp;quot;single&amp;quot;,&amp;quot;469789802&amp;quot;:&amp;quot;single&amp;quot;,&amp;quot;469789806&amp;quot;:&amp;quot;single&amp;quot;,&amp;quot;469789810&amp;quot;:&amp;quot;single&amp;quot;}"&gt;&amp;nbsp;&lt;BR /&gt;)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&lt;BR /&gt;&lt;SPAN class="lia-text-color-11"&gt;# Now your agent can discover and subscribe to data changes&lt;/SPAN&gt;&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN class="lia-text-color-11"&gt;&lt;SPAN class="lia-text-color-11"&gt;# No more polling, no more webhooks, just reactive intelligence&lt;/SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/PRE&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The beauty is in the notification handlers:&amp;nbsp;pre-built components that&amp;nbsp;determine&amp;nbsp;how changes flow into your agent's consciousness:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;BufferHandler&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;: Queues changes for sequential processing&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;LangGraphMemoryHandler&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;: Automatically&amp;nbsp;integrates&amp;nbsp;changes&amp;nbsp;into agent checkpoints&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;LoggingHandler&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;:&amp;nbsp;Integrates with standard logging infrastructure&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
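&lt;P&gt;&lt;EM&gt;A minimal sketch of what a custom handler could look like, assuming handlers are callables that receive a change-notification payload. The real langchain-drasi handler interface may differ, so treat the shape below as illustrative:&lt;/EM&gt;&lt;/P&gt;

```python
# Hypothetical custom handler: record every change diff that a
# continuous query emits, e.g. for auditing. The actual langchain-drasi
# handler interface may differ; this is an illustrative sketch only.
audit_log = []

def audit_handler(notification):
    """Append each notification (a dict describing rows added to,
    updated in, or removed from a query's result set) to a log."""
    audit_log.append(notification)

# It would then be registered alongside the built-in handlers, e.g.:
#   create_drasi_tool(mcp_config=..., notification_handlers=[audit_handler])
audit_handler({"query": "waiting-drivers", "added": [{"driver": "d-42"}]})
```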
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This&amp;nbsp;isn't&amp;nbsp;just&amp;nbsp;plumbing;&amp;nbsp;it's&amp;nbsp;the foundation for what we might call "change-driven architecture" for AI systems.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="2"&gt;&lt;SPAN class="lia-text-color-15"&gt;Example: The&amp;nbsp;Seeker&amp;nbsp;Agent&amp;nbsp;Has Entered the Chat&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Let's&amp;nbsp;make this concrete with my favorite example from the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;langchain-drasi&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;repository: a&amp;nbsp;hide and seek&amp;nbsp;inspired&amp;nbsp;non-player character (NPC)&amp;nbsp;AI&amp;nbsp;agent that&amp;nbsp;seeks human players in a multi-player game environment.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H5 aria-level="3"&gt;&lt;SPAN class="lia-text-color-15"&gt;The Scenario&amp;nbsp;&lt;/SPAN&gt;&lt;/H5&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Imagine a game where players move around a 2D map, updating their positions in a PostgreSQL database. But&amp;nbsp;here's&amp;nbsp;the twist: the NPC agent&amp;nbsp;doesn't&amp;nbsp;have omniscient vision. It can only detect players under specific conditions:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Stationary targets&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;: &lt;/STRONG&gt;When a player&amp;nbsp;doesn't&amp;nbsp;move for more than 3 seconds (they're exposed)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Frantic movement&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;: &lt;/STRONG&gt;When a player moves more than once in less than a second (panicking reveals your position)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This creates interesting strategic&amp;nbsp;gameplay,&amp;nbsp;players must balance staying still (safe from detection but vulnerable if found) with moving carefully (one move per second is the sweet spot). The NPC agent&amp;nbsp;seeks&amp;nbsp;based on these glimpses of player activity.&amp;nbsp;These detection rules are defined as&amp;nbsp;Drasi&amp;nbsp;continuous queries that&amp;nbsp;monitor&amp;nbsp;the player positions table.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For reference, these are the two continuous queries we will use:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When a player&amp;nbsp;doesn't&amp;nbsp;move for more than 3 seconds, this is&amp;nbsp;a great example&amp;nbsp;of detecting the absence of change use the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://drasi.io/reference/query-language/drasi-custom-functions/#drasi-future-functions" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;trueLater function&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="lia-text-color-9"&gt;MATCH &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; (&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;player&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-9"&gt;{&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;type&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-12"&gt;'human'&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-9"&gt;}&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;)&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;WHERE&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;drasi&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-10"&gt;true&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Later&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;drasi&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;changeDateTime&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;) &amp;lt;= (&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;datetime&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;realtime&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;() -&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;duration&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-9"&gt;{&lt;/SPAN&gt;&lt;SPAN 
data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;seconds&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;3&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-9"&gt;}&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;)),&amp;nbsp;&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;drasi&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;changeDateTime&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;) +&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;duration&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-9"&gt;{&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;seconds&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-11"&gt;3&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-9"&gt;}&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;)&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; )&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;RETURN &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;id&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN 
data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;x&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;y&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:2,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:270}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When a player moves more than once in less than a second is an example of using the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://drasi.io/reference/query-language/drasi-custom-functions/#drasi-delta-functions" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;previousValue function&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;to compare&amp;nbsp;that current state with a prior state:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="lia-text-color-9"&gt;MATCH&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; (&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;player&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-9"&gt;{&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;type&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-12"&gt;'human'&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="lia-text-color-9"&gt;}&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;)&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;WHERE&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;drasi&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;changeDateTime&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;).&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;epochMillis&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;-&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;drasi&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;previousValue&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;drasi&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;changeDateTime&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;).&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;epochMillis&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;) &amp;lt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN 
class="lia-text-color-11"&gt;1000&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;RETURN &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;id&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;x&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt; &lt;BR /&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;y&lt;/SPAN&gt; &lt;/PRE&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Here's the neat part: you can dynamically adjust the game's difficulty by adding or removing queries with different conditions; no code changes required, just deploy new Drasi queries.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The traditional approach would have your agent constantly polling the data source checking these conditions: "Any player moves? How about now? Now?&amp;nbsp;Now?"&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="3"&gt;&lt;SPAN class="lia-text-color-15"&gt;The Workflow in Action&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The agent&amp;nbsp;operates&amp;nbsp;through a&amp;nbsp;LangGraph&amp;nbsp;based&amp;nbsp;state machine with two distinct phases:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;1. &lt;STRONG&gt;Setup Phase (First Run Only)&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Setup&amp;nbsp;queries&amp;nbsp;prompt&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt; &lt;/STRONG&gt;- Prompts the&amp;nbsp;AI model&amp;nbsp;to discover available&amp;nbsp;Drasi&amp;nbsp;queries&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Setup&amp;nbsp;queries&amp;nbsp;call&amp;nbsp;model&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; -&amp;nbsp;AI model&amp;nbsp;calls the&amp;nbsp;Drasi&amp;nbsp;tool with discover operation&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Setup&amp;nbsp;queries&amp;nbsp;tools&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; - Executes the&amp;nbsp;Drasi&amp;nbsp;tool calls to subscribe to relevant queries&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;This phase loops until the&amp;nbsp;AI model&amp;nbsp;has discovered and subscribed to all relevant queries&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;2. &lt;STRONG&gt;Main Seeking Loop (Continuous)&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Check&amp;nbsp;sensors&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; -&amp;nbsp;Consumes any&amp;nbsp;new&amp;nbsp;Drasi&amp;nbsp;notifications&amp;nbsp;from the&amp;nbsp;buffer into&amp;nbsp;the workflow state&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Evaluate&amp;nbsp;targets&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; - Uses&amp;nbsp;AI model&amp;nbsp;to parse sensor data and extract target positions&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Select&amp;nbsp;and&amp;nbsp;plan&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; - Selects closest target and plans path&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Execute&amp;nbsp;move&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt; &lt;/STRONG&gt;- Executes the next move via game API&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Loop continues indefinitely, reacting to new notifications&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
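&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The seeking-loop steps above can be sketched in a few lines of plain Python. This is a simplified stand-in for the LangGraph state machine, with hypothetical function names, not the repository's implementation: drain the sensor buffer, pick the closest target, step toward it.&lt;/SPAN&gt;&lt;/P&gt;

```python
import math

# Simplified sketch of the main seeking loop (plain Python, not LangGraph).
# Function names are illustrative, not the hide_and_seek example's code.

def check_sensors(buffer):
    """Drain buffered Drasi notifications into the workflow state."""
    targets, buffer[:] = list(buffer), []
    return targets

def select_target(npc_pos, targets):
    """Pick the closest detected player by Euclidean distance."""
    return min(
        targets,
        key=lambda t: math.dist(npc_pos, (t["x"], t["y"])),
        default=None,
    )

def next_move(npc_pos, target):
    """Step one cell toward the target on each axis; idle if no target."""
    if target is None:
        return npc_pos
    step = lambda a, b: a + (b > a) - (b < a)
    return (step(npc_pos[0], target["x"]), step(npc_pos[1], target["y"]))
```

&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In the real agent, the evaluate and plan steps are delegated to the AI model rather than hard-coded, but the control flow, consume notifications, choose a target, act, is the same.&lt;/SPAN&gt;&lt;/P&gt;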
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;No polling. No delays. No wasted resources checking positions that&amp;nbsp;don't&amp;nbsp;meet the detection criteria. Just pure, reactive intelligence flowing from meaningful data changes to agent actions. The continuous queries act as intelligent filters,&amp;nbsp;only alerting the agent when&amp;nbsp;relevant changes occur.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/drasi-project/langchain-drasi/tree/main/examples/hide_and_seek" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Click here for the full implementation&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="2"&gt;&lt;SPAN class="lia-text-color-15"&gt;The Bigger Picture: Change-Driven Architecture&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;What&amp;nbsp;we're&amp;nbsp;seeing with&amp;nbsp;Drasi&amp;nbsp;and ambient agents&amp;nbsp;isn't&amp;nbsp;just a new tool,&amp;nbsp;it's&amp;nbsp;a new architectural pattern for AI systems. The core idea is profound:&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;AI agents can react to the world changing, not just wait to be asked about it.&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;This pattern enables entirely new categories of applications that complement traditional chat interfaces.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The example might seem playful, but it&amp;nbsp;demonstrates&amp;nbsp;that&amp;nbsp;AI agents&amp;nbsp;can&amp;nbsp;perceive and react to their environment in real time. Today&amp;nbsp;it's&amp;nbsp;seeking players in a game. Tomorrow it could be:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Managing city traffic flows based on real-time sensor data&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Coordinating disaster response as situations evolve&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Optimizing&amp;nbsp;supply chains as demand patterns shift&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Protecting networks as threats&amp;nbsp;emerge&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The change detection infrastructure is here. The patterns are&amp;nbsp;emerging. The only question is: what will you build?&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="2"&gt;&lt;SPAN class="lia-text-color-15"&gt;Where to Go&amp;nbsp;from&amp;nbsp;Here&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Ready to dive deeper? Here are your next steps:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Explore&amp;nbsp;Drasi&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;: &lt;/STRONG&gt;Head to&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://drasi.io/" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;drasi.io&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; and discover the power of the change detection platform&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Try&amp;nbsp;langchain-drasi&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; Clone the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/drasi-project/langchain-drasi" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;GitHub repository&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and run the&amp;nbsp;Hide-and-Seek&amp;nbsp;example yourself&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Join the conversation&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; The space is new and needs diverse perspectives&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. Join the&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;community&amp;nbsp;on&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://discord.gg/AX7FneckBq" target="_blank"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Discord&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Let us know if&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;you&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;have&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;built ambient agents&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and&amp;nbsp;w&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;hat&amp;nbsp;challenges&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;you fac&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ed&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;with real-time change detection&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 03 Dec 2025 23:09:40 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/beyond-the-chat-window-how-change-driven-architecture-enables/ba-p/4475026</guid>
      <dc:creator>CollinBrian</dc:creator>
      <dc:date>2025-12-03T23:09:40Z</dc:date>
    </item>
    <item>
      <title>Project Pavilion Presence at KubeCon NA 2025</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/project-pavilion-presence-at-kubecon-na-2025/ba-p/4472904</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;KubeCon + CloudNativeCon NA took place in Atlanta, Georgia, from&amp;nbsp;10-13 November, and continued to&amp;nbsp;highlight the ongoing growth of the open source, cloud-native community. Microsoft participated throughout the event and supported several open source projects in the Project Pavilion. Microsoft’s involvement reflected our commitment to upstream collaboration, open governance, and enabling developers to build secure, scalable and portable applications across the ecosystem.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The Project Pavilion serves as a dedicated, vendor-neutral space on the KubeCon show floor reserved for CNCF projects. Unlike the corporate booths, it focuses entirely on open source collaboration. It brings maintainers and contributors together with end users for hands-on demos, technical discussions, and roadmap insights. This space helps attendees discover emerging technologies and understand how different projects fit into the cloud-native ecosystem. It plays a critical role for idea exchanges, resolving challenges and strengthening collaboration across CNCF approved technologies.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Why Our Presence Matters&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;KubeCon NA remains one of the most influential gatherings for developers and organizations shaping the future of cloud-native computing. For Microsoft, participating in the Project Pavilion helps advance our goals of:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Open governance and community-driven innovation&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Scaling vital cloud-native technologies&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Secure and sustainable operations&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Learning from practitioners and adopters&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Enabling developers across clouds and platforms&lt;/SPAN&gt; &lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Many of Microsoft’s products and cloud services are built on or aligned with CNCF and open-source technologies. Being active within these communities ensures that we are contributing back to the ecosystem we depend on and designing by collaborating with the community, not just for it.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Microsoft-Supported Pavilion Projects&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;H3&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Heading 3 Char"&gt;c&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Heading 3 Char"&gt;ontainerd&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Heading 3 Char"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Representative: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Wei Fu&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The &lt;/SPAN&gt;&lt;A href="https://www.cncf.io/projects/containerd/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;containerd&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; team engaged with project maintainers and ecosystem partners to explore solutions for improving AI model workflows. A key focus was the challenge of handling large OCI artifacts (often 500+ GiB) used in AI training workloads. Current image-pulling flows require containerd to fetch and fully unpack blobs, which significantly delays pod startup for large models.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Collaborators from Docker, NTT, and ModelPack discussed a non-unpacking workflow that would allow training workloads to consume model data directly. The team plans to prototype this behavior as an experimental feature in containerd. Additional discussions included updates related to nerdbox and next steps for the erofs snapshotter.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="color: rgb(30, 30, 30); font-size: 28px;" data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;C&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;opa&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;cetic&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30); font-size: 28px;" data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Representative: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Joshua Duffney &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The &lt;/SPAN&gt;&lt;A href="https://www.cncf.io/projects/copa/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;C&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;opa&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; booth attracted roughly 75 attendees, with strong representation from federal agencies and financial institutions, a sign of growing adoption in regulated industries. A lightning talk delivered at the conference significantly boosted traffic and engagement. Key feedback and insights included:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;High interest in customizable package update sources&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Demand for application-level patching beyond OS-level updates&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Need for clearer CI/CD integration patterns&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Expectations around in-cluster image patching&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Questions about runtime support, including Podman&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The conversations revealed several documentation gaps and feature opportunities that will inform Copa’s roadmap and future enablement efforts.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Drasi&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Representative: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Nandita Valsan&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;KubeCon NA 2025 marked &lt;/SPAN&gt;&lt;A href="https://www.cncf.io/projects/drasi/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Drasi&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;’s first in-person presence since its launch in October 2024 and its entry into the CNCF Sandbox in early 2025. With multiple kiosk slots, the team interacted with ~70 visitors across shifts. Engagement highlights included:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;New community members joining the Drasi Discord and starring GitHub repositories&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Meaningful discussions with observability and incident management vendors interested in change-driven architectures&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Positive reception to Aman Singh’s conference talk, which led attendees back to the booth for deeper technical conversations&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Post-event follow-ups are underway with several sponsors and partners to explore collaboration opportunities.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Flatcar Container Linux&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Representatives: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Sudhanva Huruli and Vamsi Kavuru &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The &lt;A href="https://www.cncf.io/projects/flatcar-container-linux/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Flatcar&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt; project had some fantastic conversations at the pavilion. Attendees were eager to learn about bare metal provisioning, GPU support for AI workloads, and how Flatcar’s fully automated build and test process keeps things simple and developer friendly. Questions around Talos vs. Flatcar and CoreOS sparked lively discussions, with the team emphasizing Flatcar’s usability and independence from an OS-level API. Interest came from government agencies and financial institutions, and the preview of Flatcar on AKS opened the door to deeper conversations about real-world adoption. The Project Pavilion proved to be the perfect venue for authentic, technical exchanges.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Flux&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Representatives: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Dipti Pai&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;The &lt;A class="lia-external-url" href="https://www.cncf.io/projects/flux/" target="_blank"&gt;Flux&lt;/A&gt; booth was active throughout all three days of the Project Pavilion, where Microsoft joined other maintainers to highlight new capabilities in &lt;STRONG data-start="323" data-end="335"&gt;Flux 2.7&lt;/STRONG&gt;, including improved multi-tenancy, enhanced observability, and streamlined cloud-native integrations. Visitors shared real-world GitOps experiences, both successes and challenges, which provided valuable insights for the project’s ongoing development. Microsoft’s involvement reinforced strong collaboration within the Flux community and continued commitment to advancing GitOps practices.&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Headlamp&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Representatives: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Joaquim Rocha, Will Case, and Oleksandr Dubenko &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.cncf.io/projects/headlamp/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Headlamp&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; had a booth for all three days of the conference, engaging with both longstanding users and first-time attendees. The increased visibility from becoming a Kubernetes sub-project was evident, with many attendees sharing their usage patterns across large tech organizations and smaller industrial teams.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The booth enabled maintainers to:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Gather insights into how teams use Headlamp in different environments&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Introduce Headlamp to new users discovering it via talks or hallway conversations&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Build stronger connections with the community and understand evolving needs&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Inspektor Gadget&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Representatives:&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Jose Blanquicet and Mauricio Vásquez Bernal&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Hosting a half-day kiosk session, &lt;/SPAN&gt;&lt;A href="https://www.cncf.io/projects/inspektor-gadget/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Inspektor Gadget&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; welcomed approximately 25 visitors. Attendees included newcomers interested in learning the basics and existing users looking for updates. The team showcased new capabilities, including the tcpdump gadget and Prometheus metrics export, and invited visitors to the upcoming contribfest to encourage participation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Istio&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Representatives: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Keith Mattix, Jackie Maertens, Steven Jin Xuan, Niranjan Shankar, and Mike Morris &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The &lt;/SPAN&gt;&lt;A href="https://www.cncf.io/projects/istio/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Istio&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; booth continued to attract a mix of experienced adopters and newcomers seeking guidance. Technical discussions focused on:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Enhancements to multicluster support in ambient mode&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Migration paths from sidecars to ambient&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Improvements in Gateway API availability and usage&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Performance and operational benefits for large-scale deployments&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Users, including several Azure customers, expressed appreciation for Microsoft’s sustained investment in Istio as part of their service mesh infrastructure.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Notary Project&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Representative: &lt;/STRONG&gt;Feynman Zhou and Toddy Mladenov&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The &lt;/SPAN&gt;&lt;A href="https://www.cncf.io/projects/notary-project/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Notary Project&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; booth saw significant interest from practitioners concerned with software supply chain security. Attendees discussed signing, verification workflows, and integrations with Azure services and Kubernetes clusters. The conversations will influence upcoming improvements across Notary Project and Ratify, reinforcing Microsoft’s commitment to secure artifacts and verifiable software distribution.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Open Policy Agent (OPA)&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt; - Gatekeeper&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Representative: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Jaydip Gabani &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The &lt;/SPAN&gt;&lt;A href="https://www.cncf.io/projects/open-policy-agent-opa/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;OPA/Gatekeeper&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; booth enabled maintainers to connect with both new and existing users to explore use cases around policy enforcement, Rego/CEL authoring, and managing large policy sets. Many conversations surfaced opportunities around simplifying best practices and reducing management complexity. The team also promoted participation in an ongoing Gatekeeper/OPA survey to guide future improvements.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;ORAS&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Representative: &lt;/STRONG&gt;Feynman Zhou and Toddy Mladenov&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.cncf.io/projects/oras/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;ORAS&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; engaged developers interested in OCI artifacts beyond container images which includes AI/ML models, metadata, backups, and multi-cloud artifact workflows. Attendees appreciated ORAS’s ecosystem integrations and found the booth examples useful for understanding how artifacts are tagged, packaged, and distributed. Many users shared how they leverage ORAS with Azure Container Registry and other OCI-compatible registries.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Radius&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Representative: &lt;/STRONG&gt;Zach Casper&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The &lt;/SPAN&gt;&lt;A href="https://radapp.io/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Radius&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; booth attracted the attention of platform engineers looking for ways to simplify their developer's experience while being able to enforce enterprise-grade infrastructure and security best practices. Attendees saw demos on deploying a database to Kubernetes and using managed databases from AWS and Azure without modifying the application deployment logic. They also saw a preview of Radius integration with GitHub Copilot enabling AI coding agents to autonomously deploy and test applications in the cloud.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Conclusion&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;KubeCon + CloudNativeCon North America 2025 reinforced the essential role of open source communities in driving innovation across cloud native technologies. Through the Project Pavilion, Microsoft teams were able to exchange knowledge with other maintainers, gather user feedback, and support projects that form foundational components of modern cloud infrastructure. Microsoft remains committed to building alongside the community and strengthening the ecosystem that powers so much of today’s cloud-native development.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For anyone interested in exploring or contributing to these open source efforts, please reach out directly to each project’s community to get involved, or contact Lexi Nadolski at &lt;/SPAN&gt;&lt;A href="mailto:lexinadolski@microsoft.com" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;lexinadolski@microsoft.com&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; for more information.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 04 Dec 2025 11:52:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/project-pavilion-presence-at-kubecon-na-2025/ba-p/4472904</guid>
      <dc:creator>lexinadolski</dc:creator>
      <dc:date>2025-12-04T11:52:33Z</dc:date>
    </item>
    <item>
      <title>Azure Linux: Driving Security in the Era of AI Innovation</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/azure-linux-driving-security-in-the-era-of-ai-innovation/ba-p/4471034</link>
      <description>&lt;P&gt;Microsoft is advancing cloud and AI innovation with a clear focus on security, quality, and responsible practices. At Ignite 2025, Azure Linux reflects that commitment. As Microsoft’s ubiquitous Linux OS, it powers critical services and serves as the hub for security innovation. This year’s announcements, Azure Linux with OS Guard public preview and GA of pod sandboxing, reinforce security as one of our core priorities, helping customers build and run workloads with confidence in an increasingly complex threat landscape.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Announcing OS Guard Public Preview&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We’re excited to announce the public preview of Azure Linux with&amp;nbsp;OS Guard at Ignite 2025! OS Guard delivers a hardened, immutable container host built on the FedRAMP-certified Azure Linux base image. It introduces a significantly streamlined footprint with approximately 100 fewer packages than the standard Azure Linux image, reducing the attack surface and improving performance. FIPS mode is enforced by default, ensuring compliance for regulated workloads right out of the box. Additional security features include &lt;EM&gt;dm-verity &lt;/EM&gt;for filesystem immutability, Trusted Launch backed by vTPM-secured keys, and seamless integration with AKS for container workloads. Built with upstream transparency and active Microsoft contributions, OS Guard provides a secure foundation for containerized applications while maintaining operational simplicity.&lt;/P&gt;
&lt;P&gt;During the preview period, code integrity and mandatory access control (SELinux) are enabled in &lt;STRONG&gt;audit mode&lt;/STRONG&gt;, allowing customers to validate policies and prepare for enforcement without impacting workloads.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;General Availability: Pod Sandboxing for stronger isolation on AKS&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We’re also announcing the GA of pod sandboxing on AKS, delivering stronger workload isolation for multi-tenant and regulated environments. Based on the open source Kata Containers project, pod sandboxing runs each pod inside its own lightweight virtual machine, providing VM-level isolation and a stronger security boundary than traditional containers.&amp;nbsp;&lt;/P&gt;
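&lt;P&gt;To make the isolation model concrete, below is a minimal sketch of a pod spec that opts into sandboxing. The &lt;EM&gt;runtimeClassName&lt;/EM&gt; value shown is the class documented for AKS pod sandboxing, and the container image is a placeholder; verify both against your own cluster before relying on them.&lt;/P&gt;
&lt;PRE&gt;apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  # Run this pod in its own lightweight VM via Kata Containers.
  # Confirm the class name with: kubectl get runtimeclass
  runtimeClassName: kata-mshv-vm-isolation
  containers:
    - name: app
      image: nginx  # placeholder workload image
&lt;/PRE&gt;
&lt;P&gt;Pods that omit the runtime class continue to run as ordinary containers, so sandboxing can be adopted per workload rather than cluster-wide.&lt;/P&gt;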
&lt;P&gt;&lt;STRONG&gt;Connect with us at Ignite&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Meet the Azure Linux team and see these innovations in action:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Ignite&lt;/STRONG&gt;: Join us at our breakout session (&lt;A href="https://ignite.microsoft.com/en-US/sessions/BRK144" target="_blank" rel="noopener"&gt;https://ignite.microsoft.com/en-US/sessions/BRK144&lt;/A&gt;) and visit the &lt;STRONG&gt;Linux on Azure Booth&lt;/STRONG&gt; for live demos and deep dives.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Session Type&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Session Code&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Session Name&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Date/Time (PST)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Breakout&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;BRK 143&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FBRK143%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183411566%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=saRzyhq7e0rOJ2Ety4BzVVM4%2B0q24jjvdUq9ZAfVjkU%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Optimizing performance, deployments, and security for Linux on Azure&lt;/A&gt;&lt;U&gt; &lt;/U&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Thu, Nov 20/ 1:00 PM – 1:45 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Breakout&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;BRK 144&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FBRK144%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183428958%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=LtuLZ2dR2%2BUA%2BzY5LLym3cAzk7%2BYCWaHYC2KDQ0jFtU%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Build, modernize, and secure AKS workloads with Azure Linux&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Wed, Nov 19/ 1:30 PM – 2:15 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Breakout&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;BRK 104&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FBRK104%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183476964%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=yFTKT4PIsOPcI5GSlg4JNz1dzzaKVT50KNL%2BNl%2FnH6w%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;From VMs and containers to AI apps with Azure Red Hat OpenShift&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Thu, Nov 20/ 8:30 AM – 9:15 AM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Theatre&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;THR 712&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FTHR712%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183496250%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=2Zlwh8t%2FBZGkI8jwyGGYf1co33Ht1ZciHOXoQSkM2H0%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Hybrid workload compliance from policy to practice on Azure&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Tue, Nov 18/ 3:15 PM – 3:45 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Theatre&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;THR 701&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FTHR701%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183516768%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=FoCbNZrb2TOFfKCenGTy9ua%2FaCyqV4j%2B7MFRlLk6zKA%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;From Container to Node: Building Minimal-CVE Solutions with Azure Linux&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Wed, Nov 19/ 3:30 PM – 4:00 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Lab&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Lab 505&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FLAB505%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183534572%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=W8GA0HDQT7QIS4fwCm%2FeUec3SU%2BaQmiCS1RKH%2BEVrpo%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Fast track your Linux and PostgreSQL migration with Azure Migrate&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Tue, Nov 18/ 4:30 PM – 5:45 PM PST&lt;/P&gt;
&lt;P&gt;Wed, Nov 19/ 3:45 PM – 5:00 PM PST&lt;/P&gt;
&lt;P&gt;Thu, Nov 20/ 9:00 AM – 10:15 AM PST&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Whether you’re migrating workloads, exploring security features, or looking to engage with our engineering team, we’re eager to connect and help you succeed with Azure Linux.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Resources to get started&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Linux OS Guard Overview &amp;amp; QuickStart: &lt;/STRONG&gt;&lt;A href="https://aka.ms/osguard" target="_blank" rel="noopener"&gt;https://aka.ms/osguard&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Pod Sandboxing Overview &amp;amp; QuickStart: &lt;/STRONG&gt;&lt;A href="https://aka.ms/podsandboxing" target="_blank" rel="noopener"&gt;https://aka.ms/podsandboxing&lt;/A&gt;&lt;/LI&gt;
&lt;LI style="font-weight: bold;"&gt;&lt;STRONG&gt;Azure Linux Documentation: &lt;A href="https://learn.microsoft.com/en-us/azure/azure-linux/" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/azure-linux/&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
</description>
      <pubDate>Tue, 18 Nov 2025 16:49:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/azure-linux-driving-security-in-the-era-of-ai-innovation/ba-p/4471034</guid>
      <dc:creator>Sudhanva</dc:creator>
      <dc:date>2025-11-18T16:49:02Z</dc:date>
    </item>
    <item>
      <title>From Policy to Practice: Built-In CIS Benchmarks on Azure - Flexible, Hybrid-Ready</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/from-policy-to-practice-built-in-cis-benchmarks-on-azure/ba-p/4467884</link>
      <description>&lt;P&gt;Security is more important than ever. The industry-standard for secure machine configuration is the Center for Internet Security (CIS) Benchmarks. These benchmarks provide consensus-based prescriptive guidance to help organizations harden diverse systems, reduce risk, and&amp;nbsp;&lt;STRONG data-complete="true" data-processed="true"&gt;streamline compliance with major regulatory frameworks and industry standards like NIST, HIPAA, and PCI DSS&lt;/STRONG&gt;. In our &lt;A href="https://techcommunity.microsoft.com/blog/linuxandopensourceblog/from-compliance-to-auto-remediation-azures-latest-linux-security-innovations/4297553" target="_blank" rel="noopener"&gt;previous post&lt;/A&gt;, we outlined our plans to improve the Linux server compliance and hardening experience on Azure and shared a vision for integrating CIS Benchmarks. Today, that vision has turned into reality.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;We're now announcing the next phase of this work: Center for Internet Security (CIS) Benchmarks are now available on Azure for all&amp;nbsp;&lt;/STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Azure endorsed distros, &lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;at no additional cost to Azure and Azure Arc customers.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;With today's announcement, you get access to the CIS Benchmarks on Azure with full parity to what’s published by the Center for Internet Security (&lt;A href="https://www.cisecurity.org/cis-benchmarks/" target="_blank" rel="noopener"&gt;CIS&lt;/A&gt;). You can adjust parameters or define exceptions, tailoring security to your needs and applying consistent controls across cloud, hybrid, and on-premises environments - without having to implement every control manually. Thanks to this flexible architecture, you can truly manage compliance as code.&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-style-hidden" border="0" style="width: 96.9444%; height: 296.375px; border-width: 0px;"&gt;&lt;colgroup&gt;&lt;col style="width: 50%" /&gt;&lt;col style="width: 50%" /&gt;&lt;/colgroup&gt;&lt;tbody&gt;&lt;tr style="height: 296.375px;"&gt;&lt;td style="height: 296.375px; border-width: 0px;"&gt;&lt;img /&gt;&lt;/td&gt;&lt;td style="height: 296.375px; border-width: 0px;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2&gt;How we achieve parity&lt;/H2&gt;
&lt;P&gt;To ensure accuracy and trust, we rely on and ingest &lt;STRONG&gt;CIS machine-readable Benchmark content&lt;/STRONG&gt; (OVAL/XCCDF files) as the source of truth. This guarantees that the controls and rules you apply in Azure match the official CIS specifications, reducing drift and ensuring compliance confidence.&lt;/P&gt;
&lt;H2&gt;What’s new under the hood&lt;/H2&gt;
&lt;P&gt;At the core of this update is &lt;A class="lia-external-url" href="https://github.com/Azure/azure-osconfig/tree/dev/src/modules/complianceengine" target="_blank" rel="noopener"&gt;azure-osconfig&lt;/A&gt;’s new compliance engine - a lightweight, open-source module developed by the Azure Core Linux team. It evaluates Linux systems directly against industry-standard benchmarks like CIS, supporting both audit and, in the future, auto-remediation. This enables accurate, scalable compliance checks across large Linux fleets. &lt;A class="lia-external-url" href="https://github.com/Azure/azure-osconfig/blob/dev/README.md" target="_blank" rel="noopener"&gt;Here&lt;/A&gt; you can read more about azure-osconfig.&lt;/P&gt;
&lt;H3&gt;Dynamic rule evaluation&lt;/H3&gt;
&lt;P&gt;The new compliance engine supports simple fact-checking operations, logical combinations of those facts (e.g., anyOf, allOf), and Lua-based scripting, which makes it possible to express the complex checks required by the CIS Critical Security Controls - all evaluated natively without external scripts.&lt;/P&gt;
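&lt;P&gt;To illustrate how anyOf/allOf rule trees over simple fact checks compose into a richer policy, here is a minimal Python sketch. The rule schema and fact names below are hypothetical, chosen for illustration; the real compliance engine defines its own format.&lt;/P&gt;

```python
# Illustrative sketch of anyOf/allOf rule-tree evaluation. The rule schema
# and fact names are hypothetical, not the engine's actual format.

def evaluate(rule, facts):
    """Recursively evaluate a rule tree against collected system facts."""
    if "allOf" in rule:
        return all(evaluate(r, facts) for r in rule["allOf"])
    if "anyOf" in rule:
        return any(evaluate(r, facts) for r in rule["anyOf"])
    # Leaf node: a simple fact check, e.g. an sshd configuration value.
    return facts.get(rule["fact"]) == rule["equals"]

# Compliant if root login is disabled outright, or key-only root login is
# combined with password authentication being off.
rule = {
    "anyOf": [
        {"fact": "sshd.PermitRootLogin", "equals": "no"},
        {"allOf": [
            {"fact": "sshd.PermitRootLogin", "equals": "prohibit-password"},
            {"fact": "sshd.PasswordAuthentication", "equals": "no"},
        ]},
    ]
}
facts = {"sshd.PermitRootLogin": "prohibit-password",
         "sshd.PasswordAuthentication": "no"}
print(evaluate(rule, facts))  # this configuration satisfies the rule
```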
&lt;H3&gt;Scalable architecture for large fleets&lt;/H3&gt;
&lt;P&gt;When an assignment is created, the Azure control plane instructs the machine to pull the latest policy package via the Machine Configuration agent. Azure-osconfig’s compliance engine is integrated into the package as a lightweight library and is called by the &lt;A href="https://learn.microsoft.com/en-us/azure/governance/machine-configuration/overview" target="_blank" rel="noopener"&gt;Machine Configuration&lt;/A&gt; agent for evaluation, which happens every 15 to 30 minutes. This ensures near real-time compliance state without overwhelming resources and enables consistent evaluation across thousands of VMs and Azure Arc-enabled servers.&lt;/P&gt;
&lt;H3&gt;Future-ready for remediation and enforcement&lt;/H3&gt;
&lt;P&gt;While the Public Preview starts with audit-only mode, the roadmap includes per-rule remediation and enforcement using technologies like eBPF for kernel-level controls. This will allow proactive prevention of configuration drift and runtime hardening at scale. Please reach out if you are interested in auto-remediation or enforcement.&lt;/P&gt;
&lt;H3&gt;Extensibility beyond CIS Benchmarks&lt;/H3&gt;
&lt;P&gt;The architecture was designed to support other security and compliance standards as well and isn’t limited to CIS Benchmarks. The compliance engine is modular, and we plan to extend the platform with STIG and other relevant industry benchmarks. This positions Azure as a single control plane from which you can manage your compliance without duplicating effort elsewhere.&lt;/P&gt;
&lt;H2&gt;Collaboration with the CIS&lt;/H2&gt;
&lt;P&gt;This milestone reflects a close collaboration between &lt;STRONG&gt;Microsoft and the CIS&lt;/STRONG&gt; to bring industry-standard security guidance into Azure as a built-in capability. Our shared goal is to make &lt;STRONG&gt;cloud-native compliance practical and consistent&lt;/STRONG&gt;, while giving customers the flexibility to meet their unique requirements. We are committed to continuously supporting new Benchmark releases, expanding coverage with new distributions and easing adoption through built-in workflows, such as moving from your current Benchmark version to a new version while preserving your custom configurations.&lt;/P&gt;
&lt;H3&gt;Certification and trust&lt;/H3&gt;
&lt;P&gt;We can proudly announce that azure-osconfig has met all the requirements and is&lt;A href="https://www.cisecurity.org/partner/microsoft" target="_blank" rel="noopener"&gt; &lt;STRONG&gt;officially certified&lt;/STRONG&gt;&lt;/A&gt; by the CIS for Benchmark assessment, so you can trust compliance results as authoritative. Minor benchmark updates will be applied automatically, while major versions will be released separately. We will include workflows to help migrate customizations seamlessly across versions.&lt;/P&gt;
&lt;H1&gt;Key Highlights&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/cisonazure" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Built-in CIS Benchmarks&lt;/STRONG&gt; for Azure Endorsed Linux distributions&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Full parity with official CIS Benchmarks content and&amp;nbsp;&lt;/STRONG&gt;&lt;A href="https://www.cisecurity.org/partner/microsoft" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;certified by the CIS for Benchmark Assessment&lt;/STRONG&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Flexible configuration&lt;/STRONG&gt;: adjust parameters, define exceptions, tune severity&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Hybrid support&lt;/STRONG&gt;: enforce the same baseline across Azure, on-prem, and multi-cloud with Azure Arc&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Reporting format in CIS tooling style&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;Supported use cases&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Certified CIS Benchmarks for all &lt;/STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Azure Endorsed Distros&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt; - Audit only (L1/L2 server profiles)&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Hybrid / On-premises and other cloud machines &lt;/STRONG&gt;with &lt;STRONG&gt;Azure Arc&lt;/STRONG&gt; for the supported distros&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Compliance as Code &lt;/STRONG&gt;(example via Github -&amp;gt; Azure OIDC auth and API integration)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Compatible with &lt;/STRONG&gt;&lt;A href="https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workbooks/GuestConfiguration%20Result" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;GuestConfig workbook&lt;/STRONG&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;What’s next?&lt;/H1&gt;
&lt;P&gt;Our next mission is to bring the previously announced auto-remediation capability into this experience, expand the distribution coverage and elevate our workflows even further. We’re focused on empowering you to resolve issues while honoring the unique operational complexity of your environments. Stay tuned!&lt;/P&gt;
&lt;H1&gt;Get Started&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/cisonazure" target="_blank" rel="noopener"&gt;Documentation link for this capability&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Enable CIS Benchmarks in &lt;A href="https://learn.microsoft.com/azure/governance/machine-configuration/overview" target="_blank" rel="noopener"&gt;Machine Configuration&lt;/A&gt;: select the “Official Center for Internet Security (CIS) Benchmarks for Linux Workloads” package, select the distributions for your assignment, and customize as needed.&lt;/LI&gt;
&lt;LI&gt;If you want an additional distribution supported or have feedback on azure-osconfig, please open an Azure support case or a &lt;A href="https://github.com/Azure/azure-osconfig/issues" target="_blank" rel="noopener"&gt;GitHub issue&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Relevant Ignite 2025 session&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://ignite.microsoft.com/en-US/sessions/THR712" target="_blank" rel="noopener"&gt;Hybrid workload compliance from policy to practice on Azure&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Connect with us at Ignite &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Meet the Linux team and stop by the Linux on Azure booth to see these innovations in action:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Session Type&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Session Code&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Session Name&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Date/Time (PST)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Theatre&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;THR 712&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FTHR712%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183496250%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=2Zlwh8t%2FBZGkI8jwyGGYf1co33Ht1ZciHOXoQSkM2H0%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Hybrid workload compliance from policy to practice on Azure&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Tue, Nov 18/ 3:15 PM – 3:45 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Breakout&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;BRK 143&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FBRK143%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183411566%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=saRzyhq7e0rOJ2Ety4BzVVM4%2B0q24jjvdUq9ZAfVjkU%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Optimizing performance, deployments, and security for Linux on Azure&lt;/A&gt;&lt;U&gt; &lt;/U&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Thu, Nov 20/ 1:00 PM – 1:45 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Breakout&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;BRK 144&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FBRK144%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183428958%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=LtuLZ2dR2%2BUA%2BzY5LLym3cAzk7%2BYCWaHYC2KDQ0jFtU%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Build, modernize, and secure AKS workloads with Azure Linux&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Wed, Nov 19/ 1:30 PM – 2:15 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Breakout&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;BRK 104&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FBRK104%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183476964%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=yFTKT4PIsOPcI5GSlg4JNz1dzzaKVT50KNL%2BNl%2FnH6w%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;From VMs and containers to AI apps with Azure Red Hat OpenShift&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Thu, Nov 20/ 8:30 AM – 9:15 AM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Theatre&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;THR 701&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FTHR701%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183516768%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=FoCbNZrb2TOFfKCenGTy9ua%2FaCyqV4j%2B7MFRlLk6zKA%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;From Container to Node: Building Minimal-CVE Solutions with Azure Linux&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Wed, Nov 19/ 3:30 PM – 4:00 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Lab&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Lab 505&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fignite.microsoft.com%2Fen-US%2Fsessions%2FLAB505%3Fsource%3Dsessions&amp;amp;data=05%7C02%7Cshreyabaheti%40microsoft.com%7C34d7ea8102854579367008de1c671fe8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638979427183534572%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;amp;sdata=W8GA0HDQT7QIS4fwCm%2FeUec3SU%2BaQmiCS1RKH%2BEVrpo%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Fast track your Linux and PostgreSQL migration with Azure Migrate&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Tue, Nov 18/ 4:30 PM – 5:45 PM PST&lt;/P&gt;
&lt;P&gt;Wed, Nov 19/ 3:45 PM – 5:00 PM PST&lt;/P&gt;
&lt;P&gt;Thu, Nov 20/ 9:00 AM – 10:15 AM PST&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
</description>
      <pubDate>Tue, 18 Nov 2025 08:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/from-policy-to-practice-built-in-cis-benchmarks-on-azure/ba-p/4467884</guid>
      <dc:creator>pallakatos</dc:creator>
      <dc:date>2025-11-18T08:00:00Z</dc:date>
    </item>
    <item>
      <title>Innovations and Strengthening Platforms Reliability Through Open Source</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/innovations-and-strengthening-platforms-reliability-through-open/ba-p/4468172</link>
      <description>&lt;P&gt;The Linux Systems Group (LSG) at Microsoft is the team building OS innovations in Azure enabling secure and high-performance platforms that power millions of workloads worldwide. From providing the OS for &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/azure-boost/overview" target="_blank" rel="noopener"&gt;Boost&lt;/A&gt;, optimizing Linux kernels for hyperscale environments or contributing to open-source projects like Rust-VMM and Cloud Hypervisor, LSG ensures customers get the best of Linux on Azure. Our work spans performance tuning, security hardening, and feature enablement for new silicon enablement and cutting-edge technologies, such as Confidential Computing, ARM64 and Nvidia Grace Blackwell all while strengthening the global open-source ecosystem. Our philosophy is simple: we develop in the open and upstream first, integrating improvements into our products after they’ve been accepted by the community.&lt;/P&gt;
&lt;P&gt;At Ignite we want to highlight a few key open-source contributions from 2025 that form the foundation for many of the product offerings and innovations you will see throughout the week. We helped bring seamless kernel update features (Kexec HandOver) to the Linux kernel, improved networking paths for AI platforms, strengthened container orchestration and security efforts, and shared engineering insights with global communities and conferences. This work reflects Microsoft’s long-standing commitment to open source, grounded in active upstream participation and close collaboration with partners across the ecosystem. Our engineers work side by side with maintainers, Linux distro partners, and silicon providers to ensure contributions land where they help the most, from kernel updates to improvements that support new silicon platforms.&lt;/P&gt;
&lt;H3&gt;&lt;SPAN class="lia-text-color-10"&gt;Linux Kernel Contributions&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;Enabling Seamless Kernel Updates:&lt;/STRONG&gt; Persistent uptime for critical services is a top priority. This year, Microsoft engineer Mike Rapoport successfully merged Kexec HandOver (KHO) into Linux 6.16 &lt;A href="https://lwn.net/Articles/1033364/" target="_blank" rel="noopener"&gt;&lt;SUP&gt;1&lt;/SUP&gt;&lt;/A&gt;. KHO is a kernel mechanism that preserves memory state across a reboot (kexec), allowing systems to carry over important data when loading a new kernel. In practice, this means Microsoft can apply security patches or kernel updates to the Azure platform and customer VMs without rebooting, or with significantly reduced downtime. It’s a technical achievement with real impact: cloud providers and enterprises can update Linux on the fly, enhancing security and reliability for services that demand continuous availability.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Optimizing Network Drivers for AI Scale: &lt;/STRONG&gt;Massive AI models require massive bandwidth. Working closely with partners deploying large AI workloads on Azure, LSG engineers delivered a breakthrough in Linux networking performance: the team rearchitected the receive path of the MANA network driver (used by our smart NICs) to eliminate wasted memory and enable recycling of buffers.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;2x higher&lt;/STRONG&gt; effective network throughput on 64 KB page systems&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;35% better&amp;nbsp;&lt;/STRONG&gt;memory efficiency for RX buffers&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;15% higher&lt;/STRONG&gt; throughput and roughly half the memory use even on standard x86_64 VMs&lt;/LI&gt;
&lt;/UL&gt;
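&lt;P&gt;To see why receive-buffer handling matters so much on 64 KB page systems, consider a back-of-the-envelope calculation. The sizes below are illustrative assumptions, not measurements from the MANA driver; the point is the utilization gap between dedicating a whole page per RX buffer and packing page-pool fragments.&lt;/P&gt;

```python
# Back-of-the-envelope illustration (assumed sizes, not actual MANA driver
# numbers): on a 64 KB page kernel, one whole page per RX buffer wastes
# most of it, while page-pool fragments pack many buffers per page.
PAGE_SIZE = 64 * 1024
RX_BUF_SIZE = 4 * 1024   # assume a ~4 KB buffer for a standard MTU frame

# One buffer per page: utilization is buffer size over page size.
one_per_page_util = RX_BUF_SIZE / PAGE_SIZE

# Page-pool fragments: pack as many buffers as fit into each page.
frags_per_page = PAGE_SIZE // RX_BUF_SIZE
fragmented_util = (frags_per_page * RX_BUF_SIZE) / PAGE_SIZE

print(f"one buffer per page:  {one_per_page_util:.0%} of RX memory used")
print(f"fragmented page pool: {fragmented_util:.0%} of RX memory used")
print(f"memory for 1024 buffers drops from "
      f"{1024 * PAGE_SIZE // 2**20} MiB to "
      f"{(1024 // frags_per_page) * PAGE_SIZE // 2**20} MiB")
```

&lt;P&gt;Under these assumed sizes, a one-buffer-per-page scheme uses only about 6% of the RX memory it allocates, which is why fragment recycling yields large memory-efficiency and throughput gains on large-page systems.&lt;/P&gt;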
&lt;P&gt;&lt;SPAN class="lia-text-color-18"&gt;&lt;STRONG&gt;References &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;MANA RX optimization patch: &lt;A href="https://lkml.org/lkml/2025/7/30/856?" target="_blank" rel="noopener"&gt;net: mana: Use page pool fragments for RX buffers&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Linux Plumbers 2025 talk: &lt;A href="https://lpc.events/event/19/contributions/2276/?" target="_blank" rel="noopener"&gt;Optimizing traffic receive (RX) path in Linux kernel MANA Driver for larger PAGE_SIZE systems&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Improving Reliability for Cloud Networking:&lt;/STRONG&gt; In addition to raw performance, reliability got a boost. One critical fix addressed a race condition in the Hyper-V hv_netvsc driver that sometimes caused packet loss when a VM’s network channel initialized. By patching this upstream, we improved network stability for all Linux guests running on Hyper-V, keeping customer VMs running smoothly during dynamic operations like scale-out or live migration. Our engineers also upstreamed numerous improvements to Hyper-V device drivers (covering storage, memory, and general virtualization). We fixed interrupt handling bugs, eliminated outdated patches, and resolved issues affecting ARM64 architectures. Each of these fixes was contributed to the mainline kernel, ensuring that any Linux distribution running on Hyper-V or Azure benefits from the enhanced stability and performance.&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-18"&gt;&lt;STRONG&gt;References &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Upstream fix: hv_netvsc race on early receive events: kernel.org commit referenced by Ubuntu bug &lt;A href="https://bugs.launchpad.net/bugs/2127705" target="_blank" rel="noopener"&gt;Launchpad&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Ubuntu Azure backport write-up: Bug 2127705 – hv_netvsc: fix loss of early receive events from host during channel open &lt;A href="https://bugs.launchpad.net/bugs/2127705" target="_blank" rel="noopener"&gt;Launchpad&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Older background on hv_netvsc packet-loss issues: &lt;A href="https://bugzilla.kernel.org/show_bug.cgi?id=81061" target="_blank" rel="noopener"&gt;kernel.org bug 81061&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Strengthening Core Linux Infrastructure:&lt;/STRONG&gt; Several of our contributions targeted fundamental kernel subsystems that all Linux users rely on. For example, we led significant enhancements to the Virtual File System (VFS) layer, reworking how Linux handles process core dumps and expanding file management capabilities. These changes improve how Linux handles files and memory under the hood, benefiting scenarios from large-scale cloud storage to local development. We also continued upstream efforts to support advanced virtualization features. Our team is actively upstreaming the mshv_vtl driver (for managing secure partitions on Hyper-V) and improving Linux’s compatibility with nested virtualization on Azure’s Microsoft Hypervisor (MSHV). All this low-level work adds up to a more robust and feature-rich kernel for everyone.&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-18"&gt;&lt;STRONG&gt;References &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Example VFS coredump work: &lt;A href="https://cgit.freedesktop.org/drm-tip/commit/fs/coredump.c?id=7bbb05dbea38e56d9f6ac7ac1040f93b0808cbce" target="_blank" rel="noopener"&gt;split file coredumping into coredump_file()&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;mshv_vtl driver patchset: &lt;A href="https://lwn.net/Articles/1044010/" target="_blank" rel="noopener"&gt;Drivers: hv: Introduce new driver – mshv_vtl (v10)&lt;/A&gt; and &lt;A href="https://patchew.org/linux/20251113044149.3710877-1-namjain%40linux.microsoft.com/" target="_blank" rel="noopener"&gt;v12 patch series on patchew&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Bolstering Linux Security in the Cloud:&lt;/STRONG&gt; Security has been a major thread across our upstream contributions. One focus area is making container workloads easier to verify and control. Microsoft engineers proposed an approach for code integrity in containers built on containerd’s EROFS snapshotter, shared as an open RFC in the containerd project (&lt;A href="https://github.com/containerd/containerd/issues/12081" target="_blank" rel="noopener"&gt;GitHub&lt;/A&gt;). The idea is to use read-only images plus integrity metadata so that container file systems can be measured and checked against policy before they run.&lt;/P&gt;
&lt;P&gt;We also engaged deeply with industry partners on kernel vulnerability handling. Through the Cloud-LTS Linux CVE workgroup, cloud providers and vendors collaborate in the open on a shared analysis of Linux CVEs. The group maintains a public repository that records how each CVE affects various kernels and configurations, which helps reduce duplicated triage work and speeds up security responses.&lt;/P&gt;
&lt;P&gt;On the platform side, our engineers contributed fixes to the OP-TEE secure OS used in trusted execution and secure-boot scenarios, making sure that the cryptographic primitives required by Azure’s Linux boot flows behave correctly across supported devices. These changes help ensure that Linux verified boot chains remain reliable on Azure hardware.&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-18"&gt;&lt;STRONG&gt;References &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;containerd RFC: &lt;A href="https://github.com/containerd/containerd/issues/12081" target="_blank" rel="noopener"&gt;Code Integrity for OCI/containerd Containers using erofs-snapshotter&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Cloud-LTS public CVE analysis repo: &lt;A href="https://github.com/cloud-lts/linux-cve-analysis" target="_blank" rel="noopener"&gt;cloud-lts/linux-cve-analysis&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Linux CVE workgroup session at Linux Plumbers 2025: &lt;A href="https://lpc.events/event/19/contributions/2102/" target="_blank" rel="noopener"&gt;Linux CVE workgroup&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;OP-TEE project docs: &lt;A href="https://optee.readthedocs.io/" target="_blank" rel="noopener"&gt;OP-TEE documentation&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;SPAN class="lia-text-color-10"&gt;Developer Tools &amp;amp; Experience&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;Smoother OS Management with systemd:&lt;/STRONG&gt; Ensuring Linux works seamlessly at Azure scale starts with the core init system, and systemd saw important improvements from our team this year. LSG contributed and merged upstream support for disk quota controls in systemd services. With new directives (like StateDirectoryQuota and CacheDirectoryQuota), administrators can easily enforce storage limits for service data, which is especially useful in scenarios like IoT devices with eMMC storage on Azure’s custom SoCs. LSG also added an auto-reload feature to systemd-journald, allowing log configuration changes to apply at runtime without restarting the logging service. These improvements, now part of upstream systemd, help Azure and other Linux environments roll out configuration updates and maintenance with minimal disruption to running services.&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-18"&gt;&lt;STRONG&gt;References&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;systemd quota directives: &lt;A href="https://www.freedesktop.org/software/systemd/man/systemd.exec.html" target="_blank" rel="noopener"&gt;systemd.exec(5) – StateDirectoryQuota and related options&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;systemd journald reload behavior: &lt;A href="https://man.archlinux.org/man/systemd-journald.service.8.en" target="_blank" rel="noopener"&gt;systemd-journald.service(8)&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
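&lt;P&gt;As an illustration, the new quota directives slot into an ordinary unit file. The sketch below is a hypothetical service definition: the directive names follow systemd.exec(5), but the service name and the limit values are made up for this example.&lt;/P&gt;

```ini
# Hypothetical unit: caps on-disk state and cache for a telemetry service.
# StateDirectoryQuota / CacheDirectoryQuota require a systemd release that
# includes the quota directives; the values here are illustrative.
[Unit]
Description=Telemetry collector with bounded storage

[Service]
ExecStart=/usr/bin/telemetry-collector
StateDirectory=telemetry
StateDirectoryQuota=200M
CacheDirectory=telemetry
CacheDirectoryQuota=50M

[Install]
WantedBy=multi-user.target
```

If a service exceeds its quota, writes to the managed directory fail rather than filling the device, which is the failure mode you want on small eMMC parts.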
&lt;P&gt;&lt;STRONG&gt;Empowering Linux Quality at Scale:&lt;/STRONG&gt; Running Linux on Azure at global scale requires extensive, repeatable testing. Microsoft continues to invest in LISA (Linux Integration Services Automation), an open-source framework that validates Linux kernels and distributions on Azure and other Hyper-V–based environments.&lt;/P&gt;
&lt;P&gt;Over the past year we expanded LISA with:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;New stress tests for rapid reboot sequences to catch elusive timing bugs&lt;/LI&gt;
&lt;LI&gt;Better failure diagnostics to make complex issues easier to root-cause&lt;/LI&gt;
&lt;LI&gt;Extended coverage for ARM64 scenarios and technologies like InfiniBand networking&lt;/LI&gt;
&lt;LI&gt;Integration of Azure VM SKU metadata and policy checks so that image validation can automatically confirm conformance to Azure requirements&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These changes help us qualify new kernels, distributions, and VM SKUs before they are shipped to customers. Because LISA is open source, partners and Linux vendors can run the same tests and share results, which raises quality across the ecosystem.&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-18"&gt;&lt;STRONG&gt;References &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;PRE&gt;LISA GitHub repo: &lt;A href="https://github.com/microsoft/lisa" target="_blank" rel="noopener"&gt;microsoft/lisa&lt;/A&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;PRE&gt;LISA documentation: &lt;A href="https://mslisa.readthedocs.io/" target="_blank" rel="noopener"&gt;LISA documentation&lt;/A&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;SPAN class="lia-text-color-10"&gt;Community Engagement and Leadership&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;Sharing Knowledge Globally&lt;/STRONG&gt;: Open-source contribution is not just about code; it’s about people and knowledge exchange. Our team members took active roles in community events worldwide, reflecting Microsoft’s growing leadership in the Linux community. We were proud to be a Platinum Sponsor of the inaugural Open Source Summit India 2025 in Hyderabad, where LSG engineers served on the program committee and hosted technical sessions. At Linux Security Summit Europe 2025, Microsoft’s security experts shaped the agenda as program committee members, delivered talks (such as “The State of SELinux”), and led panel discussions alongside colleagues from Intel, Arm, and others. And in Paris at Kernel Recipes 2025, our subject-matter experts shared kernel insights with fellow developers. By engaging in these events, Microsoft not only contributes code but also helps guide the conversation on the future of Linux. These relationships and public interactions build mutual trust and ensure that we remain closely aligned with community priorities.&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-18"&gt;&lt;STRONG&gt;References&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;PRE&gt;Event: &lt;A href="https://events.linuxfoundation.org/open-source-summit-india/" target="_blank" rel="noopener"&gt;Open Source Summit India 2025 – Linux Foundation&lt;/A&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;PRE&gt;Paul Moore’s talk archive: &lt;A href="https://www.paul-moore.com/presentations" target="_blank" rel="noopener"&gt;LSS-EU 2025&lt;/A&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;PRE&gt;Conference: &lt;A href="https://kernel-recipes.org/en/2025/" target="_blank" rel="noopener"&gt;Kernel Recipes 2025&lt;/A&gt; and &lt;A href="https://kernel-recipes.org/en/2025/schedule" target="_blank" rel="noopener"&gt;Kernel Recipes 2025 schedule&lt;/A&gt;&lt;/PRE&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;SPAN class="lia-text-color-10"&gt;Closing Thoughts&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;Microsoft’s long-term commitment to open source remains strong, and the Linux Systems Group will continue contributing upstream, collaborating across the industry, and supporting the upstream communities that shape the technologies we rely on. Our work begins in upstream projects such as the Linux kernel, Kubernetes, and systemd, where improvements are shared openly before they reach Azure. The progress highlighted in this blog was made possible by the wider Linux community whose feedback, reviews, and shared ideas help refine every contribution. As we move ahead, we welcome maintainers, developers, and enterprise teams to engage with our projects, offer input, and collaborate with us. We will continue contributing code, sharing knowledge, and supporting the open-source technologies that power modern computing, working with the community to strengthen the foundation and shape a future that benefits everyone.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;References &amp;amp; Resources&lt;/STRONG&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/blog/microsofts-open-source-journey-from-20000-lines-of-linux-code-to-ai-at-global-scale/" target="_blank" rel="noopener"&gt;Microsoft’s Open-Source Journey – Azure Blog&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/linux-and-open-source-on-azure-quarterly-update-february-2025/ba-p/4382722" target="_blank" rel="noopener"&gt;Linux and Open Source on Azure Quarterly Update – February 2025&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/cloud-hypervisor/cloud-hypervisor" target="_blank" rel="noopener"&gt;Cloud Hypervisor Project&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/rust-vmm/community" target="_blank" rel="noopener"&gt;Rust-VMM Community&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/microsoft/lisa" target="_blank" rel="noopener"&gt;Microsoft LISA (Linux Integration Services Automation) Repository&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/cloud-lts/linux-cve-analysis" target="_blank" rel="noopener"&gt;Cloud-LTS Linux CVE Analysis Project&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 18 Nov 2025 06:42:34 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/innovations-and-strengthening-platforms-reliability-through-open/ba-p/4468172</guid>
      <dc:creator>KashanK</dc:creator>
      <dc:date>2025-11-18T06:42:34Z</dc:date>
    </item>
    <item>
      <title>Linux on Azure at Microsoft Ignite 2025: What’s New, What to Attend, and Where to Find Us</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/linux-on-azure-at-microsoft-ignite-2025-what-s-new-what-to/ba-p/4470685</link>
      <description>&lt;P&gt;Microsoft Ignite 2025 is almost here, and we’re heading back to San Francisco from November 17-21 with a full digital experience for those joining online. Every year, Ignite brings together IT pros, developers, security teams, and technology leaders from around the world to explore the future of cloud, AI, and infrastructure. This year, Linux takes center stage in a big way.&lt;/P&gt;
&lt;P&gt;From new security innovations in Azure Linux to deeper AKS modernization capabilities and hands-on learning opportunities, Ignite 2025 is packed with content for anyone building, running, or securing Linux-based workloads in Azure. Below is your quick guide to the biggest Linux announcements and the must-see sessions.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Major Linux Announcements at Ignite 2025&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;STRONG&gt;Public Preview: Built-in CIS Benchmarks for Azure Endorsed Linux Distributions&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;CIS Benchmarks are now integrated directly into Azure Machine Configuration, giving you automated and customizable compliance monitoring across Azure, hybrid, and on-prem environments. This makes it easier to continuously govern your Linux estate at scale with no external tooling required.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Public Preview: Azure Linux OS Guard&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Linux OS Guard introduces a hardened, immutable Linux container host for AKS with FIPS mode enforced by default, a reduced attack surface, and tight AKS integration. It is ideal for highly regulated or sensitive workloads and brings stronger default security with less operational complexity.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;General Availability: Pod Sandboxing for AKS (Kata Containers)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Pod Sandboxing with fully managed Kata Containers is now GA, delivering VM-level isolation for AKS workloads. This provides stronger separation of CPU, memory, and networking and is well-suited for multi-tenant applications or organizations with strict compliance boundaries.&lt;/P&gt;
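&lt;P&gt;In practice, opting a workload into Pod Sandboxing is a one-line change to the pod spec. The sketch below assumes an AKS node pool configured for Kata Containers; the runtime class name follows the AKS Pod Sandboxing documentation, and the pod and image names are placeholders.&lt;/P&gt;

```yaml
# Illustrative pod spec: runtimeClassName opts this pod into VM-level
# isolation via Kata Containers on a suitably configured AKS node pool.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app      # placeholder name
spec:
  runtimeClassName: kata-mshv-vm-isolation
  containers:
    - name: app
      image: nginx         # placeholder image
```

Pods without the runtime class continue to run as ordinary containers on the same cluster, so sandboxing can be adopted per workload rather than cluster-wide.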
&lt;H4&gt;&lt;STRONG&gt;Linux Sessions at Ignite&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Whether you are optimizing performance, modernizing with containers, or exploring new security scenarios, there is something for every Linux practitioner.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Breakout Sessions&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-style-solid" border="1" style="width: 79.1667%; border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Session Code&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Session Name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Date and Time (PST)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;BRK143&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://ignite.microsoft.com/en-US/sessions/BRK143?source=sessions" target="_blank" rel="noopener"&gt;Optimizing performance, deployments, and security for Linux on Azure&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Thu Nov 20, 1:00 PM to 1:45 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;BRK144&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://ignite.microsoft.com/en-US/sessions/BRK144?source=sessions" target="_blank" rel="noopener"&gt;Build, modernize, and secure AKS workloads with Azure Linux&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Wed Nov 19, 1:30 PM to 2:15 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;BRK104&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://ignite.microsoft.com/en-US/sessions/BRK104?source=sessions" target="_blank" rel="noopener"&gt;From VMs and containers to AI apps with Azure Red Hat OpenShift&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Thu Nov 20, 8:30 AM to 9:15 AM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;BRK137&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://ignite.microsoft.com/en-US/sessions/BRK137?source=sessions" target="_blank" rel="noopener"&gt;Nasdaq Boardvantage: AI-driven governance on PostgreSQL and AI Foundry&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Wed Nov 19, 11:30 AM to 12:15 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;STRONG&gt;Theatre Sessions&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-style-solid" border="1" style="width: 79.2593%; border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Session Code&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Session Name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Date and Time (PST)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;THR712&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://ignite.microsoft.com/en-US/sessions/THR712?source=sessions" target="_blank" rel="noopener"&gt;Hybrid workload compliance from policy to practice on Azure&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Tue Nov 18, 3:15 PM to 3:45 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;THR701&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;A href="https://ignite.microsoft.com/en-US/sessions/THR701?source=sessions" target="_blank" rel="noopener"&gt;From Container to Node: Building Minimal-CVE Solutions with Azure Linux&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Wed Nov 19, 3:30 PM to 4:00 PM&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;STRONG&gt;Hands-on Lab&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Lab 505:&lt;/STRONG&gt; &lt;A href="https://ignite.microsoft.com/en-US/sessions/LAB505?source=sessions" target="_blank" rel="noopener"&gt;Fast track your Linux and PostgreSQL migration with Azure Migrate&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Tue Nov 18, 4:30 PM to 5:45 PM&lt;BR /&gt;Wed Nov 19, 3:45 PM to 5:00 PM&lt;BR /&gt;Thu Nov 20, 9:00 AM to 10:15 AM&lt;/P&gt;
&lt;P&gt;This interactive lab helps you assess, plan, and execute Linux and PostgreSQL migrations at scale using Azure Migrate’s end-to-end tooling.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Meet the Linux on Azure Team at Ignite&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;If you are attending in person, come say hello. Visit the Linux on Azure Expert Meetup stations inside the Microsoft Hub. You can ask questions directly to Microsoft’s Linux engineering and product experts, explore demos across Azure Linux, compliance, and migration, and get recommendations tailored to your workloads. We always love meeting customers and partners.&lt;/P&gt;
</description>
      <pubDate>Mon, 17 Nov 2025 18:55:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/linux-on-azure-at-microsoft-ignite-2025-what-s-new-what-to/ba-p/4470685</guid>
      <dc:creator>shreyabaheti</dc:creator>
      <dc:date>2025-11-17T18:55:08Z</dc:date>
    </item>
    <item>
      <title>Dalec: Declarative Package and Container Builds</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/dalec-declarative-package-and-container-builds/ba-p/4465290</link>
      <description>&lt;P&gt;&lt;A href="https://github.com/project-dalec/dalec" target="_blank" rel="noopener"&gt;Dalec&lt;/A&gt;, a Cloud Native Computing Foundation (CNCF) Sandbox project, is a Docker BuildKit frontend that enables users to build system packages and container images from declarative YAML specifications. As a BuildKit frontend, Dalec integrates directly into the Docker build process, requiring no additional tools beyond Docker itself.&lt;/P&gt;
&lt;P&gt;Dalec’s primary focus is building native Linux packages (RPM and DEB formats) from source, with optional container image creation from those packages. It supports RPM-based distributions such as Azure Linux, AlmaLinux, and Rocky Linux, as well as DEB-based distributions like Debian and Ubuntu. It also supports pluggable backends for additional operating systems.&lt;/P&gt;
&lt;P&gt;By replacing complex spec files and multi-stage Dockerfiles with a single declarative configuration, Dalec simplifies package building while providing built-in support for SBOMs, provenance attestations, and package signing.&lt;/P&gt;
&lt;H2&gt;Why Dalec?&lt;/H2&gt;
&lt;P&gt;Building packages traditionally requires:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Manual creation of distribution-specific spec files (.spec for RPM, debian/ directory for DEB)&lt;/LI&gt;
&lt;LI&gt;Multi-stage Dockerfiles with complex build logic&lt;/LI&gt;
&lt;LI&gt;Architecture-specific build configurations&lt;/LI&gt;
&lt;LI&gt;Separate build processes for each target distribution&lt;/LI&gt;
&lt;LI&gt;Manual dependency management and bootstrapping&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Dalec replaces this complexity with a single declarative YAML specification that defines:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Sources&lt;/STRONG&gt; - Git repositories, HTTP archives, or inline content&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Dependencies&lt;/STRONG&gt; - Build-time and runtime package dependencies&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Build steps&lt;/STRONG&gt; - Commands to compile and prepare artifacts&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Artifacts&lt;/STRONG&gt; - Binaries, configuration files, and systemd units to package&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Targets&lt;/STRONG&gt; - Distribution-specific customizations&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Tests&lt;/STRONG&gt; - Validation of the built package&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Image config&lt;/STRONG&gt; - Optional container image creation from the package&lt;/LI&gt;
&lt;/UL&gt;
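&lt;P&gt;For the targets field in particular, the exact schema is defined in the Dalec documentation; the fragment below is only a sketch of how a per-distribution override might look, and both the target names and the package names (golang vs. golang-go) are illustrative assumptions rather than verified values.&lt;/P&gt;

```yaml
# Sketch: one spec, per-target dependency overrides.
# Target keys and package names here are illustrative assumptions.
targets:
  azlinux3:
    dependencies:
      build:
        golang:
  jammy:
    dependencies:
      build:
        golang-go:
```

Everything not overridden under a target is inherited from the top-level spec, which is what keeps a multi-distribution build down to a single file.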
&lt;P&gt;The result is a portable and auditable build process.&lt;/P&gt;
&lt;H2&gt;Key Benefits&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;🐳 Zero Installation&lt;/STRONG&gt; - Works with standard Docker. No specialized tooling, package managers, or daemon processes required. If you can run docker build, you can use Dalec.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;🚀 Fast and Cacheable&lt;/STRONG&gt; - BuildKit’s intelligent caching means rebuilds are lightning fast. Change one source file, rebuild only what’s affected.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;🤝 Integration&lt;/STRONG&gt; – Dalec integrates at both the package manager level and with language toolchains such as Go and Rust. This allows Dalec to automatically manage caches for items such as Go modules, incremental compiler caches, and packages across builds.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;🔒 Secure by Default&lt;/STRONG&gt; - Automatic SBOM generation, provenance attestations, and package signing built into the workflow. Supply chain security without extra steps.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;📦 Package and Container&lt;/STRONG&gt; - Build once, output both. Get installable RPM/DEB packages for traditional deployments AND minimal container images for Kubernetes from the same specification.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;🌍 Multi-Distribution&lt;/STRONG&gt; - One spec, multiple targets. Build for Ubuntu, Debian, Azure Linux, Rocky Linux, and AlmaLinux without maintaining separate build configurations.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;✍️ Better Composition&lt;/STRONG&gt; - Declarative configurations compose cleanly. You can override specific parts for different distributions or targets without rewriting the entire build logic, and focusing on the package level makes it more natural to break components down into more composable pieces.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;🔍 Fully Auditable&lt;/STRONG&gt; - Declarative configuration means no hidden build steps. Every artifact’s provenance is traceable, meeting compliance and security requirements.&lt;/P&gt;
&lt;H2&gt;Who Should Use Dalec?&lt;/H2&gt;
&lt;P&gt;Dalec makes it easy for different audiences to achieve their goals:&lt;/P&gt;
&lt;H4&gt;For Application Developers&lt;/H4&gt;
&lt;P&gt;Convert your source code into distributable packages without learning RPM spec files or Debian packaging conventions. Focus on your application, not build infrastructure.&lt;/P&gt;
&lt;H4&gt;For Platform Operators&lt;/H4&gt;
&lt;P&gt;Maintain a consistent build process across your organization. Centralize packaging expertise in reusable specifications instead of scattered Dockerfiles and build scripts. Enforce security and compliance requirements at build time.&lt;/P&gt;
&lt;H4&gt;For Package Maintainers&lt;/H4&gt;
&lt;P&gt;Build packages for multiple distributions from a single source of truth. Reduce maintenance burden and ensure consistency across your supported platforms.&lt;/P&gt;
&lt;H2&gt;How Dalec Works&lt;/H2&gt;
&lt;P&gt;Dalec is implemented as a &lt;A href="https://docs.docker.com/build/buildkit/frontend/" target="_blank" rel="noopener"&gt;BuildKit frontend&lt;/A&gt;. When you specify # syntax=ghcr.io/project-dalec/dalec/frontend:latest at the top of your spec file, Docker BuildKit automatically pulls and executes the Dalec frontend, which translates your YAML specification into low-level build instructions (LLB graphs).&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The Build Flow:&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Source acquisition&lt;/STRONG&gt; - Clone git repositories, download archives, generate dependency manifests (like Go modules, npm packages, or Python wheels)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Package build&lt;/STRONG&gt; - Execute build steps in isolated environments with dependencies installed&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Package creation&lt;/STRONG&gt; - Create RPM or DEB packages with artifacts and metadata&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Package testing&lt;/STRONG&gt; - Install and validate the package in a clean environment&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Container creation&lt;/STRONG&gt; (optional) - Install packages into a minimal base image&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;From a single YAML file, you can generate multi-architecture packages that work across distributions, then optionally compose them into minimal container images—all without writing a single line of Dockerfile or distribution-specific spec.&lt;/P&gt;
&lt;H2&gt;Real-World Use Cases&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Open Source Projects&lt;/STRONG&gt; - Distribute your software across multiple Linux distributions without maintaining separate packaging workflows. A single Dalec spec replaces RPM spec files, debian/ directories, and custom build scripts.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Enterprise Deployments&lt;/STRONG&gt; - Build compliant, auditable packages for both traditional VM-based deployments and modern Kubernetes clusters. The same build produces installable packages for your data center and container images for your cloud infrastructure.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;CI/CD Pipelines&lt;/STRONG&gt; - Integrate seamlessly into GitHub Actions, GitLab CI, or any CI system that supports Docker. No special runners or build agents required, just standard Docker.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Security-Critical Applications&lt;/STRONG&gt; - Leverage BuildKit's built-in SBOM and provenance generation to meet supply chain security requirements. Every build produces attestations that prove what went into your packages and containers.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Multi-Architecture Builds&lt;/STRONG&gt; - Build for x86_64, ARM64, and other architectures with a single command. BuildKit handles the complexity of cross-compilation automatically.&lt;/P&gt;
&lt;H2&gt;A Simple Example&lt;/H2&gt;
&lt;P&gt;Here’s what a Dalec spec looks like. This example builds go-md2man, a tool that converts Markdown to man pages:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;# syntax=ghcr.io/project-dalec/dalec/frontend:latest
name: go-md2man
version: 2.0.3
revision: "1"
packager: Dalec Example
vendor: Dalec Example
license: MIT
description: A tool to convert markdown into man pages (roff).
website: https://github.com/cpuguy83/go-md2man

sources:
  src:
    generate:
      - gomod: {}  # Pre-downloads Go modules (network disabled during build)
    git:
      url: https://github.com/cpuguy83/go-md2man.git
      commit: v2.0.3

dependencies:
  build:
    golang:

build:
  env:
    CGO_ENABLED: "0"
  steps:
    - command: |
        cd src
        go build -o go-md2man .

artifacts:
  binaries:
    src/go-md2man:

image:
  entrypoint: go-md2man
  cmd: --help

tests:
  - name: Check bin
    files:
      /usr/bin/go-md2man:
        permissions: 0755&lt;/LI-CODE&gt;
&lt;P&gt;From this single YAML file, you can build RPM packages, DEB packages, and container images for multiple distributions and architectures using standard docker build commands.&lt;/P&gt;
&lt;P&gt;Dalec can also create minimal container images with just runtime dependencies, no source code building required. This is perfect for creating lightweight containers with only the packages you need.&lt;/P&gt;
&lt;P&gt;Here’s a minimal example that creates a container with curl and bash:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;# syntax=ghcr.io/project-dalec/dalec/frontend:latest name: my-minimal-image version: 0.1.0 revision: "1" license: MIT description: A minimal image with only curl and shell access dependencies: runtime: curl: bash: image: entrypoint: /bin/bash&lt;/LI-CODE&gt;
&lt;P&gt;Build it with:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;docker build -f my-minimal-image.yml --target=azlinux3 -t my-minimal-image:0.1.0 .&lt;/LI-CODE&gt;
&lt;P&gt;This produces a minimal image built from scratch containing only curl, bash, and their dependencies.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Pro tip:&lt;/STRONG&gt; You can skip creating a spec file entirely by passing dependencies on the command line:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;docker build -t my-minimal-image:0.1.0 --build-arg BUILDKIT_SYNTAX=ghcr.io/project-dalec/dalec/frontend:latest --target=azlinux3/container/depsonly - &amp;lt;&amp;lt;&amp;lt;"$(jq -c '.dependencies.runtime = {"curl": {}, "bash": {}} | .image.entrypoint = "/bin/bash"' &amp;lt;&amp;lt;&amp;lt;"{}")"&lt;/LI-CODE&gt;
&lt;P&gt;Learn more in the &lt;A href="https://project-dalec.github.io/dalec/container-only-builds" target="_blank" rel="noopener"&gt;Container-only builds documentation&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;Getting Started&lt;/H2&gt;
&lt;P&gt;Ready to build your first package and container image? The &lt;A href="https://project-dalec.github.io/dalec/quickstart" target="_blank" rel="noopener"&gt;Dalec Quickstart&lt;/A&gt; walks you through building go-md2man, a real-world example that demonstrates:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Creating a declarative YAML specification&lt;/LI&gt;
&lt;LI&gt;Building RPM and DEB packages&lt;/LI&gt;
&lt;LI&gt;Creating minimal container images&lt;/LI&gt;
&lt;LI&gt;Multi-architecture builds&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The quickstart shows how a single spec can produce packages for multiple distributions (Azure Linux, Ubuntu, Debian) and architectures (x86_64, ARM64) using standard docker build commands.&lt;/P&gt;
&lt;H2&gt;CI/CD Integration&lt;/H2&gt;
&lt;P&gt;Dalec integrates seamlessly into any CI/CD system that supports Docker, including GitHub Actions, GitLab CI, Jenkins, and cloud-native build systems. Since Dalec builds use standard docker build commands, no special runners or build agents are required—just a standard Docker environment.&lt;/P&gt;
&lt;P&gt;Thanks to BuildKit, Dalec can also build directly on Kubernetes clusters using the &lt;A href="https://docs.docker.com/build/builders/drivers/kubernetes/" target="_blank" rel="noopener"&gt;Kubernetes driver&lt;/A&gt;. This enables scalable, cloud-native builds without requiring dedicated build VMs, making it ideal for large-scale CI/CD pipelines.&lt;/P&gt;
&lt;P&gt;This makes it easy to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Build and publish packages on every release&lt;/LI&gt;
&lt;LI&gt;Generate multi-architecture artifacts in parallel&lt;/LI&gt;
&lt;LI&gt;Integrate SBOM generation and signing into your pipeline&lt;/LI&gt;
&lt;LI&gt;Push containers to any OCI-compliant registry&lt;/LI&gt;
&lt;LI&gt;Scale builds elastically on Kubernetes infrastructure&lt;/LI&gt;
&lt;/UL&gt;
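&lt;P&gt;For example, a cluster-backed builder can be created with the documented buildx Kubernetes driver (builder, spec, and image names below are illustrative):&lt;/P&gt;

```shell
# Create a buildx builder whose BuildKit pods run in the current kube context.
docker buildx create --name k8s-builder --driver kubernetes \
  --driver-opt replicas=2 --bootstrap

# Run a Dalec build on the cluster and push the result.
docker buildx build --builder k8s-builder -f my-package.yml \
  --target azlinux3/container -t registry.example.com/my-package:1.0 --push .
```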
&lt;H2&gt;Supply Chain Security&lt;/H2&gt;
&lt;H3&gt;SBOM and Provenance Attestations&lt;/H3&gt;
&lt;P&gt;BuildKit provides integrated support for Software Bill of Materials (SBOM) generation and provenance attestations. SBOMs automatically catalog every package and dependency in your build, providing complete transparency into what’s inside your artifacts. Provenance attestations are cryptographically signed records that prove how your image was built, including the source repository, build parameters, and execution environment. This enables verification throughout your deployment pipeline. Learn more about configuring these features in the &lt;A href="https://docs.docker.com/build/metadata/attestations/" target="_blank" rel="noopener"&gt;BuildKit attestations documentation&lt;/A&gt;.&lt;/P&gt;
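&lt;P&gt;As a sketch (the image name is illustrative), both attestation types can be requested at build time with documented buildx flags and inspected afterwards:&lt;/P&gt;

```shell
# Request an SBOM and detailed provenance when building and pushing.
docker buildx build --sbom=true --provenance=mode=max \
  -t registry.example.com/my-package:1.0 --push .

# Inspect the attestations attached to the pushed image.
docker buildx imagetools inspect registry.example.com/my-package:1.0 \
  --format '{{ json .Provenance }}'
```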
&lt;H3&gt;Package Signing&lt;/H3&gt;
&lt;P&gt;Dalec supports GPG &lt;A href="https://project-dalec.github.io/dalec/signing" target="_blank" rel="noopener"&gt;signing of packages&lt;/A&gt; for additional trust and verification:&lt;/P&gt;
&lt;PRE&gt;package_config:&lt;BR /&gt;&amp;nbsp; signer:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; frontend:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; image: &amp;lt;signer-image&amp;gt;&lt;/PRE&gt;
&lt;P&gt;Signed packages ensure recipients can verify authenticity and detect tampering.&lt;/P&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Dalec provides a modern, declarative approach to building system packages and containers. By leveraging Docker BuildKit as a frontend, it eliminates the need for complex build toolchains while providing secure builds across multiple Linux distributions.&lt;/P&gt;
&lt;P&gt;For open-source projects that need to distribute both packages and containers, Dalec offers a unified build process that simplifies CI/CD while strengthening supply chain security.&lt;/P&gt;
&lt;P&gt;Whether you’re migrating from traditional RPM spec files, consolidating Dockerfiles, or building a new project from scratch, Dalec provides the simplicity of modern container tools with the flexibility of native package formats.&lt;/P&gt;
&lt;H2&gt;Get Started&lt;/H2&gt;
&lt;P&gt;Ready to try Dalec? You’re just one docker build command away:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Explore the documentation&lt;/STRONG&gt;: &lt;A href="https://project-dalec.github.io/dalec/" target="_blank" rel="noopener"&gt;project-dalec.github.io/dalec&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Try the &lt;/STRONG&gt;&lt;A href="https://project-dalec.github.io/dalec/quickstart" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;quickstart tutorial&lt;/STRONG&gt;&lt;/A&gt;: Build your first package and container&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Browse examples&lt;/STRONG&gt;: See real-world specs in the &lt;A href="https://github.com/Azure/dalec/tree/main/docs/examples" target="_blank" rel="noopener"&gt;examples directory&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Join the community&lt;/STRONG&gt;: Connect with us in the &lt;A class="lia-external-url" href="https://cloud-native.slack.com/archives/C09MHVDGMAB" target="_blank" rel="noopener"&gt;#dalec&lt;/A&gt; channel on &lt;A class="lia-external-url" href="https://communityinviter.com/apps/cloud-native/cncf" target="_blank" rel="noopener"&gt;CNCF Slack&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Contribute&lt;/STRONG&gt;: Dalec is an open-source project under the CNCF. Contributions are welcome! Check out the &lt;A href="https://github.com/project-dalec/dalec/blob/main/CONTRIBUTING.md" target="_blank" rel="noopener"&gt;contributing guide&lt;/A&gt; to get involved&lt;/LI&gt;
&lt;/OL&gt;</description>
      <pubDate>Wed, 29 Oct 2025 21:34:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/dalec-declarative-package-and-container-builds/ba-p/4465290</guid>
      <dc:creator>SertacOzercan</dc:creator>
      <dc:date>2025-10-29T21:34:49Z</dc:date>
    </item>
    <item>
      <title>Red Hat Enterprise Linux Software Reservations Now Available</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/red-hat-enterprise-linux-software-reservations-now-available/ba-p/4463214</link>
      <description>&lt;H2&gt;&lt;SPAN data-contrast="auto"&gt;What's New&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;After careful collaboration with Red Hat, we've updated our RHEL pay-as-you-go billing meters and reservation pricing to align with Red Hat's latest pricing model. These updates address previous billing meter issues and ensure an accurate, transparent experience for our customers.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Starting today, customers can purchase RHEL software reservations on Azure with updated pricing, allowing organizations to &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;reduce their Linux workload costs&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; with the flexibility and reliability of software reservations.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN data-contrast="auto"&gt;Key Benefits of RHEL Software Reservations&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Cost savings&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;: Save up to 24% compared to pay-as-you-go pricing by committing to a one-year term &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Predictable costs&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;: Lock in pricing for your RHEL workloads and optimize your cloud budget&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Pricing clarity&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;: Updated meters aligned with Red Hat's current pricing model&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Seamless Azure integration&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;: Manage your RHEL software reservations alongside other Azure resources.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;RHEL software reservations allow you to pre-purchase RHEL software capacity at a discounted rate, delivering significant savings over standard pay-as-you-go pricing.&lt;/P&gt;
&lt;H2&gt;Get Started Today&lt;/H2&gt;
&lt;P&gt;RHEL software reservations are available now on the Azure portal. To learn more about pricing, terms, and how to purchase them, visit the following pages:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/" target="_blank" rel="noopener"&gt;Pricing - Linux Virtual Machines | Microsoft Azure&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-machines/linux/prepay-suse-software-charges" target="_blank" rel="noopener"&gt;Prepay for software plans - Azure Reservations - Azure Virtual Machines | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-rhel-reservation-charges" target="_blank" rel="noopener"&gt;Red Hat reservation plan discounts - Azure - Microsoft Cost Management | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/save-compute-costs-reservations" target="_blank" rel="noopener"&gt;What are Azure Reservations? - Microsoft Cost Management | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We're committed to providing transparent, reliable billing for all Azure services and appreciate your continued partnership as we deliver the best cloud platform for your open-source workloads.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;For questions or support, please contact Azure Support or your Microsoft account team.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 27 Oct 2025 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/red-hat-enterprise-linux-software-reservations-now-available/ba-p/4463214</guid>
      <dc:creator>abbottkarl</dc:creator>
      <dc:date>2025-10-27T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Drasi is Fluent in GQL: Integrating the New Graph Query Standard</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/drasi-is-fluent-in-gql-integrating-the-new-graph-query-standard/ba-p/4458888</link>
      <description>&lt;P&gt;&lt;A href="https://drasi.io/" target="_blank" rel="noopener"&gt;Drasi&lt;/A&gt; , the open-source Rust data change processing platform, simplifies the creation of change-driven systems through continuous queries, reactions, and clearly defined change semantics. Continuous queries enable developers to specify precisely what data changes matter, track these changes in real-time, and react immediately as changes occur. Unlike traditional database queries, which provide static snapshots of data, continuous queries constantly maintain an up-to-date view of query results, automatically notifying reactions of precise additions, updates, and deletions to the result set as they happen.&lt;/P&gt;
&lt;P&gt;To date, Drasi has supported only &lt;A href="https://drasi.io/reference/query-language/cypher/" target="_blank" rel="noopener"&gt;openCypher&lt;/A&gt;, a powerful declarative graph query language, for writing continuous queries. Recently, Drasi has added support for &lt;A href="https://drasi.io/reference/query-language/gql/" target="_blank" rel="noopener"&gt;Graph Query Language (GQL)&lt;/A&gt;, the new international ISO standard for querying property graphs. In this article, we describe what GQL means for writing continuous queries and how we implemented GQL support.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;A Standardized Future for Graph Queries&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;GQL is the first officially standardized database language since SQL in 1987. Published by &lt;A href="https://www.iso.org/standard/76120.html" target="_blank" rel="noopener"&gt;ISO/IEC in April 2024&lt;/A&gt;, it defines a global specification for querying property graphs. Unlike the relational model that structures data into tables, the property graph model structures data inside of the database as a graph. With GQL support, Drasi enables users to benefit from a query language that we expect to be widely adopted across the database industry, ensuring compatibility with future standards in graph querying.&lt;/P&gt;
&lt;P&gt;Drasi continues to support openCypher, allowing users to select the query language that best fits their requirements and existing knowledge. With the introduction of GQL, Drasi users can now write continuous queries using the new international standard.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Example GQL Continuous Query: Counting Unique Messages&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Event-driven architectures traditionally involve overhead for parsing event payloads, filtering irrelevant data, and managing contextual state to identify precise data transitions. Drasi eliminates much of this complexity through continuous queries, which maintain accurate real-time views of data and generate change notifications.&lt;/P&gt;
&lt;P&gt;Imagine a simple database with a message table containing the text of each message. Suppose you want to know, in real time, how many times the same message has been sent. Traditionally, addressing these types of scenarios involves polling databases at set intervals, using middleware to detect state changes, and developing custom logic to handle reactions. It could also mean setting up change data capture (CDC) to feed a message broker and process events through a stream processing system. These methods can quickly become complex and difficult to maintain, especially as scenarios multiply or grow more sophisticated.&lt;/P&gt;
&lt;P&gt;Drasi simplifies this process by employing a change-driven architecture. Rather than relying on polling or other methods, Drasi uses continuous queries that actively monitor data for specific conditions. The moment a specified condition is met or changes, Drasi proactively sends notifications, ensuring real-time responsiveness.&lt;/P&gt;
&lt;P&gt;The following example shows the continuous query in GQL that counts the frequency of each unique message:&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="lia-text-color-9"&gt;MATCH&lt;/SPAN&gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;(m:Message)&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;LET&lt;/SPAN&gt; Message = m.Message&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;RETURN&lt;/SPAN&gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;Message,&lt;BR /&gt;&amp;nbsp;&amp;nbsp;count(Message) &lt;SPAN class="lia-text-color-9"&gt;AS&lt;/SPAN&gt; Frequency&lt;/PRE&gt;
&lt;P&gt;You can explore this example in the Drasi &lt;A href="https://drasi.io/getting-started" target="_blank" rel="noopener"&gt;Getting Started tutorial&lt;/A&gt;.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Key Features &lt;/STRONG&gt;&lt;STRONG&gt;of the GQL Language&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;openCypher had a significant influence on GQL, and the two languages have much in common; however, there are also some important differences.&lt;/P&gt;
&lt;P&gt;A new statement introduced in GQL is NEXT, which enables linear composition of multiple statements. It forms a pipeline where each subsequent statement receives the working table resulting from the previous statement.&lt;/P&gt;
&lt;P&gt;One application for NEXT is the ability to filter results after an aggregation. For example, to find colors associated with more than five vehicles, the following query can be used:&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="lia-text-color-9"&gt;MATCH &lt;/SPAN&gt;(v:Vehicle)&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;RETURN&lt;/SPAN&gt; v.color AS color, count(v) AS vehicle_count&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;NEXT FILTER&lt;/SPAN&gt; vehicle_count &amp;gt; 5&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;RETURN&lt;/SPAN&gt; color, vehicle_count&lt;BR /&gt;&lt;BR /&gt;&lt;/PRE&gt;
&lt;P&gt;Equivalent openCypher:&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="lia-text-color-9"&gt;MATCH&lt;/SPAN&gt; (v:Vehicle)&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;WITH &lt;/SPAN&gt;v.color &lt;SPAN class="lia-text-color-9"&gt;AS &lt;/SPAN&gt;color, count(v) &lt;SPAN class="lia-text-color-9"&gt;AS &lt;/SPAN&gt;vehicle_count&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;WHERE &lt;/SPAN&gt;vehicle_count &amp;gt; 5&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;RETURN &lt;/SPAN&gt;color, vehicle_count&lt;BR /&gt;&lt;BR /&gt;&lt;/PRE&gt;
&lt;P&gt;GQL introduces additional clauses and statements: LET, YIELD, and FILTER.&lt;/P&gt;
&lt;P&gt;The LET statement allows users to define new variables or computed fields for every row in the current working table. Each LET expression can reference existing columns in scope, and the resulting variables are added as new columns.&lt;/P&gt;
&lt;P&gt;Example:&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="lia-text-color-9"&gt;MATCH &lt;/SPAN&gt;(v:Vehicle)&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;LET &lt;/SPAN&gt;makeAndModel = v.make + ' ' + v.model&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;RETURN &lt;/SPAN&gt;makeAndModel, v.year&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Equivalent openCypher:&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="lia-text-color-9"&gt;MATCH &lt;/SPAN&gt;(v:Vehicle)&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;WITH&lt;/SPAN&gt; v, v.make + ' ' + v.model AS makeAndModel&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;RETURN &lt;/SPAN&gt;makeAndModel, v.year&lt;/PRE&gt;
&lt;P&gt;The YIELD clause projects and optionally renames columns from the working table, limiting the set of columns available in scope. Only specified columns remain in scope after YIELD.&lt;/P&gt;
&lt;P&gt;Example:&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="lia-text-color-9"&gt;MATCH &lt;/SPAN&gt;(v:Vehicle)-[e:LOCATED_IN]-&amp;gt;(z:Zone)&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;YIELD &lt;/SPAN&gt;v.color &lt;SPAN class="lia-text-color-9"&gt;AS &lt;/SPAN&gt;vehicleColor, z.type &lt;SPAN class="lia-text-color-9"&gt;AS &lt;/SPAN&gt;location&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;RETURN &lt;/SPAN&gt;vehicleColor, location&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;FILTER is a standalone statement that removes rows from the current working table based on a specified condition. While GQL still supports a WHERE clause for filtering during the MATCH phase, the FILTER statement provides additional flexibility by allowing results to be filtered after previous steps. It does not create a new table; instead, it updates the working table. Unlike openCypher’s WHERE clause, which is tied to a MATCH or WITH, GQL's FILTER can be applied independently at various points in the query pipeline.&lt;/P&gt;
&lt;P&gt;Example:&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="lia-text-color-9"&gt;MATCH &lt;/SPAN&gt;(n:Person)&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;FILTER &lt;/SPAN&gt;n.age &amp;gt; 30&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;RETURN &lt;/SPAN&gt;n.name, n.age&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;GQL also provides control in how aggregations are grouped. The GROUP BY clause can be used to explicitly define the grouping keys, ensuring results are aggregated exactly as intended.&lt;/P&gt;
&lt;PRE&gt;&lt;SPAN class="lia-text-color-9"&gt;MATCH&lt;/SPAN&gt; (v:Vehicle)-[:LOCATED_IN]-&amp;gt;(z:Zone)&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;RETURN &lt;/SPAN&gt;z.type &lt;SPAN class="lia-text-color-9"&gt;AS &lt;/SPAN&gt;zone_type, v.color &lt;SPAN class="lia-text-color-9"&gt;AS &lt;/SPAN&gt;vehicle_color, count(v) &lt;SPAN class="lia-text-color-9"&gt;AS &lt;/SPAN&gt;vehicle_count&lt;BR /&gt;&lt;SPAN class="lia-text-color-9"&gt;GROUP BY&lt;/SPAN&gt; zone_type, vehicle_color&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If the GROUP BY clause is omitted, GQL defaults to implicit grouping: all non-aggregated columns in the RETURN clause are automatically used as the grouping keys.&lt;/P&gt;
&lt;P&gt;While many of the core concepts, like pattern matching, projections, and filtering, will feel familiar to openCypher users, GQL’s statements are distinct in their usage. Supporting these differences in Drasi required design changes, described in the following section, that led to multiple query languages within the platform.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Refactoring Drasi for Multi-Language Query Support&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Instead of migrating Drasi from openCypher to GQL, we saw this as an opportunity to address multi-language support in the system. Drasi's initial architecture was designed exclusively for openCypher. In this model, the query parser generated an Abstract Syntax Tree (AST) for openCypher. The execution engine was designed to process this AST format, executing the query it represented to produce the resulting dataset. Built‑in functions (such as toUpper() for string case conversion) followed openCypher naming and were implemented within the same module as the engine. This created an architectural challenge for supporting additional query languages, such as GQL.&lt;/P&gt;
&lt;P&gt;To enable multi-language support, the system was refactored to separate parsing, execution, and function management. A key insight was that the existing AST structure, originally created for openCypher, was flexible enough to be used for GQL. Although GQL and openCypher are different languages, their core operations (matching patterns, filtering data, and projecting results) could be represented by this AST.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Diagram: component dependencies in the refactored multi-language architecture]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The diagram shows the dependencies within this new architecture, highlighting the separation and interaction between the components. The language-specific function modules for openCypher and GQL provide the functions to the execution engine. The language-specific parsers for openCypher and GQL produce an AST, and the execution engine operates on this AST. The engine only needs to understand this AST format, making it language-agnostic.&lt;/P&gt;
&lt;P&gt;The AST structure is based on a sequence of QueryPart objects. Each QueryPart represents a distinct stage of the query, containing clauses for matching, filtering, and returning data. The execution engine processes these QueryParts sequentially.&lt;/P&gt;
&lt;PRE&gt;pub struct QueryPart {&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; pub match_clauses: Vec&amp;lt;MatchClause&amp;gt;,&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; pub where_clauses: Vec&amp;lt;Expression&amp;gt;,&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; pub return_clause: ProjectionClause,&lt;BR /&gt;}&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The process begins when a query is submitted in either GQL or openCypher. The query is first directed to its corresponding language-specific parser, which handles the lexical analysis and transforms the raw query string into the standardized AST. When data changes occur in the graph, the execution engine uses the MATCH clauses from the first QueryPart to find affected graph patterns and captures the matched data. This matched data then flows through each QueryPart in sequence. The WHERE portion of the AST filters out data that does not meet the specified conditions. The RETURN portion transforms the data by selecting specific fields, computing new values, or performing aggregations. Each QueryPart's output becomes the next one's input, creating a pipeline that incrementally produces query results as the underlying graph changes.&lt;/P&gt;
&lt;P&gt;To support functions from multiple languages in this AST, we introduced a function registry that decouples a function's name from its implementation. Function names can differ (e.g., toUpper() in openCypher versus Upper() in GQL). For any given query, language-specific modules populate this registry, mapping each function name to its corresponding behavior. Functions with shared logic can be implemented once in the engine and registered under multiple names in specific function crates, preventing code duplication. Meanwhile, language-exclusive functions can be registered and implemented separately within their respective modules. When processing an AST, the engine uses the registry attached to that query to resolve and execute the correct function. The separate function modules allow developers to introduce their own function registry, supporting custom implementations or names.&lt;/P&gt;
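&lt;P&gt;The registry pattern can be sketched in Rust roughly as follows (types, names, and signatures here are illustrative, not Drasi's actual API):&lt;/P&gt;

```rust
use std::collections::HashMap;

// Illustrative scalar-function signature: one string in, one string out.
type ScalarFn = fn(&str) -> String;

// A per-query registry mapping a language's function names to implementations.
#[derive(Default)]
struct FunctionRegistry {
    functions: HashMap<String, ScalarFn>,
}

impl FunctionRegistry {
    fn register(&mut self, name: &str, f: ScalarFn) {
        self.functions.insert(name.to_string(), f);
    }

    // The engine resolves a function by the name that appeared in the query.
    fn call(&self, name: &str, arg: &str) -> Option<String> {
        self.functions.get(name).map(|f| f(arg))
    }
}

// One shared implementation, registered under each language's own name.
fn to_upper_impl(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let mut cypher_registry = FunctionRegistry::default();
    cypher_registry.register("toUpper", to_upper_impl);

    let mut gql_registry = FunctionRegistry::default();
    gql_registry.register("Upper", to_upper_impl);

    // The same behavior resolves under either language's name.
    assert_eq!(cypher_registry.call("toUpper", "drasi"), Some("DRASI".to_string()));
    assert_eq!(gql_registry.call("Upper", "drasi"), Some("DRASI".to_string()));
}
```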
&lt;H4&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;By adding support for GQL, &lt;A href="https://drasi.io/" target="_blank" rel="noopener"&gt;Drasi&lt;/A&gt; now offers developers a choice between openCypher and the new GQL standard. This capability ensures that teams can use the syntax that best fits their skills and project requirements. In addition, the architectural changes set the foundation for additional query languages.&lt;/P&gt;
&lt;P&gt;You can check out the code on our &lt;A href="https://github.com/drasi-project" target="_blank" rel="noopener"&gt;GitHub organization&lt;/A&gt;, dig into the technical details on our &lt;A href="https://drasi.io/" target="_blank" rel="noopener"&gt;documentation site&lt;/A&gt;, and join our developer community on &lt;A href="http://aka.ms/drasidiscord" target="_blank" rel="noopener"&gt;Discord&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Thu, 09 Oct 2025 18:21:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/drasi-is-fluent-in-gql-integrating-the-new-graph-query-standard/ba-p/4458888</guid>
      <dc:creator>CollinBrian</dc:creator>
      <dc:date>2025-10-09T18:21:36Z</dc:date>
    </item>
    <item>
      <title>The Open-Source Paradox: How Microsoft is giving back</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/the-open-source-paradox-how-microsoft-is-giving-back/ba-p/4458630</link>
      <description>&lt;P&gt;The open-source community faces a paradox that threatens its very foundation. While open-source software now powers every major technology platform—from smartphones in our pockets to the cloud infrastructure running the world's largest applications—the developers and maintainers who create and sustain these critical projects are struggling under an unsustainable burden.&lt;/P&gt;
&lt;P&gt;The statistics paint a sobering picture of this crisis:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;🧑‍💻&lt;/STRONG&gt;&lt;STRONG&gt; 60% of open-source maintainers are unpaid&lt;/STRONG&gt; for their contributions, volunteering countless hours to maintain code that powers the global economy.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;🧠&lt;/STRONG&gt;&lt;STRONG&gt; 58% have quit or considered quitting&lt;/STRONG&gt; due to burnout, threatening the continuity of projects millions depend on daily.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;🏢&lt;/STRONG&gt;&lt;STRONG&gt; 90% of enterprises rely heavily on open source&lt;/STRONG&gt;, yet only about 60% encourage their employees to contribute back to the projects they depend on.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;💸&lt;/STRONG&gt;&lt;STRONG&gt; Only 14% of corporate open-source investment&lt;/STRONG&gt; goes directly to funding maintainers and projects, with the majority flowing to consulting and support services.&lt;/P&gt;
&lt;P&gt;This creates an unsustainable equation: critical infrastructure maintained by exhausted volunteers while billion-dollar companies extract value without proportional investment back into the ecosystem. At Microsoft, we have witnessed this challenge firsthand—and we believe we have a responsibility to be part of the solution.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Why All Things Open Matters to Microsoft&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;This is precisely why I am excited to share that Microsoft will be joining the incredible community at &lt;A href="https://2025.allthingsopen.org/" target="_blank" rel="noopener"&gt;All Things Open 2025 in Raleigh, North Carolina, October 12–14&lt;/A&gt;. As a first-time presenting sponsor of this premier open-source conference, we are not just displaying our technology; we are demonstrating our commitment to addressing the sustainability challenges facing the open-source ecosystem.&lt;/P&gt;
&lt;P&gt;All Things Open represents everything we have come to understand about building healthy open-source communities: collaboration over competition, sustainability over short-term gains, and the shared belief that technology should empower everyone. With attendees from sixty-nine countries and a focus on real developer challenges, this conference provides the perfect platform to discuss not just what we are building, but how we are building it responsibly.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;From Consumer to Contributor: Microsoft's Evolution&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Our journey with open source began with a fundamental shift in perspective. We moved from being primarily consumers of open-source software to becoming one of the world's largest contributors to open-source projects. This was not just a strategic decision; it was a recognition that sustainable technology requires sustainable communities.&lt;/P&gt;
&lt;P&gt;Today, Microsoft is among the top contributors to open-source projects on GitHub, employs full-time open-source maintainers, and partners with the community to strengthen open source for everyone. This commitment includes investments through GitHub, Azure credits, and direct funding of projects like &lt;A href="https://alpha-omega.dev/" target="_blank" rel="noopener"&gt;Alpha Omega&lt;/A&gt;—a Linux Foundation initiative working to secure the critical open-source software we all rely on. But numbers only tell part of the story.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;What We're Bringing to Raleigh&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://2025.allthingsopen.org/sessions/from-kernel-to-copilot-microsofts-open-source-journey-to-ai-at-scale" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Our Open-Source Journey: From Kernel to Copilot&lt;/STRONG&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Join me for my keynote on Monday morning (9:45–10:00 a.m. ET), "From Kernel to Copilot: Microsoft's Open-Source Journey to AI at Scale," where I'll explore how Microsoft's deep commitment to open source has evolved from contributing to the Linux kernel to building AI services that run at global scale. We will examine how open-source technologies power mission-critical workloads across Azure and GitHub, and how Microsoft became the largest cloud provider contributor to Cloud Native Computing Foundation projects. I will also spotlight new initiatives like Radius, Dalec, and Copacetic that reflect our vision for a more collaborative, cloud-native future.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Practical Solutions in Action&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;In my track session, &lt;A href="https://2025.allthingsopen.org/sessions/using-ai-agents-to-empower-application-modernization-for-kubernetes" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;"Using AI Agents to Empower Application Modernization for Kubernetes,"&lt;/STRONG&gt;&lt;/A&gt; I will demonstrate how open-source tooling combined with AI agents can reduce the friction of modernizing applications for cloud-native platforms. This is not about making development faster; it is about making complex modernization accessible to more developers, reducing the expertise barrier that often prevents teams from adopting cloud-native technologies.&lt;/P&gt;
&lt;H3&gt;&lt;EM&gt;Real Solutions at Our Booth&lt;/EM&gt;&lt;/H3&gt;
&lt;P&gt;Visit us at booths 20 and 21, where our team will be demonstrating how we are tackling sustainability challenges:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Linux on Azure:&lt;/STRONG&gt; How enterprise adoption drives investment back into Linux ecosystem development.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;AI-Powered Developer Tools:&lt;/STRONG&gt; Reducing the manual overhead of open-source maintenance.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;GitHub Programs:&lt;/STRONG&gt; Supporting open-source projects through various community initiatives.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Security Initiatives:&lt;/STRONG&gt; Automated vulnerability detection and resources to reduce security burden on maintainers.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;EM&gt;Deep-Dive Theater Sessions&lt;/EM&gt;&lt;/H3&gt;
&lt;P&gt;Throughout the conference, we will be hosting focused discussions on:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;The Economics of Open Source:&lt;/STRONG&gt; Creating sustainable funding models for critical projects.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Maintainer Mental Health:&lt;/STRONG&gt; Building support systems and preventing burnout in open-source communities.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enterprise Open-Source Strategies:&lt;/STRONG&gt; How companies can contribute meaningfully to the projects they depend on.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Also, do not miss the following demos and theater sessions in our booth:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;From "Open" to "Operable": ARO Virtualization + Arc-Enabled Hybrid, with OSS-First Workflows&lt;/STRONG&gt;&lt;BR /&gt;This session shows how OpenShift on Azure (ARO) can run open-source-native workflows—GitOps, KubeVirt-based virtualization—while staying portable across clouds and data centers via Azure Arc. We will deploy and onboard an OpenShift cluster to Arc, bootstrap a GitOps app from a public repo, and spin up a Fedora VM with OpenShift Virtualization in under 10 minutes.&lt;BR /&gt;&lt;EM&gt;By Joel Sisko, Solution Engineer, Microsoft&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;AKS Automatic: Streamlining AI-Enabled Application Deployment&lt;/STRONG&gt;&lt;BR /&gt;Learn how AKS Automatic simplifies deploying AI-enabled applications to Kubernetes using GitHub or Azure DevOps. This session covers automated cluster setup with best practices, built-in monitoring, and alerts, and streamlined Day Zero to Day Two operations—including scaling, GitHub Actions, and observability. Focus on innovation while AKS Automatic handles the infrastructure.&lt;BR /&gt;&lt;EM&gt;By Joel Schluchter, GBB, Microsoft&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Enabling AI Agents with Real-Time Linux and Kubernetes Observability via Model Context Protocol&lt;/STRONG&gt;&lt;BR /&gt;Discover how Microsoft leverages open-source tools and the Model Context Protocol (MCP) to give AI agents real-time observability in Linux and Kubernetes environments. Learn how Inspektor Gadget and the AKS MCP Server enable seamless diagnostics, monitoring, and security from familiar tools like VS Code and GitHub Copilot—accelerating innovation and democratizing observability for cloud-native apps.&lt;BR /&gt;&lt;EM&gt;By Ron Abellera, Microsoft GBB&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The Open-Source AI Editor&lt;/STRONG&gt;&lt;BR /&gt;VS Code recently open-sourced its AI capabilities! You might be asking: Why? How can I contribute? What about the forks?! We will answer these questions and more as we walk through VS Code's journey to becoming the open-source AI editor.&lt;BR /&gt;&lt;EM&gt;By Olivia Guzzardo McVicker, Senior Cloud Advocate, Microsoft&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Demonstrating the new GHCP App Modernization tool, which allows customers to easily upgrade existing Java &amp;amp; .NET applications to newer versions&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Keeping enterprise applications modern is often a months-long effort—auditing codebases, updating dependencies, applying patches, and preparing services for the cloud. With GitHub Copilot app modernization, that process gets radically simplified. In this session, we will show how you can take a legacy Java application and upgrade it in minutes. Copilot now provides end-to-end support for Java and .NET projects, delivering automated assessment reports, code transformations, build patching, dependency updates, and even containerization for cloud deployment. Whether you are migrating to the cloud or simply keeping pace with evolving frameworks, Copilot streamlines the process and reduces the risk, letting your team focus on delivering value instead of wrestling with upgrades. See how you can modernize faster, safer, and smarter.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;By Ayan Gupta, Advocacy Program Manager, Microsoft&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;And much more...&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Beyond Technology: Building Community&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;What excites me most about All Things Open is not just the opportunity to highlight solutions, it is the chance to learn from maintainers, contributors, and users about what works. No single company or initiative can resolve open source's sustainability crisis. It requires the collective wisdom and effort of the entire community.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;The Path Forward&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;The open-source sustainability challenge is not just a technical problem—it is a community problem that requires community solutions. As we gather in Raleigh, we have an opportunity to move beyond talking about these challenges to collaboratively building solutions.&lt;/P&gt;
&lt;P&gt;Whether you are maintaining a critical library used by millions, contributing to emerging AI frameworks, or building the next generation of developer tools, your perspective matters. The solutions we build together at events like All Things Open will determine whether open source continues to thrive for the next generation of developers.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Join the Conversation&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;If you are attending All Things Open 2025, I invite you to be part of this crucial conversation. Share your experiences, challenge our assumptions, and help us build a more sustainable future for open source.&lt;/P&gt;
&lt;P&gt;Find me at our booth, attend my keynote and track sessions, or connect directly. Because the future of open source is not something that happens to us; it is something we build together.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Lachie Evenson leads Microsoft's open-source strategy and community programs. Connect with him on &lt;/EM&gt;&lt;A href="https://lachie.bsky.social/" target="_blank" rel="noopener"&gt;&lt;EM&gt;Bluesky&lt;/EM&gt;&lt;/A&gt;&lt;EM&gt;, &lt;/EM&gt;&lt;A href="https://x.com/LachlanEvenson" target="_blank" rel="noopener"&gt;&lt;EM&gt;X&lt;/EM&gt;&lt;/A&gt;&lt;EM&gt;, &lt;/EM&gt;&lt;A href="https://linkedin.com/in/LachlanEvenson" target="_blank" rel="noopener"&gt;&lt;EM&gt;LinkedIn&lt;/EM&gt;&lt;/A&gt;&lt;EM&gt;, &lt;/EM&gt;&lt;A href="https://github.com/lachie83" target="_blank" rel="noopener"&gt;&lt;EM&gt;GitHub&lt;/EM&gt;&lt;/A&gt;&lt;EM&gt;, or at All Things Open 2025.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Learn more about Microsoft's open-source initiatives at &lt;/EM&gt;&lt;A href="https://opensource.microsoft.com/" target="_blank" rel="noopener"&gt;&lt;EM&gt;opensource.microsoft.com&lt;/EM&gt;&lt;/A&gt;&lt;EM&gt;. Register for All Things Open 2025 at &lt;/EM&gt;&lt;A href="https://2025.allthingsopen.org/" target="_blank" rel="noopener"&gt;&lt;EM&gt;https://2025.allthingsopen.org/&lt;/EM&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Sources:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://tidelift.com/blog/2023-open-source-maintainer-survey-results" target="_blank" rel="noopener"&gt;Tidelift 2023 Open-Source Maintainer Survey&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.linuxfoundation.org/research/global-spotlight-2023" target="_blank" rel="noopener"&gt;Linux Foundation Global Spotlight 2023&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.linuxfoundation.org/research/open-source-software-funding-2024" target="_blank" rel="noopener"&gt;Linux Foundation Open-Source Software Funding 2024&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://alpha-omega.dev/" target="_blank" rel="noopener"&gt;Alpha Omega&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Resources:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="•%09https:/azure.microsoft.com/en-us/blog/microsofts-open-source-journey-from-20000-lines-of-linux-code-to-ai-at-global-scale/?msockid=1bcfac72982c642f2cbeba4b995a65e1" target="_blank" rel="noopener"&gt;Microsoft’s open-source journey: From 20,000 lines of Linux code to AI at global scale&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/appsonazureblog/red-hat-openshift-virtualization-on-azure-red-hat-openshift-in-public-preview/4409301" target="_blank" rel="noopener"&gt;Red Hat OpenShift Virtualization on Azure Red Hat OpenShift in Public Preview&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/blog/azure-kubernetes-service-automatic-fast-and-frictionless-kubernetes-for-all/" target="_blank" rel="noopener"&gt;Introducing AKS automatic&amp;nbsp;&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 02 Oct 2025 17:45:28 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/the-open-source-paradox-how-microsoft-is-giving-back/ba-p/4458630</guid>
      <dc:creator>LachlanEvenson</dc:creator>
      <dc:date>2025-10-02T17:45:28Z</dc:date>
    </item>
    <item>
      <title>Introducing Image Customizer for Azure Linux</title>
      <link>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/introducing-image-customizer-for-azure-linux/ba-p/4454859</link>
<description>&lt;P&gt;We are excited to release Image Customizer, an open-source tool built and maintained by the Azure Linux team. Image Customizer lets you customize well-tested existing Azure Linux images for any scenario in just minutes. Already trusted in production by first-party teams like LinkedIn, Azure Front Door, and Azure Nexus, this tool is designed to make image customization simple, reliable, and fast. With full dm-verity support for enhanced security, it also supports customization of &lt;A class="lia-external-url" href="https://aka.ms/AzureLinuxOSGuard" target="_blank" rel="noopener"&gt;Azure Linux with OS Guard&lt;/A&gt; images.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Unlike VM-based image customization, Image Customizer modifies the image directly using a chroot-based approach, with no VM boot required, making customization faster, more reliable, and easier to integrate into existing workflows.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG class="lia-align-center"&gt;✨&lt;/STRONG&gt;&lt;STRONG class="lia-align-center"&gt; Get Image Customizer &lt;/STRONG&gt;&lt;A href="https://mcr.microsoft.com/en-us/artifact/mar/azurelinux/imagecustomizer/about" target="_blank" rel="noopener"&gt;&lt;STRONG class="lia-align-center"&gt;here&lt;/STRONG&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;✨ Explore our documentation&amp;nbsp;&lt;/STRONG&gt;&lt;A href="https://aka.ms/ImageCustomizer" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;here&lt;/STRONG&gt;&lt;/A&gt;.&lt;STRONG&gt; &lt;/STRONG&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H4&gt;&lt;STRONG&gt;Why Choose Image Customizer?&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Direct, Reliable Customization&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-indent-padding-left-30px"&gt;
&lt;LI class="lia-indent-padding-left-30px" style="list-style-type: none;"&gt;
&lt;UL class="lia-indent-padding-left-30px"&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;Build on top of bootable, tested, and supported base images.&lt;/LI&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;Lower overhead and fewer side effects by avoiding VM boot.&lt;/LI&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;No need to rely on the Azure Linux Toolkit. Previously, building from scratch meant your image may fail to boot sometimes. Image Customizer reduces that risk.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Clean and Lightweight&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-indent-padding-left-30px"&gt;
&lt;LI class="lia-indent-padding-left-30px" style="list-style-type: none;"&gt;
&lt;UL class="lia-indent-padding-left-30px"&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;Minimal dependencies for a streamlined setup (for example, no SSH required).&lt;/LI&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;You only need to invoke one command to run Image Customizer. It is available as a container with all its dependencies bundled for easy integration into CI/CD pipelines.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Versatile and Powerful&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-indent-padding-left-30px"&gt;
&lt;LI class="lia-indent-padding-left-30px" style="list-style-type: none;"&gt;
&lt;UL class="lia-indent-padding-left-30px"&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;Supported input formats: vhd, vhdx, qcow2, PXE bootable artifacts, raw and iso created by Image Customizer.&lt;/LI&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;Supported output formats: vhd, vhd-fixed, vhdx, qcow2, raw, iso, and cosi.&lt;/LI&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;Perform a wide range of operations: add/remove/update packages, add files and directories, create/update users, enable/disable services, customize partitions, image history, dm-verity and more. Full list of supported operations can be found &lt;A href="https://microsoft.github.io/azure-linux-image-tools/imagecustomizer/api/configuration.html" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Cross-Platform Compatibility&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-indent-padding-left-30px"&gt;
&lt;LI class="lia-indent-padding-left-30px" style="list-style-type: none;"&gt;
&lt;UL class="lia-indent-padding-left-30px"&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;Tested and verified to work on Ubuntu 22.04, Azure Linux 3.0 and WSL2 (Windows Subsystem for Linux). While officially tested on these platforms, Image Customizer will likely work on other Linux distributions as well.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Consistent and Predictable Builds&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-indent-padding-left-30px"&gt;
&lt;LI class="lia-indent-padding-left-30px" style="list-style-type: none;"&gt;
&lt;UL class="lia-indent-padding-left-30px"&gt;
&lt;LI class="lia-indent-padding-left-30px"&gt;Use &lt;A href="https://microsoft.github.io/azure-linux-image-tools/imagecustomizer/api/cli.html#--package-snapshot-time" target="_blank" rel="noopener"&gt;--package-snapshot-time&lt;/A&gt; or &lt;A href="https://microsoft.github.io/azure-linux-image-tools/imagecustomizer/api/configuration/packages.html#snapshottime-string" target="_blank" rel="noopener"&gt;snapshotTime&lt;/A&gt; to filter packages by publication timestamp, ensuring only packages available at that point in time are considered. This prevents unexpected changes from newer package versions when reusing configuration files across time.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;&lt;STRONG&gt;Getting &lt;/STRONG&gt;&lt;STRONG&gt;Started with Image Customizer&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;To use Image Customizer, you’ll need a configuration file that describes the changes you want to make, using the &lt;A class="lia-external-url" href="https://microsoft.github.io/azure-linux-image-tools/imagecustomizer/api/configuration.html" target="_blank" rel="noopener"&gt;Declarative API&lt;/A&gt; provided by Image Customizer. Next, select a&amp;nbsp;&lt;A class="lia-external-url" href="https://github.com/microsoft/azurelinux/blob/3.0/toolkit/docs/quick_start/quickstart.md" target="_blank" rel="noopener"&gt;base Azure Linux image&lt;/A&gt; as your foundation. With these two pieces in hand, you’re ready to run Image Customizer. The easiest way is to use the &lt;A class="lia-external-url" href="https://mcr.microsoft.com/en-us/artifact/mar/azurelinux/imagecustomizer/about" target="_blank" rel="noopener"&gt;Image Customizer container&lt;/A&gt;, which comes pre-packaged with all necessary dependencies and is recommended for most users. Alternatively, you can use the &lt;A class="lia-external-url" href="https://microsoft.github.io/azure-linux-image-tools/imagecustomizer/quick-start/quick-start-binary.html" target="_blank" rel="noopener"&gt;standalone executable binary&lt;/A&gt; if that better fits your workflow. In just a few minutes, Image Customizer will generate a modified Azure Linux image tailored to your needs. This process is designed to be repeatable and user-friendly, making it easy to add packages, files, and users, change partitions, and much more.&lt;/P&gt;
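&lt;P&gt;As a rough illustration, a configuration file might install packages, enable a service, and pin package selection to a point in time. The field names below are assumptions based on the Declarative API documentation (the snapshotTime string is named in the configuration reference); verify them against the Configuration docs before use:&lt;/P&gt;

```yaml
# Hypothetical Image Customizer configuration sketch; field names are
# illustrative and should be checked against the Declarative API reference.
os:
  packages:
    snapshotTime: "2025-09-01T00:00:00Z"  # only consider packages published before this time
    install:
      - vim
      - nginx
  services:
    enable:
      - nginx
```

&lt;P&gt;Passing a file like this to the container or standalone binary, along with a base image, would yield a customized image with those changes applied.&lt;/P&gt;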
&lt;P&gt;To help you get started, we have a &lt;A href="https://microsoft.github.io/azure-linux-image-tools/imagecustomizer/quick-start/quick-start.html" target="_blank" rel="noopener"&gt;Quick Start&lt;/A&gt; guide that walks you through your first customization step by step. For those who want to explore further, comprehensive API documentation is available, covering both &lt;A href="https://microsoft.github.io/azure-linux-image-tools/imagecustomizer/api/cli.html" target="_blank" rel="noopener"&gt;Command-line&lt;/A&gt;&amp;nbsp;usage and &lt;A href="https://microsoft.github.io/azure-linux-image-tools/imagecustomizer/api/configuration.html" target="_blank" rel="noopener"&gt;Configuration&lt;/A&gt; options.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Upcoming Community Call&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Join our upcoming community call to learn more about using Image Customizer and see a live demo. We’ll cover best practices, advanced scenarios, and answer any questions you may have.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Date &amp;amp; Time&lt;/STRONG&gt;: September 25, 2025, at 8:00 AM Pacific Time&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Teams Link: &lt;/STRONG&gt;&lt;A href="https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDcyZjRkYWMtOWQxYS00OTk3LWFhNmMtMTMwY2VhMTA4OTZi%40thread.v2/0?context=%7b%22Tid%22%3a%2272f988bf-86f1-41af-91ab-2d7cd011db47%22%2c%22Oid%22%3a%2271a6ce92-58a5-4ea0-96f4-bd4a0401370a%22%7d" target="_blank" rel="noopener"&gt;Azure Linux - External Community Call | Meeting-Join | Microsoft Teams&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Community Call Schedule:&lt;/STRONG&gt; &lt;A href="https://learn.microsoft.com/en-us/azure/azure-linux/support-help#stay-connected-with-azure-linux" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/azure-linux/support-help#stay-connected-with-azure-linux&lt;/A&gt;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Help and Feedback&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;If you’d like to report bugs, request features, or contribute to the tool, you can do so directly through our&amp;nbsp;&lt;A href="https://github.com/microsoft/azure-linux-image-tools" target="_blank" rel="noopener"&gt;azure-linux-image-tools GitHub repo&lt;/A&gt;. We welcome feedback and contributions from the community!&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Acknowledgements&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;A huge thank you (in no particular order) to our Image Customizer team: Adit Jha, Brian Telfer, Chris Gunn, Deepu Thomas, Elaine Zhao, George Mileka, Himaja Kesari, Jim Perrin, Jiri Appl, Lanze Liu, Roaa Sakr, Kavya Nagalakunta, and Vince Perri.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Sep 2025 15:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/introducing-image-customizer-for-azure-linux/ba-p/4454859</guid>
      <dc:creator>Kavya_Nagalakunta</dc:creator>
      <dc:date>2025-09-18T15:00:00Z</dc:date>
    </item>
  </channel>
</rss>

