containers
Beyond the Desktop: The Future of Development with Microsoft Dev Box and GitHub Codespaces
The modern developer platform has already moved past the desktop. We're no longer defined by what's installed on our laptops; instead, we look at what tooling we can use to move from idea to production. An organisation's developer platform strategy is no longer a nice-to-have: it sets the ceiling for what's possible, and an organisation can't iterate its way to developer nirvana if the foundation itself is brittle. A great developer platform shrinks TTFC (time to first commit), accelerates release velocity, and, maybe most importantly, helps alleviate the everyday frictions that lead to developer burnout.

Very few platforms deliver everything an organisation needs from a developer platform in one product. Modern development spans multiple dimensions: local tooling, cloud infrastructure, compliance, security, cross-platform builds, collaboration, and rapid onboarding. Organisations are then left to either compromise on one or more of these areas or force developers into rigid environments that slow productivity and innovation. This is where Microsoft Dev Box and GitHub Codespaces come into play. On their own, each addresses critical parts of the modern developer platform.

Microsoft Dev Box provides a full, managed cloud workstation. Dev Box gives developers a consistent, high-performance environment while letting central IT apply strict governance and control. Internally at Microsoft, we estimate that usage of Dev Box by our development teams delivers savings of 156 hours per year per developer purely on local environment setup and upkeep. We have also seen significant gains in other key SPACE metrics, reducing context-switching friction and improving build/test cycles. Although the benefits of Dev Box are clear in the results demonstrated by our customers, it is not without its challenges. The biggest challenge often faced by Dev Box customers is its lack of native Linux support. At the time of writing, and for the foreseeable future, Dev Box does not support native Linux developer workstations. While WSL2 provides partial parity, I know from my own engineering projects that it still does not deliver the full experience.

This is where GitHub Codespaces comes into the story. GitHub Codespaces delivers instant, Linux-native environments spun up directly from your repository. It's lightweight, reproducible, and ephemeral, which makes it ideal for rapid iteration, PR testing, and cross-platform development where you need Linux parity or containerized workflows. Unlike Dev Box, Codespaces can run fully in Linux, giving developers access to native tools, scripts, and runtimes without workarounds. It also removes much of the friction around onboarding: a new developer can open a repository and be coding in minutes, with the exact environment defined by the project's devcontainer.json. That said, Codespaces isn't a complete replacement for a full workstation. While it's perfect for isolated project work or ephemeral testing, it doesn't provide the persistent, policy-controlled environment that enterprise teams often require for heavier workloads or complex toolchains.

Used together, they fill the gaps that neither can cover alone: Dev Box gives the enterprise-grade foundation, while Codespaces provides the agile, cross-platform sandbox. For organisations, this pairing sets a higher ceiling for developer productivity, delivering a truly hybrid, agile and well-governed developer platform.
Better Together: Dev Box and GitHub Codespaces in action

Together, Microsoft Dev Box and GitHub Codespaces deliver a hybrid developer platform that combines consistency, speed, and flexibility. Teams can spin up full, policy-compliant Dev Box workstations preloaded with enterprise tooling, IDEs, and local testing infrastructure, while Codespaces provides ephemeral, Linux-native environments tailored to each project. One of my favourite use cases is having local testing setups, like a Docker Swarm cluster, ready to go in either Dev Box or Codespaces. New developers can jump in and start running services or testing microservices immediately, without spending hours on environment setup. Anecdotally, my time to first commit and time to delivering "impact" has been significantly faster on projects where one or both technologies provide local development services out of the box.

Switching between Dev Boxes and Codespaces is seamless: every environment keeps its own libraries, extensions, and settings intact, so developers can jump between projects without reconfiguring or breaking dependencies. The result is a turnkey, ready-to-code experience that maximizes productivity, reduces friction, and lets teams focus entirely on building, testing, and shipping software.

To showcase this value, I thought I would walk through an example scenario that simulates a typical modern developer workflow. Let's look at a day in the life of a developer on this hybrid platform building an IoT project using Python and React. The developer will:

- Spin up a ready-to-go workstation (Dev Box) for Windows development and heavy builds.
- Launch a Linux-native Codespace for cross-platform services, ephemeral testing, and PR work.
- Run "local" testing like a Docker Swarm cluster, database, and message queue, ready to go out of the box (a sketch of what such a stack file could look like follows this section).
- Switch seamlessly between environments without losing project-specific configurations, libraries, or extensions.

9:00 AM – Morning Kickoff on Dev Box

I start my day on my Microsoft Dev Box, which gives me a fully configured Windows environment with VS Code, design tools, and Azure integrations. I select my team's project, and the environment is pre-configured for me through the Dev Box catalogue. Fortunately for me, it's already provisioned; I could always self-service another one using the "New Dev Box" button if I wanted to. I'll connect through the browser, but I could use the desktop app too.

My tasks are:

- Prototype a new dashboard widget for monitoring IoT device temperature.
- Use GUI-based tools to tweak the UI and preview changes live.
- Review my Visio architecture.
- Join my morning stand-up.
- Write documentation notes and plan API interactions for the backend.

In a flash, I have access to my modern work tooling like Teams, I have this project's files already preloaded, and all my peripherals are working without additional setup. The only downside was that I did seem to be the only person on my stand-up this morning?

Why Dev Box first:

- GUI-heavy tasks are fast and responsive, and Dev Box's environment allows me to use a full desktop.
- Great for early-stage design, planning, and visual work.
- Enterprise apps are ready for me to use out of the box (P.S. It also supports my multi-monitor setup).

I use my Dev Box to make a very complicated change to my IoT dashboard: changing the title from "IoT Dashboard" to "Owain's IoT Dashboard". I preview this change in a browser live. (Time for a coffee after this hard work.) The rest of the dashboard isn't loading as my backend isn't running... yet.
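To make the "local testing out of the box" idea above concrete, here is a minimal sketch of a Docker Swarm stack file for an IoT project like this one. The service names, images, and credentials are purely illustrative and not part of the original project:

```yaml
# iot-stack.yml - hypothetical local testing stack, deployed with:
#   docker stack deploy -c iot-stack.yml iot
version: "3.8"
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: localdev   # local testing only, never a real secret
    ports:
      - "5432:5432"
  queue:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"    # AMQP
      - "15672:15672"  # management UI
  api:
    image: python:3.12-slim
    command: ["sleep", "infinity"]  # placeholder; the FastAPI service is run from the repo
    ports:
      - "8000:8000"
```

Baking a stack file like this into the Dev Box image or the repo's Codespace definition is what lets a new developer run the supporting services immediately instead of assembling them by hand.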
10:30 AM – Switching to Linux Codespaces

Once the UI is ready, I push the code to GitHub and spin up a Linux-native GitHub Codespace for backend development.

Tasks:

- Implement FastAPI endpoints to support the new IoT feature.
- Run the service on my Codespace and debug any errors.

Why Codespaces now:

- Linux-native tools ensure compatibility with the production server.
- Docker and containerized testing run natively, avoiding WSL translation overhead.
- The environment is fully reproducible across any device I log in from.

12:30 PM – Midday Testing & Sync

I toggle between Dev Box and Codespaces to test and validate the integration. I do this in my Dev Box Edge browser, viewing my Codespace (I use my Codespace in a browser throughout this demo to highlight the difference in environments; in reality I would leverage the VS Code "Remote Explorer" extension and its GitHub Codespaces integration to use my Codespace from within my own desktop VS Code, but that is personal preference), and I use the same browser to view my frontend preview. I update the environment variable for my frontend, which is running locally in my Dev Box, and point it at the port running my API locally on my Codespace. In this case it was a WebSocket connection and HTTPS calls to port 8000. I can make this public by changing the port visibility in my Codespace.

https://fluffy-invention-5x5wp656g4xcp6x9-8000.app.github.dev/api/devices
wss://fluffy-invention-5x5wp656g4xcp6x9-8000.app.github.dev/ws

This allows me to:

- Preview the frontend widget on Dev Box, connecting to the backend running in Codespaces.
- Make small frontend adjustments in Dev Box while monitoring backend logs in Codespaces.
- Commit changes to GitHub, keeping both environments in sync and leveraging my CI/CD for deployment to the next environment.

We can see the Dev Box running the local frontend and the Codespace running the API connected to each other, making requests and displaying the data in the frontend!

Hybrid advantage:

- Dev Box handles GUI previews comfortably and allows me to live-test frontend changes.
- Codespaces handles production-aligned backend testing and Linux-native tools.
- Dev Box allows me to view all of my files on one screen, with potentially multiple Codespaces running in the browser or VS Code Desktop.

Due to all of those platform efficiencies, I have completed my day's goals within an hour or two, and now I can spend the rest of my day learning about how to enable my developers to inner source using GitHub Copilot and MCP (shameless plug).

The bottom line

There are some additional considerations when architecting a developer platform for an enterprise, such as private networking and security, that are not covered in this post, but these are implementation details to deliver the described developer experience. Architecting such a platform is a valuable investment to deliver the developer platform foundations we discussed at the top of the article. While in this quickly built demo I was working in a mono repository, in real engineering teams it is likely (I hope) that an application is built from many different repositories. The great thing about Dev Box and Codespaces is that this wouldn't slow down the rapid development I can achieve when using both. My Dev Box would be specific to the project or development team, preloaded with all the tools I need and potentially some repos too! When I need to, I can quickly switch over to Codespaces, work in a clean, isolated environment, and push my changes.
In both cases, any changes I want to deliver are pushed into GitHub (or ADO), merged, and my CI/CD ensures that my next step, potentially a staging environment or, who knows, perhaps *whispering* straight into production, is taken care of. Once I'm finished, I delete my Codespace, and potentially my Dev Box if I am done with the project, knowing I can self-service either one of these anytime and be up and running again!

Now, is there overlap in terms of what can be developed in a Codespace versus what can be developed in a Dev Box? Of course. But as organisations prioritise developer experience to ensure release velocity while maintaining organisational standards and governance, providing developers both a Windows-native and a Linux-native service, each primarily charged on the consumption of the compute*, is a no-brainer. There are also gaps that neither fills at the moment: for example, Microsoft Dev Box only provides Windows compute, while GitHub Codespaces only supports VS Code as your chosen IDE. It's not a question of which service to choose for your developers; these two services are better together!

* Changes have been announced to Dev Box pricing. A W365 license is already required today, and dev boxes will continue to be managed through Azure. For more information, please see: Microsoft Dev Box capabilities are coming to Windows 365 - Microsoft Dev Box | Microsoft Learn

Leveraging Low Priority Pods for Rapid Scaling in AKS
If you're running workloads in Kubernetes, you'll know that scalability is key to keeping things available and responsive. But there's a problem: when your cluster runs out of resources, the node autoscaler needs to spin up new nodes, and this takes anywhere from 5 to 10 minutes. That's a long time to wait when you're dealing with a traffic spike. One way to handle this is using low priority pods to create buffer nodes that can be preempted when your actual workloads need the resources.

The Problem

Cloud-native applications are dynamic, and workload demands can spike quickly. Automatic scaling helps, but the delay in scaling up nodes when you run out of capacity can leave you vulnerable, especially in production. When a cluster runs out of available nodes, the autoscaler provisions new ones, and during that 5-10 minute wait you're facing:

- Increased Latency: Users experience lag or downtime whilst they're waiting for resources to become available.
- Resource Starvation: High-priority workloads don't get the resources they need, leading to degraded performance or failed tasks.
- Operational Overhead: SREs end up manually intervening to manage resource loads, which takes them away from more important work.

This is enough reason to look at creating spare capacity in your cluster, and that's where low priority pods come in.

The Solution

The idea is pretty straightforward: you run low priority pods in your cluster that don't actually do any real work - they're just placeholders consuming resources. These pods are sized to take up enough space that the cluster autoscaler provisions additional nodes for them. Effectively, you're creating a buffer of "standby" nodes that are ready and waiting.

When your real workloads need resources and the cluster is under pressure, Kubernetes kicks out these low priority pods to make room - this is called preemption. Essentially, Kubernetes looks at what's running, sees the low priority pods, and terminates them to free up the nodes. This happens almost immediately, and your high-priority workloads can use that capacity straight away. Meanwhile, those evicted low priority pods sit in a pending state, which triggers the autoscaler to spin up new nodes to replace the buffer you just used. The whole thing is self-maintaining.

How Preemption Actually Works

When a high-priority pod needs to be scheduled but there aren't enough resources, the Kubernetes scheduler kicks off preemption. This happens almost instantly compared to the 5-10 minute wait for new nodes. Here's what happens:

1. Identification: The scheduler works out which low priority pods need to be evicted to make room. It picks the lowest priority pods first.
2. Graceful Termination: The selected pods get a termination signal (SIGTERM) and a grace period (usually 30 seconds by default) to shut down cleanly.
3. Resource Release: Once the low priority pods terminate, their resources are immediately released and available for scheduling. The high-priority pod can then be scheduled onto the node, typically within seconds.
4. Buffer Pod Rescheduling: After preemption, the evicted low priority pods try to reschedule. If there's capacity on existing nodes, they'll land there. If not, they'll sit in a pending state, which triggers the cluster autoscaler to provision new nodes.

This gives you a dual benefit: your critical workloads get immediate access to the nodes that were running low priority pods, and the system automatically replenishes the buffer in the background. Whilst your high-priority workloads are running on the newly freed capacity, the autoscaler is already provisioning replacement nodes for the evicted buffer pods. Your buffer capacity is continuously maintained without any manual work, so you're always ready for the next spike.
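To make the preemption side concrete, here is a minimal sketch of how a critical workload might opt in: a high-priority class and a deployment that references it. The class name, value, and application details are illustrative assumptions rather than anything prescribed by AKS; what matters is that the value is higher than the buffer pods' priority so the scheduler will evict them first.

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-workload        # illustrative name
value: 1000000                   # much higher than the buffer pods' value of 0
globalDefault: false
description: "Priority class for workloads that may preempt buffer pods"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api               # hypothetical production service
spec:
  replicas: 5
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      priorityClassName: critical-workload   # this is what lets the pod preempt lower-priority pods
      containers:
        - name: orders-api
          image: myregistry.azurecr.io/orders-api:1.0.0   # placeholder image
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
```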
The key advantage here is speed. Whilst provisioning a new node takes 5-10 minutes, preempting a low priority pod and scheduling a high-priority pod in its place typically completes in under a minute.

Why This Approach Works Well

Now that you understand how the solution works, let's look at why it's effective:

- Immediate Resource Availability: You maintain a pool of ready nodes that can rapidly scale up when needed. There's always capacity available to handle sudden load spikes without waiting for new nodes.
- Seamless Scaling: High-priority workloads never face resource starvation, even during traffic surges. They get immediate access to capacity, whilst the buffer automatically replenishes itself in the background.
- Self-Maintaining: Once set up, the system handles everything automatically. You don't need to manually manage the buffer or intervene when workloads spike.

The Trade-Off

Whilst low priority pods offer significant advantages for keeping your cluster responsive, you need to understand the cost implications. By maintaining buffer nodes with low priority pods, you're running machines that aren't hosting active, productive workloads. You're paying for additional infrastructure just for availability and responsiveness. These buffer nodes consume compute resources you're paying for, even though they're only running placeholder workloads.

The decision for your organisation comes down to whether the improved responsiveness and elimination of that 5-10 minute scaling delay justifies the extra cost. For production environments with strict SLA requirements or where downtime is expensive, this trade-off is usually worth it. However, you'll want to carefully size your buffer capacity to balance cost with availability needs.

Setting It Up

Step 1: Define Your Low Priority Pod Configurations

Start by defining low priority pods using the PriorityClass resource. This is where you create configurations that designate certain workloads as low priority.
Here's what that configuration looks like:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 0
globalDefault: false
description: "Priority class for buffer pods"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buffer-pods
  namespace: default
spec:
  replicas: 3                    # Adjust based on how much buffer capacity you need
  selector:
    matchLabels:
      app: buffer
  template:
    metadata:
      labels:
        app: buffer
    spec:
      priorityClassName: low-priority
      containers:
        - name: buffer-container
          image: registry.k8s.io/pause:3.9   # Lightweight image that does nothing
          resources:
            requests:
              cpu: "1000m"                   # Size these based on your typical workload needs
              memory: "2Gi"                  # Large enough to trigger node creation
            limits:
              cpu: "1000m"
              memory: "2Gi"
```

The key things to note here:

- The PriorityClass has a value of 0, which is lower than the default priority for regular pods (typically 1000+).
- We're using a Deployment rather than individual pods so we can easily scale the buffer size.
- The pause image is a minimal container that does basically nothing - perfect for a placeholder.
- The resource requests are what matter - these determine how much space each buffer pod takes up.
- You'll want to size the CPU and memory requests based on your actual workload needs.

Step 2: Deploy the Low Priority Pods

Next, deploy these low priority pods across your cluster. Use affinity configurations to spread them out and let Kubernetes manage them (one way of doing this is sketched after the best practices below).

Step 3: Monitor and Adjust

You'll want to monitor your deployment to make sure your buffer nodes are scaling up when needed and scaling down during idle periods to save costs. Tools like Prometheus and Grafana work well for monitoring resource usage and pod status so you can refine your setup over time.

Best Practices

- Right-Sizing Your Buffer Pods: The resource requests for your low priority pods need careful thought. They need to be big enough to consume sufficient capacity that additional buffer nodes actually get provisioned by the autoscaler. But they shouldn't be so large that you end up over-provisioning beyond your required buffer size. Think about your typical workload resource requirements and size your buffer pods to create exactly the number of standby nodes you need.
- Regular Assessment: Keep assessing your scaling strategies and adjust based on what you're seeing with workload patterns and demands. Monitor how often your buffer pods are getting evicted and whether the buffer size makes sense for your traffic patterns.
- Communication and Documentation: Make sure your team understands what low priority pods do in your deployment and what this means for your SLAs. Document the cost of running your buffer nodes and why you're justifying this overhead.
- Automated Alerts: Set up alerts for when pod eviction happens so you can react quickly and make sure critical workloads aren't being affected. Also alert on buffer pod status to ensure your buffer capacity stays available (an example alert rule is sketched below).
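Two quick sketches to make Step 2 and the alerting best practice more concrete. First, one possible way (an illustrative choice, not the only option) to spread the buffer pods across nodes is a topology spread constraint added to the buffer Deployment's pod template:

```yaml
# Added under the buffer Deployment's spec.template.spec.
# whenUnsatisfiable: ScheduleAnyway keeps this a soft preference, so buffer pods
# still schedule even when they cannot be spread perfectly.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: buffer
```

Second, a hedged example of an alert for degraded buffer capacity, assuming kube-state-metrics and the Prometheus Operator's PrometheusRule CRD are installed; the namespace, labels, and duration are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: buffer-pod-alerts
  namespace: monitoring
spec:
  groups:
    - name: buffer-capacity
      rules:
        - alert: BufferCapacityDegraded
          # Fires when fewer buffer replicas are available than requested,
          # e.g. after preemption while replacements are still pending.
          expr: |
            kube_deployment_status_replicas_available{deployment="buffer-pods"}
              < kube_deployment_spec_replicas{deployment="buffer-pods"}
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Buffer pods are below the desired replica count"
```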
Wrapping Up

Leveraging low priority pods to create buffer nodes is an effective way to handle resource constraints when you need rapid scaling and can't afford to wait for the node autoscaler. This approach is particularly valuable if you're dealing with workloads that experience sudden, unpredictable traffic spikes and need to scale up immediately - think scenarios like flash sales, breaking news events, or user-facing applications with strict SLA requirements.

However, this isn't a one-size-fits-all solution. If your workloads are fairly static or you can tolerate the 5-10 minute wait for new nodes to provision, you probably don't need this. The buffer comes at an additional cost since you're running nodes that aren't doing productive work, so you need to weigh whether the improved responsiveness justifies the extra spend for your specific use case. If you do decide this approach fits your needs, remember to keep monitoring and iterating on your configuration for the best resource management.

By maintaining a buffer of low priority pods, you can address resource scarcity before it becomes a problem, reduce latency, and provide a much better experience for your users. This approach will make your cluster more responsive and free up your operational capacity to focus on improving services instead of constantly firefighting resource issues.

Building Agents on Azure Container Apps with Goose AI Agent, Ollama and gpt-oss
Azure Container Apps (ACA) is redefining how developers build and deploy intelligent agents. With serverless scale, GPU-on-demand, and enterprise-grade isolation, ACA provides the ideal foundation for hosting AI agents securely and cost-effectively. Last month we highlighted how you can deploy n8n on Azure Container Apps to go from click-to-build to a running AI-based automation platform in minutes, with no complex setup or infrastructure management overhead. In this post, we're extending that same simplicity to AI agents, where we'll show why Azure Container Apps is the best platform for running open-source agentic frameworks like Goose. Whether you're experimenting with open-source models or building enterprise-grade automation, ACA gives you the flexibility and security you need.

Challenges when building and hosting AI agents

Building and running AI agents in production presents its own set of challenges. These systems often need access to proprietary data and internal APIs, making security and data governance critical, especially when agents interact dynamically with multiple tools and models. At the same time, developers need flexibility to experiment with different frameworks without introducing operational overhead or losing isolation.

Simplicity and performance are also key. Managing scale, networking, and infrastructure can slow down iteration, while separating the agent's reasoning layer from its inference backend can introduce latency and added complexity from managing multiple services. In short, AI agent development requires security, simplicity, and flexibility to ensure reliability and speed at scale.

Why ACA and serverless GPUs for hosting AI agents

Azure Container Apps provides a secure, flexible, and developer-friendly platform for hosting AI agents and inference workloads side by side within the same ACA environment. This unified setup gives you centralized control over network policies, RBAC, observability, and more, while ensuring that both your agentic logic and model inference run securely within one managed boundary. ACA also provides the following key benefits:

- Security and data governance: Your agent runs in your private, fully isolated environment, with complete control over identity, networking, and compliance. Your data never leaves the boundaries of your container.
- Serverless economics: Scale automatically to zero when idle, and pay only for what you use - no overprovisioning, no wasted resources.
- Developer simplicity: One-command deployment, integrated with Azure identity and networking. No extra keys, infrastructure management, or manual setup required.
- Inferencing flexibility with serverless GPUs: Bring any open-source, community, or custom model. Run your inferencing apps on serverless GPUs alongside your agentic applications within the same environment. For example, running gpt-oss models via Ollama inside ACA containers avoids costly hosted inference APIs and keeps sensitive data private.

These capabilities let teams focus on innovation, not infrastructure, making ACA a natural choice for building intelligent agents.
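To illustrate that last inferencing point, here is a hedged sketch of how an Ollama-based inference backend could be described as a container app specification (for example, for use with `az containerapp create --yaml`). The environment ID, workload profile name, and resource sizes are placeholders, and the field names should be checked against the current ACA YAML spec rather than taken as authoritative:

```yaml
# Hypothetical container app running Ollama as a private, scale-to-zero inference backend.
# Assumes the environment defines a GPU workload profile named "gpu-serverless".
name: ollama-gpt-oss
type: Microsoft.App/containerApps
location: eastus
properties:
  managedEnvironmentId: /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.App/managedEnvironments/<env-name>
  workloadProfileName: gpu-serverless
  configuration:
    ingress:
      external: false        # keep inference reachable only inside the environment
      targetPort: 11434      # Ollama's default listening port
  template:
    containers:
      - name: ollama
        image: ollama/ollama:latest
        resources:
          cpu: 4.0
          memory: 16Gi
    scale:
      minReplicas: 0         # scale to zero when the agent is idle
      maxReplicas: 1
```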
Deploy the Goose AI Agent to ACA

The Goose AI Agent, developed by Block, is an open-source, general-purpose agent framework designed for quick deployment and easy customization. Out of the box, it supports many features like email integration, GitHub interactions, and local CLI and system tool access. It's great for building ready-to-run AI assistants that can connect to other systems, with a modular design that makes customization simple on top of strong defaults out of the box. By deploying Goose on ACA, you gain all the benefits of serverless scale, secure isolation, and GPU-on-demand, while maintaining the ability to customize and iterate quickly.

Get started: Deploy Goose on Azure Container Apps using this open-source starter template. In just a few minutes, you'll have a private, self-contained AI agent running securely on Azure Container Apps, ready to handle real-world workloads without compromise.

Goose running on Azure Container Apps: adding some content to a README, submitting a PR, and sending a summary email to the team.

Additional Benefits of running Goose on ACA

Running the Goose AI Agent on Azure Container Apps (ACA) showcases how simple and powerful hosting AI agents can be.

- Always available: Goose can run continuously - handling long-lived or asynchronous workloads for hours or days - without tying up your local machine.
- Cost efficiency: ACA's pay-per-use, serverless GPU model eliminates high per-call inference costs, making it ideal for sustained or compute-intensive workloads.
- Seamless developer experience: The Goose-on-ACA starter template sets up everything for you - model server, web UI, and CLI endpoints - with no manual configuration required.

With ACA, you can go from concept to a fully running agent in minutes, without compromising on security, scalability, or cost efficiency.

Part of a Growing Ecosystem of Agentic frameworks on ACA

ACA is quickly becoming the go-to platform for containerized AI and agentic workloads. From n8n and Goose to other emerging open-source and commercial agent frameworks, developers can use ACA to experiment, scale, and secure their agents - all while taking advantage of serverless scale, GPU-on-demand, and complete network isolation. It's the same developer-first workflow that powers modern applications, now extended to intelligent agents. Whether you're building a single agent or an entire automation ecosystem, ACA provides the flexibility and reliability you need to innovate faster.
Active Directory: Unable to reset user passwords

I am managing a Windows Server 2025 Active Directory environment with client machines. I created a test user and enabled the option "User must change password at next logon." I then provided a temporary password to the user, expecting them to get the prompt to change it on first login. However, when the user attempts to change the password, they receive the error: "The user must change password before signing in."

My goal is that when I provide a temporary password to a user:

- They get the prompt to change the password at next logon.
- When they change it, it should not throw the "user must change password before signing in" error.

I need guidance on how to achieve this so users can reset their passwords successfully.

Announcing the Public Preview of Azure Container Apps Azure Monitor dashboards with Grafana
We're thrilled to announce the public preview of Azure Container Apps Azure Monitor Dashboards with Grafana, a major step forward in simplifying observability for your apps and environments. With this new integration, you can view Grafana dashboards directly within your app or environment in the Azure portal, with no extra setup or cost required.

What's new?

Azure Monitor Dashboards with Grafana bring the power of Grafana's visualization capabilities to your Azure resources. Dashboards with Grafana enable you to create and edit Grafana dashboards directly in the Azure portal, without additional cost and with less administrative overhead compared to self-hosting Grafana or using managed Grafana services. For Azure Container Apps, this means you can access two new pre-built dashboards:

- Container App View: View key metrics like CPU usage, memory usage, request rates, replica restarts, and more.
- Environment View: See all your apps in one view with details like latest revision name, minimum and maximum replicas, CPU and memory allocations, and more for each app.

These dashboards are designed to help you quickly identify issues, optimize performance, and ensure your applications are running smoothly.

Benefits

- Drill into key metrics: Stop switching between multiple tools or building dashboards from scratch. Start from the environment dashboard to get a high-level view of all of your apps, then drill into individual app dashboards.
- Customize your views: Tailor the dashboards to your team's needs using Grafana's flexible visualization options.
- Full compatibility with open-source Grafana: Dashboards created in Azure Monitor are portable across any Grafana instance.
- Share dashboards across your team with Azure Role-Based Access Control (RBAC): Dashboards are native Azure resources, so you can securely share them using RBAC.

Get started today

For Azure Container Apps, you can experience these dashboards directly from either your environment or an individual app:

1. Navigate to your Azure Container App environment or a specific Container App in the Azure portal.
2. Open the Monitoring section and select the "Dashboards with Grafana (Preview)" blade.
3. View your metrics or customize the dashboard to meet your needs.

For detailed guidance, see aka.ms/aca/grafana

Want more? Explore the Grafana Gallery

Looking for additional customization or inspiration? Visit the Grafana Dashboard Gallery to explore thousands of community dashboards. If you prefer to use Azure Managed Grafana, here are direct links to Azure Container Apps templates:

- Azure / Container Apps / Container App View
- Azure / Container Apps / Aggregate View

You can also view other published Azure dashboards here.

Simplify Image Signing and Verification with Notary Project and Trusted Signing (Public Preview)
Supply chain security has become one of the most pressing challenges for modern cloud-native applications. Every container image, Helm chart, SBOM, or AI model that flows through your CI/CD pipeline carries risk if its integrity or authenticity cannot be guaranteed. Attackers may attempt to tamper with artifacts, replace trusted images with malicious ones, or inject unverified base images into builds. Today, we're excited to highlight how Notary Project and Trusted Signing (Public Preview) make it easier than ever to secure your container image supply chain with strong, standards-based signing and verification.

Why image signing matters

Image signing addresses two fundamental questions in the software supply chain:

- Integrity: Is this artifact exactly the same one that was originally published?
- Authenticity: Did this artifact really come from the expected publisher?

Without clear answers, organizations risk deploying compromised images into production environments. With signing and verification in place, you can block untrusted artifacts at build time or deployment, ensuring only approved content runs in your clusters.

Notary Project: A standards-based solution

Notary Project is a CNCF open-source initiative that defines standards for signing and verifying OCI artifacts, including container images, SBOMs, Helm charts, and AI models. It provides a consistent, interoperable framework for ensuring artifact integrity and authenticity across different registries, platforms, and tools. Notary Project includes two key sub-projects that address different stages of the supply chain:

- Notation: a CLI tool designed for developers and CI/CD pipelines. It enables publishers to sign artifacts after they are built and consumers to verify signatures before artifacts are used in builds.
- Ratify: a verification engine that integrates with Azure Policy and Azure Kubernetes Service (AKS). It enforces signature verification at deployment time, ensuring only trusted artifacts are admitted to run in the cluster.

Together, Notation and Ratify extend supply chain security from the build pipeline all the way to runtime, closing critical gaps and reducing the risk of running unverified content.

Trusted Signing: Simplifying certificate management

Traditionally, signing workflows required managing certificates: issuing, rotating, and renewing them through services like Azure Key Vault. While this provides control, it also adds operational overhead. Trusted Signing changes the game. It offers:

- Zero-touch certificate lifecycle management: no manual issuance or rotation.
- Short-lived certificates: reducing the attack surface.
- Built-in timestamping support: ensuring signatures remain valid even after certificates expire.

With Trusted Signing, developers focus on delivering software, not managing certificates.

End-to-end scenarios

Here's how organizations can use Notary Project and Trusted Signing together:

- Sign in CI/CD: An image publisher signs images as part of a GitHub Actions or Azure DevOps pipeline, ensuring every artifact carries a verifiable signature.
- Verify in AKS: An image consumer configures Ratify and Azure Policy on an AKS cluster to enforce that only signed images can be deployed.
- Verify in build pipelines: Developers ensure base images and dependencies are verified before they're used in application builds, blocking untrusted upstream components.
- Extend to all OCI artifacts: Beyond container images, SBOMs, Helm charts, and even AI models can be signed and verified with the same workflow.
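As a rough illustration of the first scenario, here is a minimal sketch of a GitHub Actions job that signs a pushed image with the Notation CLI. The setup action reference, registry name, and secret names are assumptions, and the Trusted Signing plugin configuration is deliberately left as a placeholder; the tutorials linked below show the exact, supported workflow:

```yaml
name: sign-container-image
on:
  push:
    branches: [main]

jobs:
  sign:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Install the Notation CLI (assumes the notaryproject setup action is available;
      # see the linked tutorials for the currently recommended installation step).
      - name: Set up Notation
        uses: notaryproject/notation-action/setup@v1

      # Notation reuses Docker credential helpers, so a normal registry login works.
      - name: Log in to the registry
        run: echo "${{ secrets.ACR_PASSWORD }}" | docker login myregistry.azurecr.io -u "${{ secrets.ACR_USERNAME }}" --password-stdin

      # Sign by digest. This assumes a signing key or the Trusted Signing plugin has
      # already been configured for Notation in an earlier (omitted) step.
      - name: Sign the image
        run: notation sign "myregistry.azurecr.io/my-app@sha256:<digest>"   # <digest> is a placeholder
```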
Get started

To help you get started, we've published new documentation and step-by-step tutorials:

- Overview: Ensuring integrity and authenticity of container images and OCI artifacts
- Sign and verify images with Notation CLI and Trusted Signing
- Sign container images in GitHub Actions with Trusted Signing
- Verify signatures in GitHub Actions
- Verify signatures on AKS with Ratify

Try it now

Supply chain security is no longer optional. By combining Notary Project with the streamlined certificate management experience of Trusted Signing, you can strengthen the integrity and authenticity of every artifact in your pipeline without slowing down your teams. Start signing today and take the next step toward a trusted software supply chain.

Public preview: Confidential containers on AKS
We are proud to announce the preview of confidential containers on AKS, which brings confidential computing capabilities to containerized workloads on AKS. This offering provides strong pod-level isolation, memory encryption, and AMD SEV-SNP hardware-based attestation for containerized application code and data while in use, building upon the existing security, scalability and resiliency benefits offered by AKS.
Windows Server Datacenter: Azure Edition preview build 26484 now available in Azure
Hello Windows Server Insiders! We welcome you to try Windows Server vNext Datacenter: Azure Edition preview build 26484 in both Desktop experience and Core version on the Microsoft Server Operating Systems Preview offer in Azure. Azure Edition is optimized for operation in the Azure environment. For additional information, see Preview: Windows Server VNext Datacenter (Azure Edition) for Azure Automanage on Microsoft Docs. For more information about this build, see Announcing Windows Server vNext Preview Build 26484 | Microsoft Community Hub.
Announcing Windows Server vNext Preview Build 26484

Hello Windows Server Insiders! Today we are pleased to release a new build of the next Windows Server Long-Term Servicing Channel (LTSC) Preview that contains both the Desktop Experience and Server Core installation options for Datacenter and Standard editions, Annual Channel for Container Host, and Azure Edition (for VM evaluation only). Branding remains Windows Server 2025 in this preview; when reporting issues please refer to Windows Server vNext preview. If you signed up for Server Flighting, you should receive this new build automatically.

What's New

- Windows Server Flighting is here! If you signed up for Server Flighting, you should receive this new build automatically later today. For more information, see Welcome to Windows Insider flighting on Windows Server - Microsoft Community Hub.
- The Feedback Hub app is now available for Server Desktop users! The app should automatically update with the latest version, but if it does not, simply Check for updates in the app's settings tab.

Known Issues

- Flighting: The label for this flight may incorrectly reference Windows 11. However, when selected, the package installed is the Windows Server update. Please ignore the label and proceed with installing your flight. This issue will be addressed in a future release.

Available Downloads

Download Windows Server Insider Preview (microsoft.com). Downloads to certain countries may not be available. See Microsoft suspends new sales in Russia - Microsoft On the Issues.

- Windows Server Long-Term Servicing Channel Preview in ISO format in 18 languages, and in VHDX format in English only.
- Windows Server Datacenter Azure Edition Preview in ISO and VHDX format, English only.
- Microsoft Server Languages and Optional Features Preview

Keys (valid for preview builds only):

- Server Standard: MFY9F-XBN2F-TYFMP-CCV49-RMYVH
- Datacenter: 2KNJJ-33Y9H-2GXGX-KMQWH-G6H67
- Azure Edition does not accept a key.

Symbols: Available on the public symbol server - see Using the Microsoft Symbol Server.

Expiration: This Windows Server Preview will expire September 15, 2026.

How to Download

Registered Insiders may navigate directly to the Windows Server Insider Preview download page. If you have not yet registered as an Insider, see GETTING STARTED WITH SERVER on the Windows Insiders for Business portal.

We value your feedback!

The most important part of the release cycle is to hear what's working and what needs to be improved, so your feedback is extremely valued. Please use the new Feedback Hub app for Windows Server if you are running a Desktop version of Server. If you are using a Core edition, or if you are unable to use the Feedback Hub app, you can use your registered Windows 10 or Windows 11 Insider device and use the Feedback Hub application. In the app, choose the Windows Server category and then the appropriate subcategory for your feedback. In the title of the feedback, please indicate the build number you are providing feedback on, as shown below, to ensure that your issue is attributed to the right version:

[Server #####] Title of my feedback

See Give Feedback on Windows Server via Feedback Hub for specifics. The Windows Server Insiders space on the Microsoft Tech Communities supports preview builds of the next version of Windows Server. Use the forum to collaborate, share and learn from experts. For versions that have been released to general availability in market, try the Windows Server for IT Pro forum or contact Support for Business.
Diagnostic and Usage Information

Microsoft collects this information over the internet to help keep Windows secure and up to date, troubleshoot problems, and make product improvements. Microsoft server operating systems can be configured to turn diagnostic data off, send Required diagnostic data, or send Optional diagnostic data. During previews, Microsoft asks that you change the default setting to Optional to provide the best automatic feedback and help us improve the final product. Administrators can change the level of information collection through Settings. For details, see http://aka.ms/winserverdata. Also see the Microsoft Privacy Statement.

Terms of Use

This is pre-release software - it is provided for use "as-is" and is not supported in production environments. Users are responsible for installing any updates that may be made available from Windows Update. All pre-release software made available to you via the Windows Server Insider program is governed by the Insider Terms of Use.