Project Pavilion Presence at KubeCon NA 2025
KubeCon + CloudNativeCon NA took place in Atlanta, Georgia, from 10-13 November, and continued to highlight the ongoing growth of the open source, cloud-native community. Microsoft participated throughout the event and supported several open source projects in the Project Pavilion. Microsoft's involvement reflected our commitment to upstream collaboration, open governance, and enabling developers to build secure, scalable, and portable applications across the ecosystem.

The Project Pavilion serves as a dedicated, vendor-neutral space on the KubeCon show floor reserved for CNCF projects. Unlike the corporate booths, it focuses entirely on open source collaboration. It brings maintainers and contributors together with end users for hands-on demos, technical discussions, and roadmap insights. This space helps attendees discover emerging technologies and understand how different projects fit into the cloud-native ecosystem. It plays a critical role in exchanging ideas, resolving challenges, and strengthening collaboration across CNCF-approved technologies.

Why Our Presence Matters

KubeCon NA remains one of the most influential gatherings for developers and organizations shaping the future of cloud-native computing. For Microsoft, participating in the Project Pavilion helps advance our goals of:

- Open governance and community-driven innovation
- Scaling vital cloud-native technologies
- Secure and sustainable operations
- Learning from practitioners and adopters
- Enabling developers across clouds and platforms

Many of Microsoft's products and cloud services are built on or aligned with CNCF and open-source technologies. Being active within these communities ensures that we are contributing back to the ecosystem we depend on and designing by collaborating with the community, not just for it.

Microsoft-Supported Pavilion Projects

containerd
Representative: Wei Fu

The containerd team engaged with project maintainers and ecosystem partners to explore solutions for improving AI model workflows. A key focus was the challenge of handling large OCI artifacts (often 500+ GiB) used in AI training workloads. Current image-pulling flows require containerd to fetch and fully unpack blobs, which significantly delays pod startup for large models. Collaborators from Docker, NTT, and ModelPack discussed a non-unpacking workflow that would allow training workloads to consume model data directly. The team plans to prototype this behavior as an experimental feature in containerd. Additional discussions included updates related to nerdbox and next steps for the erofs snapshotter.

Copacetic
Representative: Joshua Duffney

The Copa booth attracted roughly 75 attendees, with strong representation from federal agencies and financial institutions, a sign of growing adoption in regulated industries. A lightning talk delivered at the conference significantly boosted traffic and engagement. Key feedback and insights included:

- High interest in customizable package update sources
- Demand for application-level patching beyond OS-level updates
- Need for clearer CI/CD integration patterns
- Expectations around in-cluster image patching
- Questions about runtime support, including Podman

The conversations revealed several documentation gaps and feature opportunities that will inform Copa's roadmap and future enablement efforts.

Drasi
Representative: Nandita Valsan

KubeCon NA 2025 marked Drasi's first in-person presence since its launch in October 2024 and its entry into the CNCF Sandbox in early 2025.
With multiple kiosk slots, the team interacted with ~70 visitors across shifts. Engagement highlights included:

- New community members joining the Drasi Discord and starring GitHub repositories
- Meaningful discussions with observability and incident management vendors interested in change-driven architectures
- Positive reception to Aman Singh's conference talk, which led attendees back to the booth for deeper technical conversations

Post-event follow-ups are underway with several sponsors and partners to explore collaboration opportunities.

Flatcar Container Linux
Representatives: Sudhanva Huruli and Vamsi Kavuru

The Flatcar project had some fantastic conversations at the pavilion. Attendees were eager to learn about bare metal provisioning, GPU support for AI workloads, and how Flatcar's fully automated build and test process keeps things simple and developer friendly. Questions around Talos vs. Flatcar and CoreOS sparked lively discussions, with the team emphasizing Flatcar's usability and independence from an OS-level API. Interest came from government agencies and financial institutions, and the preview of Flatcar on AKS opened the door to deeper conversations about real-world adoption. The Project Pavilion proved to be the perfect venue for authentic, technical exchanges.

Flux
Representative: Dipti Pai

The Flux booth was active throughout all three days of the Project Pavilion, where Microsoft joined other maintainers to highlight new capabilities in Flux 2.7, including improved multi-tenancy, enhanced observability, and streamlined cloud-native integrations. Visitors shared real-world GitOps experiences, both successes and challenges, which provided valuable insights for the project's ongoing development. Microsoft's involvement reinforced strong collaboration within the Flux community and a continued commitment to advancing GitOps practices.

Headlamp
Representatives: Joaquim Rocha, Will Case, and Oleksandr Dubenko

Headlamp had a booth for all three days of the conference, engaging with both longstanding users and first-time attendees. The increased visibility from becoming a Kubernetes sub-project was evident, with many attendees sharing their usage patterns across large tech organizations and smaller industrial teams. The booth enabled maintainers to:

- Gather insights into how teams use Headlamp in different environments
- Introduce Headlamp to new users discovering it via talks or hallway conversations
- Build stronger connections with the community and understand evolving needs

Inspektor Gadget
Representatives: Jose Blanquicet and Mauricio Vásquez Bernal

Hosting a half-day kiosk session, Inspektor Gadget welcomed approximately 25 visitors. Attendees included newcomers interested in learning the basics and existing users looking for updates. The team showcased new capabilities, including the tcpdump gadget and Prometheus metrics export, and invited visitors to the upcoming contribfest to encourage participation.

Istio
Representatives: Keith Mattix, Jackie Maertens, Steven Jin Xuan, Niranjan Shankar, and Mike Morris

The Istio booth continued to attract a mix of experienced adopters and newcomers seeking guidance.
Technical discussions focused on:

- Enhancements to multicluster support in ambient mode
- Migration paths from sidecars to ambient
- Improvements in Gateway API availability and usage
- Performance and operational benefits for large-scale deployments

Users, including several Azure customers, expressed appreciation for Microsoft's sustained investment in Istio as part of their service mesh infrastructure.

Notary Project
Representatives: Feynman Zhou and Toddy Mladenov

The Notary Project booth saw significant interest from practitioners concerned with software supply chain security. Attendees discussed signing, verification workflows, and integrations with Azure services and Kubernetes clusters. The conversations will influence upcoming improvements across Notary Project and Ratify, reinforcing Microsoft's commitment to secure artifacts and verifiable software distribution.

Open Policy Agent (OPA) - Gatekeeper
Representative: Jaydip Gabani

The OPA/Gatekeeper booth enabled maintainers to connect with both new and existing users to explore use cases around policy enforcement, Rego/CEL authoring, and managing large policy sets. Many conversations surfaced opportunities around simplifying best practices and reducing management complexity. The team also promoted participation in an ongoing Gatekeeper/OPA survey to guide future improvements.

ORAS
Representatives: Feynman Zhou and Toddy Mladenov

ORAS engaged developers interested in OCI artifacts beyond container images, including AI/ML models, metadata, backups, and multi-cloud artifact workflows. Attendees appreciated ORAS's ecosystem integrations and found the booth examples useful for understanding how artifacts are tagged, packaged, and distributed. Many users shared how they leverage ORAS with Azure Container Registry and other OCI-compatible registries.

Radius
Representative: Zach Casper

The Radius booth attracted the attention of platform engineers looking for ways to simplify their developers' experience while still enforcing enterprise-grade infrastructure and security best practices. Attendees saw demos on deploying a database to Kubernetes and using managed databases from AWS and Azure without modifying the application deployment logic. They also saw a preview of Radius integration with GitHub Copilot, enabling AI coding agents to autonomously deploy and test applications in the cloud.

Conclusion

KubeCon + CloudNativeCon North America 2025 reinforced the essential role of open source communities in driving innovation across cloud-native technologies. Through the Project Pavilion, Microsoft teams were able to exchange knowledge with other maintainers, gather user feedback, and support projects that form foundational components of modern cloud infrastructure. Microsoft remains committed to building alongside the community and strengthening the ecosystem that powers so much of today's cloud-native development.

For anyone interested in exploring or contributing to these open source efforts, please reach out directly to each project's community to get involved, or contact Lexi Nadolski at lexinadolski@microsoft.com for more information.

Azure Virtual Desktop for Guest User / B2B Identity
All of our external customers have their own AAD / Entra ID tenants and do not wish to manage multiple identities. Because we present our applications via AVD, they currently need a separate identity in our tenant. AVD should support guest accounts from another tenant being able to sign in.

Currently, per the documentation and per the ticket I just worked with Microsoft support: "Azure Virtual Desktop doesn't support external identities, including guest accounts or business-to-business (B2B) identities. Whether you're serving internal commercial purposes or external users with Azure Virtual Desktop, you'll need to create and manage identities for those users yourself."

Please continue development to allow guest accounts that have been invited into a tenant to sign in to AVD machines. Thanks!

From Policy to Practice: Built-In CIS Benchmarks on Azure - Flexible, Hybrid-Ready
Security is more important than ever. The industry standard for secure machine configuration is the Center for Internet Security (CIS) Benchmarks. These benchmarks provide consensus-based, prescriptive guidance that helps organizations harden diverse systems, reduce risk, and streamline compliance with major regulatory frameworks and industry standards such as NIST, HIPAA, and PCI DSS.

In our previous post, we outlined our plans to improve the Linux server compliance and hardening experience on Azure and shared a vision for integrating CIS Benchmarks. Today, that vision has turned into reality. We are announcing the next phase of this work: Center for Internet Security (CIS) Benchmarks are now available on Azure for all Azure endorsed distros, at no additional cost to Azure and Azure Arc customers.

With today's announcement, you get access to the CIS Benchmarks on Azure with full parity to what is published by the Center for Internet Security (CIS). You can adjust parameters or define exceptions, tailoring security to your needs and applying consistent controls across cloud, hybrid, and on-premises environments, without having to implement every control manually. Thanks to this flexible architecture, you can truly manage compliance as code.

How we achieve parity

To ensure accuracy and trust, we rely on and ingest CIS machine-readable Benchmark content (OVAL/XCCDF files) as the source of truth. This guarantees that the controls and rules you apply in Azure match the official CIS specifications, reducing drift and ensuring compliance confidence.

What's new under the hood

At the core of this update is azure-osconfig's new compliance engine, a lightweight, open-source module developed by the Azure Core Linux team. It evaluates Linux systems directly against industry-standard benchmarks like CIS, supporting both audit and, in the future, auto-remediation. This enables accurate, scalable compliance checks across large Linux fleets. Here you can read more about azure-osconfig.

Dynamic rule evaluation

The new compliance engine supports simple fact-checking operations, evaluation of logic operations on them (e.g., anyOf, allOf), and Lua-based scripting, which allows it to express the complex checks required by the CIS Critical Security Controls, all evaluated natively without external scripts.

Scalable architecture for large fleets

When the assignment is created, the Azure control plane instructs the machine to pull the latest policy package via the Machine Configuration agent. Azure-osconfig's compliance engine is integrated as a lightweight library in the package and is called by the Machine Configuration agent for evaluation, which happens every 15-30 minutes. This ensures near real-time compliance state without overwhelming resources and enables consistent evaluation across thousands of VMs and Azure Arc-enabled servers.

Future-ready for remediation and enforcement

While the Public Preview starts with audit-only mode, the roadmap includes per-rule remediation and enforcement using technologies like eBPF for kernel-level controls. This will allow proactive prevention of configuration drift and runtime hardening at scale. Please reach out if you are interested in auto-remediation or enforcement.

Extensibility beyond CIS Benchmarks

The architecture was designed to support other security and compliance standards as well and is not limited to CIS Benchmarks. The compliance engine is modular, and we plan to extend the platform with STIG and other relevant industry benchmarks.
This positions Azure as a place where you can manage your compliance from a single control plane without duplicating effort elsewhere.

Collaboration with the CIS

This milestone reflects a close collaboration between Microsoft and the CIS to bring industry-standard security guidance into Azure as a built-in capability. Our shared goal is to make cloud-native compliance practical and consistent, while giving customers the flexibility to meet their unique requirements. We are committed to continuously supporting new Benchmark releases, expanding coverage with new distributions, and easing adoption through built-in workflows, such as moving from your current Benchmark version to a new version while preserving your custom configurations.

Certification and trust

We can proudly announce that azure-osconfig has met all the requirements and is officially certified by the CIS for Benchmark assessment, so you can trust compliance results as authoritative. Minor benchmark updates will be applied automatically, while major versions will be released separately. We will include workflows to help migrate customizations seamlessly across versions.

Key Highlights

- Built-in CIS Benchmarks for Azure Endorsed Linux distributions
- Full parity with official CIS Benchmarks content, certified by the CIS for Benchmark Assessment
- Flexible configuration: adjust parameters, define exceptions, tune severity
- Hybrid support: enforce the same baseline across Azure, on-prem, and multi-cloud with Azure Arc
- Reporting format in CIS tooling style

Supported use cases

- Certified CIS Benchmarks for all Azure Endorsed Distros - audit only (L1/L2 server profiles)
- Hybrid / on-premises and other cloud machines with Azure Arc for the supported distros
- Compliance as Code (example via GitHub -> Azure OIDC auth and API integration)
- Compatible with the GuestConfig workbook

What's next?

Our next mission is to bring the previously announced auto-remediation capability into this experience, expand the distribution coverage, and elevate our workflows even further. We are focused on empowering you to resolve issues while honoring the unique operational complexity of your environments. Stay tuned!

Get Started

Documentation link for this capability: Enable CIS Benchmarks in Machine Configuration and select the "Official Center for Internet Security (CIS) Benchmarks for Linux Workloads", then select the distributions for your assignment and customize as needed.
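For teams treating compliance as code, the assignment can also be scripted rather than configured in the portal. The sketch below is illustrative only: the policy definition ID is a placeholder (look up the actual built-in definition that delivers the CIS Benchmark machine configuration content in your tenant), and it assumes the Azure CLI with the Azure Policy commands available.

# Illustrative sketch: assign a CIS Benchmark audit policy at subscription scope.
# The definition ID below is a placeholder, not the real built-in definition.
az policy assignment create \
  --name "cis-linux-l1-audit" \
  --display-name "Audit CIS Benchmarks (Level 1) on Linux machines" \
  --scope "/subscriptions/<subscription-id>" \
  --policy "<cis-benchmark-policy-definition-id>"

# Summarize compliance state once evaluations have run (every 15-30 minutes).
az policy state summarize --policy-assignment "cis-linux-l1-audit"

The same commands can run from a GitHub Actions workflow authenticated with Azure OIDC, which is the pattern referenced in the supported use cases above.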
If you want an additional distribution supported or have any feedback on azure-osconfig, please open an Azure support case or a GitHub issue here.

Relevant Ignite 2025 session: Hybrid workload compliance from policy to practice on Azure

Connect with us at Ignite

Meet the Linux team and stop by the Linux on Azure booth to see these innovations in action (all times PST):

- Theatre THR 712: Hybrid workload compliance from policy to practice on Azure - Tue, Nov 18, 3:15 PM – 3:45 PM
- Breakout BRK 143: Optimizing performance, deployments, and security for Linux on Azure - Thu, Nov 20, 1:00 PM – 1:45 PM
- Breakout BRK 144: Build, modernize, and secure AKS workloads with Azure Linux - Wed, Nov 19, 1:30 PM – 2:15 PM
- Breakout BRK 104: From VMs and containers to AI apps with Azure Red Hat OpenShift - Thu, Nov 20, 8:30 AM – 9:15 AM
- Theatre THR 701: From Container to Node: Building Minimal-CVE Solutions with Azure Linux - Wed, Nov 19, 3:30 PM – 4:00 PM
- Lab 505: Fast track your Linux and PostgreSQL migration with Azure Migrate - Tue, Nov 18, 4:30 PM – 5:45 PM; Wed, Nov 19, 3:45 PM – 5:00 PM; Thu, Nov 20, 9:00 AM – 10:15 AM

Innovations and Strengthening Platforms Reliability Through Open Source
The Linux Systems Group (LSG) at Microsoft is the team building OS innovations in Azure, enabling secure and high-performance platforms that power millions of workloads worldwide. From providing the OS for Boost, to optimizing Linux kernels for hyperscale environments, to contributing to open-source projects like Rust-VMM and Cloud Hypervisor, LSG ensures customers get the best of Linux on Azure. Our work spans performance tuning, security hardening, and feature enablement for new silicon and cutting-edge technologies such as Confidential Computing, ARM64, and NVIDIA Grace Blackwell, all while strengthening the global open-source ecosystem. Our philosophy is simple: we develop in the open and upstream first, integrating improvements into our products after they have been accepted by the community.

At Ignite we like to highlight a few key open-source contributions from 2025 that form the foundation for many product offerings and innovations you will see throughout the week. We helped bring seamless kernel update features (Kexec HandOver) to the Linux kernel, improved networking paths for AI platforms, strengthened container orchestration and security efforts, and shared engineering insights with global communities and conferences. This work reflects Microsoft's long-standing commitment to open source, grounded in active upstream participation and close collaboration with partners across the ecosystem. Our engineers work side-by-side with maintainers, Linux distro partners, and silicon providers to ensure contributions land where they help the most, from kernel updates to improvements that support new silicon platforms.

Linux Kernel Contributions

Enabling Seamless Kernel Updates: Persistent uptime for critical services is a top priority. This year, Microsoft engineer Mike Rapoport successfully merged Kexec HandOver (KHO) into Linux 6.16. KHO is a kernel mechanism that preserves memory state across a reboot (kexec), allowing systems to carry over important data when loading a new kernel. In practice, this means Microsoft can apply security patches or kernel updates to the Azure platform and customer VMs without rebooting, or with significantly reduced downtime. It is a technical achievement with real impact: cloud providers and enterprises can update Linux on the fly, enhancing security and reliability for services that demand continuous availability.

Optimizing Network Drivers for AI Scale: Massive AI models require massive bandwidth. Working closely with our partners deploying large AI workloads on Azure, LSG engineers delivered a breakthrough in Linux networking performance. The team rearchitected the receive path of the MANA network driver (used by our smart NICs) to eliminate wasted memory and enable recycling of buffers. The results:

- 2x higher effective network throughput on 64 KB page systems
- 35% better memory efficiency for RX buffers
- 15% higher throughput and roughly half the memory use even on standard x86_64 VMs

References
- MANA RX optimization patch: net: mana: Use page pool fragments for RX buffers (LKML)
- Linux Plumbers 2025 talk: Optimizing traffic receive (RX) path in Linux kernel
- MANA Driver for larger PAGE_SIZE systems

Improving Reliability for Cloud Networking: In addition to raw performance, reliability got a boost. One critical fix addressed a race condition in the Hyper-V hv_netvsc driver that sometimes caused packet loss when a VM's network channel initialized.
By patching this upstream, we improved network stability for all Linux guests running on Hyper-V, keeping customer VMs running smoothly during dynamic operations like scale-out or live migrations. Our engineers also upstreamed numerous improvements to Hyper-V device drivers (covering storage, memory, and general virtualization). We fixed interrupt handling bugs, eliminated outdated patches, and resolved issues affecting ARM64 architectures. Each of these fixes was contributed to the mainline kernel, ensuring that any Linux distribution running on Hyper-V or Azure benefits from the enhanced stability and performance.

References
- Upstream fix for the hv_netvsc race on early receive events: kernel.org commit referenced by the Ubuntu bug (Launchpad)
- Ubuntu Azure backport write-up: Bug 2127705 – hv_netvsc: fix loss of early receive events from host during channel open (Launchpad)
- Older background on hv_netvsc packet-loss issues: kernel.org bug 81061

Strengthening Core Linux Infrastructure: Several of our contributions targeted fundamental kernel subsystems that all Linux users rely on. For example, we led significant enhancements to the Virtual File System (VFS) layer, reworking how Linux handles process core dumps and expanding file management capabilities. These changes improve how Linux handles files and memory under the hood, benefiting scenarios from large-scale cloud storage to local development. We also continued upstream efforts to support advanced virtualization features. Our team is actively upstreaming the mshv_vtl driver (for managing secure partitions on Hyper-V) and improving Linux's compatibility with nested virtualization on Azure's Microsoft Hypervisor (MSHV). All this low-level work adds up to a more robust and feature-rich kernel for everyone.

References
- Example VFS coredump work: split file coredumping into coredump_file()
- mshv_vtl driver patchset: Drivers: hv: Introduce new driver – mshv_vtl (v10) and the v12 patch series on patchew

Bolstering Linux Security in the Cloud: Security has been a major thread across our upstream contributions. One focus area is making container workloads easier to verify and control. Microsoft engineers proposed an approach for code integrity in containers built on containerd's EROFS snapshotter, shared as an open RFC in the containerd project on GitHub. The idea is to use read-only images plus integrity metadata so that container file systems can be measured and checked against policy before they run. We also engaged deeply with industry partners on kernel vulnerability handling. Through the Cloud-LTS Linux CVE workgroup, cloud providers and vendors collaborate in the open on a shared analysis of Linux CVEs. The group maintains a public repository that records how each CVE affects various kernels and configurations, which helps reduce duplicated triage work and speeds up security responses. On the platform side, our engineers contributed fixes to the OP-TEE secure OS used in trusted execution and secure-boot scenarios, making sure that the cryptographic primitives required by Azure's Linux boot flows behave correctly across supported devices. These changes help ensure that Linux verified boot chains remain reliable on Azure hardware.
References
- containerd RFC: Code Integrity for OCI/containerd Containers using erofs-snapshotter (GitHub)
- Cloud-LTS public CVE analysis repo: cloud-lts/linux-cve-analysis
- Linux CVE workgroup session at Linux Plumbers 2025: Linux CVE workgroup
- OP-TEE project docs: OP-TEE documentation

Developer Tools & Experience

Smoother OS Management with Systemd: Ensuring Linux works seamlessly at Azure scale. The core init system, systemd, saw important improvements from our team this year. LSG contributed and merged upstream support for disk quota controls in systemd services. With new directives (like StateDirectoryQuota and CacheDirectoryQuota), administrators can easily enforce storage limits for service data, which is especially useful in scenarios like IoT devices with eMMC storage on Azure's custom SoCs. In addition, the team added an auto-reload feature to systemd-journald, allowing log configuration changes to apply at runtime without restarting the logging service. These improvements, now part of upstream systemd, help Azure and other Linux environments perform updates or maintenance with minimal disruption to running services. (An illustrative unit-file sketch of the new quota directives appears after the LISA section below.)

References
- systemd quota directives: systemd.exec(5) – StateDirectoryQuota and related options
- systemd journald reload behavior: systemd-journald.service(8)

Empowering Linux Quality at Scale: Running Linux on Azure at global scale requires extensive, repeatable testing. Microsoft continues to invest in LISA (Linux Integration Services Automation), an open-source framework that validates Linux kernels and distributions on Azure and other Hyper-V-based environments. Over the past year we expanded LISA with:

- New stress tests for rapid reboot sequences to catch elusive timing bugs
- Better failure diagnostics to make complex issues easier to root-cause
- Extended coverage for ARM64 scenarios and technologies like InfiniBand networking
- Integration of Azure VM SKU metadata and policy checks so that image validation can automatically confirm conformance to Azure requirements

These changes help us qualify new kernels, distributions, and VM SKUs before they are shipped to customers. Because LISA is open source, partners and Linux vendors can run the same tests and share results, which raises quality across the ecosystem.

References
- LISA GitHub repo: microsoft/lisa
- LISA documentation: Welcome to Linux Integration Services Automation (LISA Documentation)
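As a concrete illustration of the quota directives mentioned above, here is a minimal sketch of a drop-in that caps a service's state and cache directories. It is illustrative only: the service name and quota sizes are placeholders, and it assumes a systemd release that includes the upstream quota support described above (see systemd.exec(5) in the references).

# Illustrative only: unit name and quota sizes are placeholders; requires a
# systemd version that ships StateDirectoryQuota / CacheDirectoryQuota.
mkdir -p /etc/systemd/system/my-agent.service.d
cat > /etc/systemd/system/my-agent.service.d/quota.conf <<'EOF'
[Service]
StateDirectory=my-agent
StateDirectoryQuota=200M
CacheDirectory=my-agent
CacheDirectoryQuota=50M
EOF
systemctl daemon-reload
systemctl restart my-agent.service

The intent is to keep a misbehaving service from filling shared storage, which matters most on space-constrained devices such as the eMMC-backed IoT scenarios mentioned above.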
Community Engagement and Leadership

Sharing Knowledge Globally: Open-source contribution is not just about code - it's about people and knowledge exchange. Our team members took active roles in community events worldwide, reflecting Microsoft's growing leadership in the Linux community. We were proud to be a Platinum Sponsor of the inaugural Open Source Summit India 2025 in Hyderabad, where LSG engineers served on the program committee and hosted technical sessions. At Linux Security Summit Europe 2025, Microsoft's security experts shaped the agenda as program committee members, delivered talks (such as "The State of SELinux"), and even led panel discussions alongside colleagues from Intel, Arm, and others. And in Paris at Kernel Recipes 2025, our own SMEs shared kernel insights with fellow developers. By engaging in these events, Microsoft not only contributes code but also helps guide the conversation on the future of Linux. These relationships and public interactions build mutual trust and ensure that we remain closely aligned with community priorities.

References
- Event: Open Source Summit India 2025 – Linux Foundation
- Paul Moore's talk archive: LSS-EU 2025
- Conference: Kernel Recipes 2025 and Kernel Recipes 2025 schedule

Closing Thoughts

Microsoft's long-term commitment to open source remains strong, and the Linux Systems Group will continue contributing upstream, collaborating across the industry, and supporting the upstream communities that shape the technologies we rely on. Our work begins in upstream projects such as the Linux kernel, Kubernetes, and systemd, where improvements are shared openly before they reach Azure. The progress highlighted in this blog was made possible by the wider Linux community, whose feedback, reviews, and shared ideas help refine every contribution. As we move ahead, we welcome maintainers, developers, and enterprise teams to engage with our projects, offer input, and collaborate with us. We will continue contributing code, sharing knowledge, and supporting the open-source technologies that power modern computing, working with the community to strengthen the foundation and shape a future that benefits everyone.

References & Resources
- Microsoft's Open-Source Journey – Azure Blog
- https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/linux-and-open-source-on-azure-quarterly-update-february-2025/ba-p/4382722
- Cloud Hypervisor Project
- Rust-VMM Community
- Microsoft LISA (Linux Integration Services Automation) Repository
- Cloud-LTS Linux CVE Analysis Project

Dalec: Declarative Package and Container Builds
Build once, deploy everywhere. From a single YAML specification, Dalec produces native Linux packages (RPM, DEB) and container images - no Dockerfiles, no complex RPM spec or control files, just declarative configuration.

Dalec, a Cloud Native Computing Foundation (CNCF) Sandbox project, is a Docker BuildKit frontend that enables users to build system packages and container images from declarative YAML specifications. As a BuildKit frontend, Dalec integrates directly into the Docker build process, requiring no additional tools beyond Docker itself.
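To give a feel for what such a specification looks like, here is a heavily simplified sketch. It is illustrative only: the field names follow the general shape of Dalec specs, but the exact schema and available build targets should be checked against the Dalec documentation, and the package name, repository, and build steps below are placeholders.

name: hello-service                # placeholder package name
version: 0.1.0
revision: 1
license: MIT
description: Example service packaged and containerized with Dalec
packager: Example Maintainers      # placeholder

sources:
  src:
    git:
      url: https://github.com/example/hello-service.git   # placeholder repo
      commit: v0.1.0

build:
  steps:
    - command: |
        cd src
        make build                 # placeholder build step

artifacts:
  binaries:
    src/bin/hello-service: {}      # install the built binary into the package

image:
  entrypoint: /usr/bin/hello-service

From a spec like this, the same file can be passed to docker build with a Dalec target (for example, an RPM or container target for a given distribution) to produce either a native package or a runnable image; see the Dalec docs for the current target names and the BuildKit syntax directive to place at the top of the spec.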
Azure Linux 3.0 Achieves Level 1 CIS Benchmark Certification

We're excited to announce that Azure Linux 3.0 has successfully passed the Level 1 Center for Internet Security (CIS) benchmarks, reinforcing our commitment to delivering a secure and compliant platform for customers running Linux workloads on Azure Kubernetes Service (AKS).

What is CIS?

The Center for Internet Security is a nonprofit entity whose mission is to identify, develop, validate, promote, and sustain best practice solutions for cyber defense. It draws on the expertise of cybersecurity and IT professionals from government, business, and academia around the world. To develop standards and best practices, including CIS benchmarks, controls, and hardened images, they follow a consensus decision-making model.

CIS benchmarks are configuration baselines and best practices for securely configuring a system. CIS controls map to many established standards and regulatory frameworks, including the NIST Cybersecurity Framework (CSF) and NIST SP 800-53, the ISO 27000 series of standards, PCI DSS, HIPAA, and others.

Each benchmark undergoes two phases of consensus review. The first occurs during initial development, when experts convene to discuss, create, and test working drafts until they reach consensus on the benchmark. During the second phase, after the benchmark has been published, the consensus team reviews feedback from the internet community for incorporation into the benchmark.

CIS benchmarks provide two levels of security settings:

- Level 1 recommends essential basic security requirements that can be configured on any system and should cause little or no interruption of service or reduced functionality.
- Level 2 recommends security settings for environments requiring greater security that could result in some reduced functionality.

What does this mean for Azure Linux 3.0?

By meeting Level 1 requirements, Azure Linux 3.0 ensures that essential security controls are in place, helping organizations meet regulatory compliance and protect against common threats without sacrificing performance or agility. For security- and compliance-focused customers, this milestone means you can confidently deploy and scale your Linux-based applications on AKS, knowing that your foundation aligns with industry best practices. Azure Linux 3.0's compliance with CIS Level 1 benchmarks supports your efforts to achieve and maintain rigorous security postures, whether you're subject to regulatory frameworks or following internal policies.

How can customers try it out?

We remain dedicated to making security simple. All Azure Linux 3.0 nodes on an AKS cluster will meet the Level 1 CIS benchmarks, with no extra flags or parameters.
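For example, you can try this by adding an Azure Linux node pool to an existing AKS cluster with the Azure CLI. The resource group, cluster, and node pool names below are placeholders, and the Azure Linux version used for the node image depends on the cluster's Kubernetes version.

# Add an Azure Linux node pool to an existing AKS cluster (names are placeholders).
az aks nodepool add \
  --resource-group "my-rg" \
  --cluster-name "my-aks-cluster" \
  --name "azlinuxnp" \
  --os-sku AzureLinux \
  --node-count 2

On clusters where Azure Linux 3.0 is the node image for this OS SKU, the Level 1 hardening described above applies to the nodes automatically.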
Resources

Visit the CIS Benchmark documentation to read a detailed list of benchmarks: Center for Internet Security (CIS) Benchmarks - Microsoft Compliance | Microsoft Learn.

Bug with Mac Remote Desktop 10.9.0, cannot remote in without manually logging in first

Now when I try to Remote Desktop into a VM that is on a domain, it will not let me connect because of a security error. This used to connect just fine. The only way to get around this is to manually log in. Once a user has logged into the computer, I can then remote into it like normal. Every time the VM is restarted, though, I once again have to manually log in to get remote access.

The error I receive: "We couldn't connect to the remote PC because of a security error. If this keeps happening, contact your network administrator for assistance. Error code: 0x1807"

Add the Networking Tab in the Host Pool Creation Wizard in the Azure Portal
Just like we have a Networking tab in the Storage Account experience, where public access can be disabled and private endpoints enabled, there should be a similar option available during Host Pool creation in the Azure Portal.

In my customer environment, which is a banking organization, a policy is enforced that does not allow any resource to be created with public access; it blocks the creation outright.

az policy assignment create \
  --name "DenyPublicAccess" \
  --scope "/subscriptions/<subscription-id>" \
  --policy "/providers/Microsoft.Authorization/policyDefinitions/<policy-definition-id>"

The policy they use is named "Public network access should be disabled for PaaS services", which prevents the creation of a Host Pool unless public access is disabled. Currently, this setting cannot be configured during Host Pool creation in the Azure Portal; the Networking tab only becomes available after the Host Pool is created, at which point you can disable public access and enable private endpoints.

For BFSI customers, requesting a policy relaxation is difficult. While this can be achieved through automation, the option should also be available in the Azure Portal. Otherwise, it creates a contradiction: there is a policy to disable public access, but no way to comply with it during the initial creation.
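For reference, here is a sketch of the automation workaround mentioned above: creating the host pool through the resource API with public network access disabled from the start. It is illustrative only; the names are placeholders, the property values follow the Microsoft.DesktopVirtualization/hostPools ARM schema, and the supported values and API version should be verified against the current documentation.

# Illustrative only: names are placeholders; verify property values and add
# --api-version if required for the Microsoft.DesktopVirtualization provider.
az resource create \
  --resource-group "rg-avd" \
  --name "hp-secure" \
  --resource-type "Microsoft.DesktopVirtualization/hostPools" \
  --location "eastus" \
  --properties '{
      "hostPoolType": "Pooled",
      "loadBalancerType": "BreadthFirst",
      "preferredAppGroupType": "Desktop",
      "publicNetworkAccess": "Disabled"
  }'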
Empowering Data Security with Azure Rights Management and Azure Information Protection

In today's digital world, data is one of the most valuable assets a business can have. Whether it's customer information, financial records, or internal documents, keeping that data safe is absolutely necessary. As more companies move to cloud-based systems and work in hybrid environments, the need for smart and reliable data protection tools is growing fast. That's where Azure Rights Management (RMS) and Azure Information Protection (AIP) come in. These tools help businesses organize, label, and secure their data across different platforms, making sure it stays protected no matter where it goes.

Understanding Azure Rights Management (RMS)

Azure RMS is a cloud-based service designed to safeguard digital information through encryption, identity, and authorization policies. It ensures that data remains protected regardless of where it resides: on a local device, in the cloud, or in transit.

Core Protection Workflow

The Azure RMS protection process is straightforward yet powerful:

1. Encryption: When a user initiates protection, the content is encrypted using strong cryptographic standards.
2. Policy Attachment: An access policy is embedded within the file, defining what actions are permitted (e.g., read-only, no print, no forward).
3. Authentication: Access is granted only after successful authentication via Azure Active Directory (Azure AD).
4. Decryption and Enforcement: Once authenticated, the file is decrypted and the access policy is enforced in real time.

Encryption Standards in Use

Azure RMS employs:

- AES 128-bit and 256-bit encryption for securing documents.
- RSA 2048-bit encryption for protecting customer-specific root keys.

These standards ensure that even if data is intercepted, it remains unreadable and unusable without proper authorization.

Azure Information Protection: Beyond Encryption

While Azure RMS focuses on securing content, Azure Information Protection (AIP) adds a layer of intelligence through classification and labeling. AIP enables organizations to define and apply sensitivity labels that reflect the value and confidentiality of their data.

From Classic to Unified Labeling

Microsoft has transitioned from the classic AIP client to the Unified Labeling Client, which integrates directly with Microsoft 365 compliance solutions. This shift simplifies management and enhances compatibility with modern Office applications.

Sensitivity Labels in Action

Sensitivity labels help organizations manage data access and usage by categorizing content into levels such as:

- Public: Safe for public distribution.
- General: Internal use only.
- Confidential: Restricted to specific internal groups.
- Highly Confidential: Limited to named individuals with strict usage controls (e.g., no printing or downloading).

Labels can be applied manually by users or automatically based on content inspection, context, or metadata.
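Administrators typically create and publish these labels through the Microsoft Purview compliance portal, but the same steps can be scripted. The sketch below is illustrative only: it assumes Security & Compliance PowerShell (connected via Connect-IPPSSession), the label name, tooltip, and pilot group are placeholders, and the current cmdlet parameters should be checked before use.

# Illustrative only: label name, tooltip, and recipient group are placeholders.
Connect-IPPSSession

# Create a sensitivity label.
New-Label -Name "Highly-Confidential" `
    -DisplayName "Highly Confidential" `
    -Tooltip "Named individuals only; strict usage controls apply."

# Publish the label to a pilot group via a label policy.
New-LabelPolicy -Name "Pilot-Label-Policy" `
    -Labels "Highly-Confidential" `
    -ExchangeLocation "pilot-group@contoso.com"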
Built-In Labeling in Office Apps

Modern Office apps now support built-in labeling, eliminating the need for separate add-ins. This native integration ensures a smoother user experience and reduces the risk of compatibility issues or performance degradation.

Licensing Overview

To leverage AIP features, organizations must have the appropriate licensing:

- Office 365 E3 and above: Basic classification and labeling.
- AIP Plan 1: Included in Microsoft 365 E3 and EMS E3.
- AIP Plan 2: Included in Microsoft 365 E5 and EMS E5, offering advanced capabilities like automatic labeling and document tracking.

Real-World Use Cases

- Access Control: Limit access to sensitive documents based on user roles or departments.
- Version Management: Use labels to distinguish between draft and final versions.
- Automated Workflows: Trigger encryption or archiving when documents reach a certain sensitivity level.

Why Azure Information Protection Matters

Implementing AIP brings a host of benefits:

- Persistent Protection: Data remains secure even when shared externally or accessed offline.
- Granular Control: Define who can access data and what they can do with it.
- Visibility and Auditing: Monitor access patterns and revoke access if needed.
- Hybrid Compatibility: Protect data across cloud and on-premises environments using the Rights Management connector.
- Centralized Management: Streamline policy creation and enforcement across the organization.

Conclusion

Azure RMS and AIP together form a powerful duo for modern data protection. By combining encryption, identity management, and intelligent labeling, organizations can confidently secure their most valuable asset, their information, while enabling seamless collaboration and compliance.