Hi everyone! Tyson Paul here, kicking off 2026 with this month’s “Check This Out!” (CTO!) guide. As we dive into the new year, our mission remains the same: spotlighting content that sparks your curiosity, whether you’re leveling up your skills, tackling thorny issues, or uncovering fresh resources in infrastructure and security. Each month, we’ll deliver bite-sized highlights of standout blog posts, handy direct links to dive deeper, and intros to hidden-gem blogs you won’t want to miss. Long-time readers will spot echoes of our old “Infrastructure + Security: Noteworthy News” series here; this evolution keeps the same value packed in. On behalf of the entire Core Infrastructure and Security Tech Community blog team, we’re thrilled to have you along for the ride and grateful for your support!
From classroom to workforce: Helping higher ed faculty prepare students for what’s next
Team Blog: Microsoft Learn
Author: RWortmanMorris
Published: 01/15/2026
Summary: Microsoft is partnering with higher education institutions to prepare students and faculty for an AI-driven workforce. Through tools like AI Skills Navigator, Microsoft Learn for Educators, and the Microsoft Student Ambassadors program, it offers free, flexible training, credentials, and community support to develop practical AI and digital skills. These initiatives help faculty integrate AI into teaching, empower students with job-ready skills, and provide recognized certifications valued by employers. Microsoft also provides free access to Microsoft 365 and LinkedIn Premium, aiming to support lifelong learning, teaching innovation, and successful career pathways in the evolving educational landscape.
Azure Arc Portal Update: Simplifying Onboarding and Management at Scale
Team Blog: Azure Migration and Modernization
Author: MarcoB
Published: 01/16/2026
Summary: The updated Azure Arc portal streamlines onboarding and management of hybrid and multi-cloud resources. Key improvements include a redesigned landing page, guided onboarding via interactive questionnaires, and unified machine onboarding flows for greater simplicity. Navigation is reorganized for better clarity, and dashboards now offer adaptive summaries and actionable insights, transforming management tasks into intuitive actions. These enhancements aim to make Azure Arc more accessible and scalable, enabling users to efficiently manage external resources and focus on delivering business value instead of dealing with complexity.
Resolve-DnsName vs. nslookup in Windows
Team Blog: Networking
Author: JamesKehr
Published: 01/08/2026
Summary: The article compares nslookup and Resolve-DnsName for DNS troubleshooting in Windows. Nslookup is widely used but operates independently of the Windows DNS Client Resolver (DNS-CR), which can produce misleading results due to quirks like DNS suffix handling and its lack of support for modern DNS features. Resolve-DnsName, a PowerShell cmdlet, integrates with DNS-CR, providing accurate results, support for DNSSEC and secure DNS, and flexible parameters. For Windows-centric troubleshooting and automation, Resolve-DnsName is recommended, while nslookup remains useful for basic queries and for diagnosing DNS client issues. Understanding their differences ensures reliable DNS troubleshooting.
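If you want a quick feel for the cmdlet before reading the full post, here is a minimal sketch (the record name and server address are placeholders, not values from the article):

```powershell
# Resolve through the Windows DNS Client Resolver (DNS-CR), which honors the
# local cache, suffix search list, and name-resolution policy, unlike nslookup,
# which builds and sends its own queries.
Resolve-DnsName -Name contoso.com -Type A

# Query a specific DNS server directly, similar to "nslookup contoso.com 1.1.1.1"
Resolve-DnsName -Name contoso.com -Type A -Server 1.1.1.1

# Request DNSSEC data by setting the DNSSEC OK (DO) bit
Resolve-DnsName -Name contoso.com -Type A -DnssecOk
```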
Data Center Quantized Congestion Notification: Scaling congestion control for RoCE RDMA in Azure
Team Blog: Azure Networking
Author: VamsiVadlamuri
Published: 01/13/2026
Summary: Microsoft Azure uses Data Center Quantized Congestion Notification (DCQCN) to enable high-throughput, low-latency RDMA-based storage across its global data centers. DCQCN, combined with Priority Flow Control, dynamically manages congestion using ECN-based feedback, ensuring reliable performance even with diverse hardware and network conditions. Azure addressed interoperability challenges between NIC generations by tuning DCQCN parameters and optimizing feedback mechanisms. As a result, Azure achieves line-rate RDMA performance, significant CPU savings, reduced latency, and near-zero packet loss, making DCQCN essential for scalable and resilient cloud storage infrastructure.
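For readers who want the mechanism behind “ECN-based feedback,” the sender-side reaction in the original DCQCN proposal (Zhu et al., SIGCOMM 2015) is, roughly, the following; the post itself focuses on how Azure tunes these knobs at fleet scale rather than on the math:

$$R_T \leftarrow R_C, \qquad R_C \leftarrow R_C\Bigl(1 - \frac{\alpha}{2}\Bigr), \qquad \alpha \leftarrow (1 - g)\,\alpha + g$$

applied when a Congestion Notification Packet (CNP) arrives, where $R_C$ is the current sending rate, $R_T$ the target rate used during recovery, $\alpha$ a running estimate of the fraction of ECN-marked traffic, and $g$ a configurable gain; when no CNPs arrive, $\alpha$ decays as $\alpha \leftarrow (1-g)\alpha$ and the sender ramps back up toward $R_T$.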
What is going on with RC4 in Kerberos?
Team Blog: Ask the Directory Services Team
Author: WillAftring
Published: 01/26/2026
Summary: Microsoft is phasing out RC4 usage in Kerberos authentication due to security concerns, with major changes starting in January 2026. RC4 will be removed as a default encryption type, and new auditing tools will help identify dependencies. Enforcement begins April 2026, with rollback options until July 2026. While DES is already removed, RC4 remains supported for critical legacy needs if properly configured. Microsoft encourages users to migrate away from RC4 and offers resources and support for environments still dependent on it.
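As a starting point while you review the article’s auditing guidance, a quick inventory of explicitly configured Kerberos encryption types can be pulled from Active Directory. This is a minimal sketch using the RSAT ActiveDirectory module, not the new auditing tooling the post describes:

```powershell
# Minimal sketch: list computer accounts whose msDS-SupportedEncryptionTypes
# either is unset (which historically falls back to RC4) or explicitly allows RC4-HMAC.
# Bit values: DES-CBC-CRC=0x1, DES-CBC-MD5=0x2, RC4-HMAC=0x4, AES128=0x8, AES256=0x10.
Import-Module ActiveDirectory
$rc4 = 0x4
Get-ADComputer -Filter * -Properties msDS-SupportedEncryptionTypes |
    Where-Object {
        -not $_.'msDS-SupportedEncryptionTypes' -or
        ($_.'msDS-SupportedEncryptionTypes' -band $rc4)
    } |
    Select-Object Name, msDS-SupportedEncryptionTypes
```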
Redis Keys Statistics
Team Blog: Azure PaaS
Author: LuisFilipe
Published: 01/21/2026
Summary: The article explains how to gather Redis key statistics, focusing on Time-to-Live (TTL) and key size, to troubleshoot cache usage and performance. It provides two Bash + Lua script solutions: one that reports key statistics (counting keys by TTL and size thresholds) and another that lists the key names meeting specified TTL and size criteria. The article highlights the importance of managing TTL and key sizes for optimal Redis performance and warns that running these scripts can impact Redis workloads because they must scan all keys. Usage instructions, parameters, and performance considerations are detailed.
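The article’s Bash + Lua scripts are the intended tooling; the PowerShell sketch below only illustrates the idea with redis-cli (the hostname, credentials, and 100 KB threshold are hypothetical), and, like the originals, it walks every key, so run it against busy caches with care:

```powershell
# Count keys with no TTL and keys above a size threshold by scanning the keyspace.
# WARNING: this issues TTL / MEMORY USAGE per key, which adds load on large caches.
$redisArgs = '-h', 'mycache.redis.cache.windows.net', '-p', '6380', '--tls', '-a', $env:REDIS_KEY
$noTtl = 0; $large = 0
& redis-cli @redisArgs --scan | ForEach-Object {
    $ttl = [int](& redis-cli @redisArgs TTL $_)
    if ($ttl -eq -1) { $noTtl++ }                       # -1 means no expiration set
    $size = & redis-cli @redisArgs MEMORY USAGE $_
    if ($size -and [int]$size -gt 100KB) { $large++ }   # arbitrary 100 KB threshold
}
"Keys without TTL: $noTtl; keys larger than 100 KB: $large"
```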
Azure Arc Server Jan 2026 Forum Recap
Team Blog: Azure Arc
Author: Aurnov_Chattopadhyay
Published: 01/20/2026
Summary: The January 2026 Azure Arc Server Forum highlighted new machine management features in Azure Compute Hub, updates on Windows Server Hot Patch and its billing, a preview of TPM-based onboarding to Azure Arc, and a recap of major 2025 SQL Server announcements. Attendees are encouraged to stay updated with the latest Arc agent, provide feedback, and register for SQL Con 2026. The session’s recording is available on YouTube, and registration for future forums and newsletters is open, with the next session scheduled for February 19, 2026.
Azure File Sync: Azure Arc Integration, Additional Regions, and Secure Syncing
Team Blog: Azure Storage
Author: grace_kim
Published: 01/16/2026
Summary: Azure File Sync now integrates with Azure Arc, enabling simplified deployment and management of hybrid file services. The service expands to four new regions—Italy North, New Zealand North, Poland Central, and Spain Central—offering improved regional data residency and performance. Enhanced security is provided through managed identities, eliminating the need for manual credential management. From January 2026, File Sync will incur no per-server cost for Windows Server Software Assurance customers using Azure Arc and File Sync agent v22+. These updates streamline onboarding, ensure secure access, and support scalable, predictable hybrid storage solutions.
Announcing Public Preview of User delegation SAS for Azure Tables, Azure Files, and Azure Queues
Team Blog: Azure Storage
Author: ellievail
Published: 01/16/2026
Summary: Microsoft has announced the public preview of user delegation SAS (UD SAS) for Azure Tables, Azure Files, and Azure Queues in all regions, expanding secure access beyond Azure Blobs. UD SAS ties SAS tokens to user identities via Entra ID and RBAC, enabling more granular, delegated access to storage resources. There’s no additional cost, and it’s available through REST APIs, SDKs, PowerShell, and CLI. Eligible storage accounts can use UD SAS without special settings, and setup involves assigning RBAC roles, obtaining a user delegation key, creating the SAS token, and sharing it securely.
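The established blob flow in Az.Storage looks like the sketch below; the preview extends the same pattern (Entra ID sign-in, data-plane RBAC role, user delegation key, then the SAS) to Files, Tables, and Queues, so check the post for the service-specific cmdlets. Account and container names here are placeholders:

```powershell
# Sign in with an Entra ID identity that holds an appropriate data-plane RBAC role
# (the identity also needs permission to request a user delegation key).
Connect-AzAccount
$ctx = New-AzStorageContext -StorageAccountName 'mystorageaccount' -UseConnectedAccount

# With an OAuth (Entra ID) context, the generated token is a user delegation SAS,
# signed with a user delegation key rather than the account key.
$sas = New-AzStorageContainerSASToken -Context $ctx -Name 'mycontainer' `
         -Permission rl -ExpiryTime (Get-Date).AddHours(4)
$sas
```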
Deploy PostgreSQL on Azure VMs with Azure NetApp Files: Production-Ready Infrastructure as Code
Team Blog: Azure Architecture
Author: GeertVanTeylingen
Published: 01/15/2026
Summary: The article details how deploying PostgreSQL on Azure VMs with Azure NetApp Files is simplified using production-ready Infrastructure as Code (IaC) templates. These templates automate setup, optimize storage performance, and enhance security, eliminating manual configuration and reducing deployment time from hours to minutes. Teams can use Terraform, ARM templates, or PowerShell for flexible, repeatable workflows across development and production environments. Key benefits include consistent environments, enterprise-grade features, rapid provisioning, cost efficiency, and support for AI/ML workloads and database migrations. The solution ensures scalable, secure, and high-performance PostgreSQL deployments on Azure.
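Whichever template flavor you pick up from the article, the PowerShell path boils down to a standard deployment call; the resource group, file names, and region below are placeholders rather than the article’s actual artifacts:

```powershell
# Deploy an IaC template (placeholder names) that provisions the VM, the
# Azure NetApp Files capacity pool and volumes, and the PostgreSQL configuration
# in one repeatable operation.
New-AzResourceGroup -Name 'rg-postgres-anf' -Location 'westeurope'
New-AzResourceGroupDeployment `
    -ResourceGroupName 'rg-postgres-anf' `
    -TemplateFile './main.bicep' `
    -TemplateParameterFile './main.parameters.json'
```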
Unlocking Advanced Data Analytics & AI with Azure NetApp Files object REST API
Team Blog: Azure Architecture
Author: GeertVanTeylingen
Published: 01/15/2026
Summary: The article details how the Azure NetApp Files object REST API enables S3-compatible object access to enterprise file data stored on Azure NetApp Files, eliminating the need for data copying or restructuring. This dual-access approach allows analytics and AI platforms, including Azure Databricks and Microsoft OneLake, to operate directly on NFS/SMB datasets, preserving performance, security, and governance. Integration scenarios, technical implementation, and video guides are provided to help organizations streamline data architectures, minimize data movement, and accelerate real-time insights across analytics and AI workflows.
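Because the endpoint is S3-compatible, any S3 client should be able to list and read the data. Here is a rough sketch with the AWS Tools for PowerShell, where the module choice, endpoint URL, bucket name, and credential variables are all assumptions rather than details from the article:

```powershell
# List objects exposed over the Azure NetApp Files object REST API using a
# generic S3 client. Endpoint, bucket, and credentials are placeholders.
Install-Module AWS.Tools.S3 -Scope CurrentUser
Set-AWSCredential -AccessKey $env:ANF_OBJECT_ACCESS_KEY -SecretKey $env:ANF_OBJECT_SECRET_KEY
Get-S3Object -BucketName 'analytics-dataset' -EndpointUrl 'https://<your-anf-object-endpoint>' |
    Select-Object Key, Size, LastModified
```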
Release of Bicep Azure Verified Modules for Platform Landing Zone
Team Blog: Azure Tools
Author: ztrocinski
Published: 01/20/2026
Summary: Microsoft has released Azure Verified Modules (AVM) for Platform Landing Zones using Bicep, providing a modular, customizable, and officially supported approach to Infrastructure as Code (IaC). The framework features 19 independently managed modules, supports full configuration, and integrates Azure Deployment Stacks for improved resource lifecycle management. Bicep AVM replaces classic ALZ-Bicep, which will be deprecated by 2027. Key benefits include end-to-end customization, faster innovation, independent policy management, and modernized parameter files, making Azure deployments more flexible, maintainable, and aligned with enterprise best practices. Migration guidance will be provided for existing users.
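To see how the Deployment Stacks integration changes day-to-day operations, a subscription-scope stack deployment looks roughly like this; the stack name, template path, and deny/unmanage settings are illustrative, not taken from the AVM documentation:

```powershell
# Deploy the landing zone template as a deployment stack so Azure tracks the
# resources it created and can protect or clean them up as a unit.
New-AzSubscriptionDeploymentStack `
    -Name 'alz-platform' `
    -Location 'westeurope' `
    -TemplateFile './main.bicep' `
    -DenySettingsMode None `
    -ActionOnUnmanage DetachAll
```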
Improving Efficiency through Adaptive CPU Uncore Power Management
Team Blog: Azure Compute
Author: PulkitMisra
Published: 01/21/2026
Summary: The article discusses Microsoft Azure’s adoption of adaptive CPU uncore power management, focusing on Efficiency Latency Control (ELC) co-designed with Intel for Xeon 6 processors. ELC enables dynamic adjustment of uncore frequency based on CPU utilization, improving power efficiency without sacrificing performance. Real-world tests show up to 11% power savings at moderate loads and 1.5× performance-per-watt improvements at low loads. This approach allows Azure to deploy more servers within existing datacenter power constraints, enhancing sustainability and responsiveness to evolving cloud workload demands through hardware–software co-design.
Announcing General Availability of Azure Da/Ea/Fasv7-series VMs based on AMD ‘Turin’ processors
Team Blog: Azure Compute
Author: ArpitaChatterjee
Published: 01/27/2026
Summary: Microsoft has announced the general availability of Azure’s new AMD-based Da/Ea/Fasv7-series Virtual Machines powered by 5th Gen AMD EPYC ‘Turin’ processors. These VMs offer improved CPU performance, scalability, memory capacity, network, and storage throughput, with up to 35% better price-performance than previous AMD v6 VMs. They cater to diverse workloads, including general, memory, and compute-intensive tasks, and feature enhanced security and flexible configurations. Available across multiple Azure regions, these VMs deliver significant workload-specific gains and are praised by customers and technology partners for performance and efficiency improvements.
Determine Defender for Endpoint offboarding state for Linux devices
Team Blog: Core Infrastructure and Security
Author: edgarus71
Published: 01/21/2026
Summary: The article describes a method for quickly determining the Microsoft Defender for Endpoint onboarding or offboarding state on Linux devices. Since the Defender portal can take up to 7 days to update offboarding status, a provided Bash script checks key indicators such as the onboarding file, Defender package installation, and service status. The script outputs whether the device is "ONBOARDED" or "OFFBOARDED," streamlining endpoint management and troubleshooting. It can be deployed at scale via Linux management tools and also run remotely from the Live Response console for onboarded devices.
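The article ships a Bash script; if your fleet tooling standardizes on PowerShell, the same checks translate roughly as below when run with pwsh on the Linux device. The file path and service name are the commonly documented Defender for Endpoint on Linux ones, so verify them against the article’s script:

```powershell
# Rough pwsh-on-Linux rendition of the article's checks: onboarding blob present,
# mdatp package installed, and the mdatp service running.
$onboardFile  = '/etc/opt/microsoft/mdatp/mdatp_onboard.json'
$mdatpPresent = [bool](Get-Command mdatp -ErrorAction SilentlyContinue)
$serviceUp    = $mdatpPresent -and ((& systemctl is-active mdatp) -eq 'active')

if ($mdatpPresent -and $serviceUp -and (Test-Path $onboardFile)) {
    'ONBOARDED'
} else {
    'OFFBOARDED'
}
```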
Conditional Access for Agent Identities in Microsoft Entra
Team Blog: Core Infrastructure and Security
Author: Farooque
Published: 01/27/2026
Summary: Microsoft Entra introduces Agent Identities for AI systems and extends Conditional Access to them, but with limited controls compared to human users. Currently, Conditional Access only allows blocking agent identities and assessing agent risk during token acquisition, without supporting MFA, device compliance, or session controls. This is due to agents’ machine-driven authentication methods. Despite limitations, Conditional Access helps prevent compromised agents, enforce separation of duties, and manage AI sprawl. Agent Blueprints are not governed by Conditional Access. Future enhancements are expected, but for now, CA remains a minimal, identity-focused security layer for AI agents.
Announcing Azure CycleCloud Workspace for Slurm: Version 2025.12.01 Release
Team Blog: Azure High Performance Computing (HPC)
Author: xpillons
Published: 01/07/2026
Summary: The 2025.12.01 release of Azure CycleCloud Workspace for Slurm introduces integrated Prometheus monitoring with managed Grafana dashboards, Entra ID Single Sign-On for secure authentication, support for ARM64 compute nodes, and compatibility with Ubuntu 24.04 and AlmaLinux 9. These enhancements streamline HPC cluster management, improve security, and offer real-time performance insights, empowering technical teams to build scalable and efficient environments. The update simplifies monitoring setup and user access, reinforcing Azure’s commitment to flexible, secure, and innovative HPC solutions for scientific and technical communities.
Scaling physics-based digital twins: Neural Concept on Azure delivers a New Record in Industrial AI
Team Blog: Azure High Performance Computing (HPC)
Author: lmiroslaw
Published: 01/12/2026
Summary: Neural Concept, leveraging Azure HPC infrastructure, achieved record-breaking accuracy and efficiency in automotive aerodynamic predictions using MIT’s DrivAerNet++ dataset. Their geometry-native Geometric Regressor outperformed all previous methods in predicting surface pressure, wall shear stress, velocity fields, and drag coefficients. The workflow transformed 39TB of CFD data into a production-ready model within a week, enabling real-time predictions and significantly shortening design cycles. Customers have realized up to 30% faster development and $20M savings per 100,000 vehicles. This demonstrates the industrial impact of scalable, AI-driven engineering workflows in automotive design.
Intune my Macs: Accelerating macOS proof of concepts with Microsoft Intune
Team Blog: Intune Customer Success
Author: Intune_Support_Team
Published: 01/22/2026
Summary: Intune my Macs is an open-source starter kit from Microsoft that streamlines macOS management proof of concepts using Intune. It deploys 31+ recommended enterprise configurations—including security, compliance, identity, and applications—via a single PowerShell script that operates in dry-run mode by default. The project helps organizations quickly evaluate and implement Intune for macOS, offers practical configuration examples, reduces setup time to minutes, and includes documentation and analysis tools. It’s ideal for learning, testing, and customizing Intune policies for macOS environments, saving significant time and effort.
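One handy follow-up after taking the kit out of dry-run mode (this is not part of the kit itself) is to confirm what landed in your tenant with the Microsoft Graph PowerShell SDK; the scope and cmdlet below are standard Graph ones, not something the article prescribes:

```powershell
# List classic Intune device configuration profiles to spot-check what was created.
# Settings-catalog policies are exposed through a different Graph endpoint.
Connect-MgGraph -Scopes 'DeviceManagementConfiguration.Read.All'
Get-MgDeviceManagementDeviceConfiguration |
    Select-Object DisplayName, LastModifiedDateTime |
    Sort-Object LastModifiedDateTime -Descending
```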
Silicon to Systems: How Microsoft Engineers AI Infrastructure from the Ground Up
Team Blog: Azure Infrastructure
Author: Alistair_Speirs
Published: 01/27/2026
Summary: The article details how Microsoft engineers its AI infrastructure by designing custom silicon, servers, accelerators, and data centers as an integrated system optimized for performance, power efficiency, and cost. Highlighting custom chips like Cobalt 200 and the Maia AI Accelerator platform, Microsoft emphasizes purpose-built hardware, advanced cooling solutions, and end-to-end system integration. This approach ensures reliable, efficient AI workloads at global scale, powering services like Copilot and Teams. The engineering process involves close coordination between hardware and software development, from silicon design to datacenter deployment, prioritizing power and thermal management throughout.
Deep dive into the Maia 200 architecture
Team Blog: Azure Infrastructure
Author: sdighe
Published: 01/26/2026
Summary: Maia 200 is Microsoft’s first custom AI inference accelerator, designed for efficiency and scalability in Azure. It features advanced silicon, memory hierarchy, and data movement architecture, delivering 30% better performance per dollar than previous hardware. Optimized for narrow precision arithmetic and large language models, Maia 200 supports high-throughput, low-latency inference, and integrates seamlessly with Azure’s cloud infrastructure and developer tools. Its innovative interconnect and software stack enable reliable, scalable multi-tenant AI deployments, powering workloads like GPT-5.2 in Microsoft Foundry and 365 Copilot. Maia 200 sets a new standard for cloud-native, cost-effective AI inference.