containers
Announcing Windows Server vNext Preview Build 26484

Hello Windows Server Insiders!

Today we are pleased to release a new build of the next Windows Server Long-Term Servicing Channel (LTSC) Preview that contains both the Desktop Experience and Server Core installation options for Datacenter and Standard editions, Annual Channel for Container Host and Azure Edition (for VM evaluation only). Branding remains Windows Server 2025 in this preview; when reporting issues, please refer to Windows Server vNext preview. If you signed up for Server Flighting, you should receive this new build automatically.

What's New

Windows Server Flighting is here!! If you signed up for Server Flighting, you should receive this new build automatically later today. For more information, see Welcome to Windows Insider flighting on Windows Server - Microsoft Community Hub.

Feedback Hub app is now available for Server Desktop users! The app should automatically update with the latest version, but if it does not, simply Check for updates in the app's settings tab.

Known Issues

Download Windows Server Insider Preview (microsoft.com)

Flighting: The label for this flight may incorrectly reference Windows 11. However, when selected, the package installed is the Windows Server update. Please ignore the label and proceed with installing your flight. This issue will be addressed in a future release.

Available Downloads

Downloads to certain countries may not be available. See Microsoft suspends new sales in Russia - Microsoft On the Issues.

- Windows Server Long-Term Servicing Channel Preview in ISO format in 18 languages, and in VHDX format in English only.
- Windows Server Datacenter Azure Edition Preview in ISO and VHDX format, English only.
- Microsoft Server Languages and Optional Features Preview

Keys: Keys are valid for preview builds only

- Server Standard: MFY9F-XBN2F-TYFMP-CCV49-RMYVH
- Datacenter: 2KNJJ-33Y9H-2GXGX-KMQWH-G6H67
- Azure Edition does not accept a key.

Symbols: Available on the public symbol server – see Using the Microsoft Symbol Server.

Expiration: This Windows Server Preview will expire September 15, 2026.

How to Download

Registered Insiders may navigate directly to the Windows Server Insider Preview download page. If you have not yet registered as an Insider, see GETTING STARTED WITH SERVER on the Windows Insiders for Business portal.

We value your feedback!

The most important part of the release cycle is to hear what's working and what needs to be improved, so your feedback is extremely valued. Please use the new Feedback Hub app for Windows Server if you are running a Desktop version of Server. If you are using a Core edition, or if you are unable to use the Feedback Hub app, you can use your registered Windows 10 or Windows 11 Insider device and use the Feedback Hub application. In the app, choose the Windows Server category and then the appropriate subcategory for your feedback. In the title of the Feedback, please indicate the build number you are providing feedback on as shown below to ensure that your issue is attributed to the right version:

[Server #####] Title of my feedback

See Give Feedback on Windows Server via Feedback Hub for specifics. The Windows Server Insiders space on the Microsoft Tech Communities supports preview builds of the next version of Windows Server. Use the forum to collaborate, share and learn from experts. For versions that have been released to general availability in market, try the Windows Server for IT Pro forum or contact Support for Business.
Diagnostic and Usage Information

Microsoft collects this information over the internet to help keep Windows secure and up to date, troubleshoot problems, and make product improvements. Microsoft server operating systems can be configured to turn diagnostic data off, send Required diagnostic data, or send Optional diagnostic data. During previews, Microsoft asks that you change the default setting to Optional to provide the best automatic feedback and help us improve the final product. Administrators can change the level of information collection through Settings. For details, see http://aka.ms/winserverdata. Also see the Microsoft Privacy Statement.

Terms of Use

This is pre-release software - it is provided for use "as-is" and is not supported in production environments. Users are responsible for installing any updates that may be made available from Windows Update. All pre-release software made available to you via the Windows Server Insider program is governed by the Insider Terms of Use.

Securing Cloud Shell Access to AKS
Azure Cloud Shell is an online shell hosted by Microsoft that provides instant access to a command-line interface, enabling users to manage Azure resources without needing local installations. Cloud Shell comes equipped with popular tools and programming languages, including Azure CLI, PowerShell, and the Kubernetes command-line tool (kubectl).

Using Cloud Shell can provide several benefits for administrators who need to work with AKS, especially if they need quick access from anywhere, or are in locked-down environments:

- Immediate Access: There's no need for local setup; you can start managing Azure resources directly from your web browser.
- Persistent Storage: Cloud Shell offers a file share in Azure, keeping your scripts and files accessible across multiple sessions.
- Pre-Configured Environment: It includes built-in tools, saving time on installation and configuration.

The Challenge of Connecting to AKS

By default, Cloud Shell traffic to AKS originates from a random Microsoft-managed IP address, rather than from within your network. As a result, the AKS API server must be publicly accessible with no IP restrictions, which poses a security risk as anyone on the internet can attempt to reach it. While credentials are still required, restricting access to the API server significantly enhances security. Fortunately, there are ways to lock down the API server while still enabling access via Cloud Shell, which we'll explore in the rest of this article.

Options for Securing Cloud Shell Access to AKS

Several approaches can be taken to secure access to your AKS cluster while using Cloud Shell:

IP Allow Listing

On AKS clusters with a public API server, it is possible to lock down access to the API server with an IP allow list. Each Cloud Shell instance has a randomly selected outbound IP coming from the Azure address space whenever a new session is deployed. This means we cannot allow access to these IPs in advance, but we can apply them once our session is running, and this will work for the duration of our session. Below is an example script that you could run from Cloud Shell to check the current outbound IP address and allow it on your AKS cluster's authorised IP list.

#!/usr/bin/env bash
set -euo pipefail
RG="$1"; AKS="$2"

IP="$(curl -fsS https://api.ipify.org)"
echo "Adding ${IP} to allow list"

CUR="$(az aks show -g "$RG" -n "$AKS" --query "apiServerAccessProfile.authorizedIpRanges" -o tsv | tr '\t' '\n' | awk 'NF')"
NEW="$(printf "%s\n%s/32\n" "$CUR" "$IP" | sort -u | paste -sd, -)"

if az aks update -g "$RG" -n "$AKS" --api-server-authorized-ip-ranges "$NEW" >/dev/null; then
  echo "IP ${IP} applied successfully"
else
  echo "Failed to apply IP ${IP}" >&2
  exit 1
fi

This method comes with some caveats:

- The users running the script would need to be granted permissions to update the authorised IP ranges in AKS - this permission could be used to add any IP address.
- This script will need to be run each time a Cloud Shell session is created, and it can take a few minutes to run.
- The script only deals with adding IPs to the allow list; you would also need to implement a process to remove these IPs on a regular basis to avoid building up a long list of IPs that are no longer needed (one possible approach is sketched after this list).

Adding Cloud Shell IPs in bulk, through Service Tags or similar, will result in your API server being accessible to a much larger range of IP addresses, and should be avoided.
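To keep the allow list from growing indefinitely, one option is to periodically reset it to a known baseline of permanently approved ranges, discarding any session IPs added by the script above. Below is a minimal sketch of that idea; the baseline value is purely illustrative and you would replace it with your own approved ranges.

#!/usr/bin/env bash
set -euo pipefail
RG="$1"; AKS="$2"

# Illustrative baseline of permanently approved ranges - replace with your own
BASELINE="203.0.113.0/24"

# Overwrite the authorised IP ranges, removing any temporary session IPs
az aks update -g "$RG" -n "$AKS" --api-server-authorized-ip-ranges "$BASELINE"

This could be run on a schedule, for example from an automation account or pipeline, so that stale Cloud Shell IPs never linger for long.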
Command Invoke

Azure provides a feature known as Command Invoke that allows you to send commands to be run in AKS, without the need for direct network connectivity. This method executes a container within AKS to run your command and then return the result, and it works well from within Cloud Shell. This is probably the simplest approach that works with a locked-down API server and the quickest to implement. However, there are some downsides:

- Commands take longer to run - when you execute the command, it needs to run a container in AKS, execute the command and then return the result.
- You only get the exit code and text output, and you lose API-level details.
- All commands must be run within the context of the az aks command invoke CLI command, making them longer and more complex to execute than direct access with kubectl.

Command Invoke can be a practical solution for occasional access to AKS, especially when the cost or complexity of alternative methods isn't justified. However, its user experience may fall short if relied upon as a daily tool.

Further Details: Access a private Azure Kubernetes Service (AKS) cluster using the command invoke or Run command feature - Azure Kubernetes Service | Microsoft Learn

Cloud Shell vNet Integration

It is possible to deploy Cloud Shell into a virtual network (vNet), allowing it to route traffic via the vNet and so access resources over the private network or Private Endpoints, or reach public resources through a NAT Gateway or Firewall for a consistent outbound IP address. This approach uses Azure Relay to provide secure access to the vNet from Cloud Shell, without the need to open additional ports. When using Cloud Shell in this way, it does introduce additional cost for the Azure Relay service.

Using this solution will require two different approaches, depending on whether you are using a private or public API server:

- When using a private API server, which is either directly connected to the vNet or configured with Private Endpoints, Cloud Shell will be able to connect directly to the private IP of this service over the vNet.
- When using a public API server, with a public IP, traffic for this will still leave the vNet and go to the internet. The benefit is that we can control the public IP used for the outbound traffic using a NAT Gateway or Azure Firewall. Once this is configured, we can then allow-list this fixed IP on the AKS API server authorised IP ranges.

Further Details: Use Cloud Shell in an Azure virtual network | Microsoft Learn

Azure Bastion

Azure Bastion provides secure and seamless RDP and SSH connectivity to your virtual machines (VMs) directly from the Azure portal, without exposing them to the public internet. Recently, Bastion has also added support for direct connection to AKS with SSH, rather than needing to connect to a jump box and then use kubectl from there. This greatly simplifies connecting to AKS, and also reduces the cost.

Using this approach, we can deploy a Bastion into the vNet hosting AKS. From Cloud Shell we can then use the following command to create a tunnel to AKS:

az aks bastion --name <aks name> --resource-group <resource group name> --bastion <bastion resource ID>

Once this tunnel is connected, we can run kubectl commands without any need for further configuration.
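For example, a session from Cloud Shell might look like the following; the resource names and resource ID are placeholders. Once the tunnel is up, plain kubectl commands work just as they would locally, rather than being wrapped in az aks command invoke:

# Open a tunnel to the AKS API server via Bastion (placeholder names and ID)
az aks bastion --name myAksCluster --resource-group myResourceGroup \
  --bastion "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/bastionHosts/myBastion"

# In the tunnelled session, kubectl behaves as normal
kubectl get nodes
kubectl get pods -n kube-system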
As with Cloud Shell vNet integration, we take two slightly different approaches depending on whether the API server is public or private:

- When using a private API server, which is either directly connected to the vNet or configured with Private Endpoints, Cloud Shell sessions connected via Bastion will be able to connect directly to the private IP of this service over the vNet.
- When using a public API server, with a public IP, traffic for this will still leave the vNet and go to the internet. As with Cloud Shell vNet integration, we can configure this to use a static outbound IP and allow-list this on the API server. Using Bastion, we can still use a NAT Gateway or Azure Firewall to achieve this; however, you can also allow-list the public IP assigned to the Bastion, removing the cost of a NAT Gateway or Azure Firewall if these are not required for anything else.

Connecting to AKS directly from Bastion requires the use of the Standard or Premium SKU of Bastion, which does have additional cost over the Developer or Basic SKU. This feature also requires that you enable native client support.

Further details: Connect to AKS Private Cluster Using Azure Bastion (Preview) - Azure Bastion | Microsoft Learn

Summary of Options

IP Allow Listing: The outbound IP addresses for Cloud Shell instances can be added to the authorised IP list for your API server. As these IPs are dynamically assigned to sessions, they would need to be added at runtime to avoid adding a large list of IPs and reducing security. This can be achieved with a script. While easy to implement, this requires additional time to run the script with every new session, and it increases the overhead of managing the authorised IP list to remove unused IPs.

Command Invoke: Command Invoke allows you to run commands against AKS without requiring direct network access or any setup. This is a convenient option for occasional tasks or troubleshooting, but it's not designed for regular use due to its limited user experience and flexibility.

Cloud Shell vNet Integration: This approach connects Cloud Shell directly to your virtual network, enabling secure access to AKS resources. It's well-suited for environments where Cloud Shell is the primary access method and offers a more secure and consistent experience than default configurations. It does involve additional cost for Azure Relay.

Azure Bastion: Azure Bastion provides a secure tunnel to AKS that can be used from Cloud Shell or by users running the CLI locally. It offers strong security by eliminating public exposure of the API server and supports flexible access for different user scenarios, though it does require setup and may incur additional cost.

Cloud Shell is a great tool for providing pre-configured, easily accessible CLI instances, but in the default configuration it can require some security compromises. With a little work, it is possible to make Cloud Shell work with a more secure configuration that limits how much exposure is needed for your AKS API server.
Announcing Windows Server vNext Preview Build 26470

Hello Windows Server Insiders!

Today we are pleased to release a new build of the next Windows Server Long-Term Servicing Channel (LTSC) Preview that contains both the Desktop Experience and Server Core installation options for Datacenter and Standard editions, Annual Channel for Container Host and Azure Edition (for VM evaluation only). Branding remains Windows Server 2025 in this preview; when reporting issues, please refer to Windows Server vNext preview. If you signed up for Server Flighting, you should receive this new build automatically.

What's New

Rack Level Nested Mirror (RLNM) for S2D Campus Cluster

Rack Level Nested Mirror (RLNM) for S2D Campus Cluster enables customers to meet NIS2 two data room requirements for their factories by providing fast and resilient storage using Storage Spaces Direct (S2D). For S2D Campus Cluster, we recommend using all-flash storage (SSD or NVMe drives), all capacity (no cache drives), and RDMA NICs (iWARP, RoCE, or InfiniBand). Note: Rack fault domains must be created on the cluster in order to use this feature - a new cluster must be created:

#Create a test cluster but do not create storage:
New-Cluster -Name TestCluster -Node Node1, Node2, Node3, Node4 -NoStorage

#Define the fault domains for the cluster - two nodes are in "Room1" and two nodes are in "Room2":
Set-ClusterFaultDomain -XML @"
<Topology><Site Name="Redmond"><Rack Name="Room1"><Node Name="Node1"/><Node Name="Node2"/></Rack><Rack Name="Room2"><Node Name="Node3"/><Node Name="Node4"/></Rack></Site></Topology>
"@

#Add Storage Spaces Direct (S2D) storage to the cluster - note that the Enable-ClusterS2D cmdlet can also be used:
Enable-ClusterStorageSpacesDirect

#Check that the storage pool's FaultDomainAwareness property is set to StorageRack:
Get-StoragePool -FriendlyName <S2DStoragePool> | fl

#Create a four-copy volume on the storage pool:
New-Volume -FriendlyName "FourCopyVolume" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -PhysicalDiskRedundancy 3 -ProvisioningType Fixed -NumberOfDataCopies 4 -NumberOfColumns 3

#Create a four-copy volume on the storage pool, thinly provisioned:
New-Volume -FriendlyName "FourCopyVolume" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -PhysicalDiskRedundancy 3 -ProvisioningType Thin -NumberOfDataCopies 4 -NumberOfColumns 3

Windows Server Flighting is here!! If you signed up for Server Flighting, you should receive this new build automatically later today. For more information, see Welcome to Windows Insider flighting on Windows Server - Microsoft Community Hub.

Feedback Hub app is now available for Server Desktop users! The app should automatically update with the latest version, but if it does not, simply Check for updates in the app's settings tab.

Known Issues

Download Windows Server Insider Preview (microsoft.com)

Flighting: The label for this flight may incorrectly reference Windows 11. However, when selected, the package installed is the Windows Server update. Please ignore the label and proceed with installing your flight. This issue will be addressed in a future release.

Available Downloads

Downloads to certain countries may not be available. See Microsoft suspends new sales in Russia - Microsoft On the Issues.

- Windows Server Long-Term Servicing Channel Preview in ISO format in 18 languages, and in VHDX format in English only.
- Windows Server Datacenter Azure Edition Preview in ISO and VHDX format, English only.
- Microsoft Server Languages and Optional Features Preview

Keys: Keys are valid for preview builds only

- Server Standard: MFY9F-XBN2F-TYFMP-CCV49-RMYVH
- Datacenter: 2KNJJ-33Y9H-2GXGX-KMQWH-G6H67
- Azure Edition does not accept a key.

Symbols: Available on the public symbol server – see Using the Microsoft Symbol Server.

Expiration: This Windows Server Preview will expire September 15, 2026.

How to Download

Registered Insiders may navigate directly to the Windows Server Insider Preview download page. If you have not yet registered as an Insider, see GETTING STARTED WITH SERVER on the Windows Insiders for Business portal.

We value your feedback!

The most important part of the release cycle is to hear what's working and what needs to be improved, so your feedback is extremely valued. Please use the new Feedback Hub app for Windows Server if you are running a Desktop version of Server. If you are using a Core edition, or if you are unable to use the Feedback Hub app, you can use your registered Windows 10 or Windows 11 Insider device and use the Feedback Hub application. In the app, choose the Windows Server category and then the appropriate subcategory for your feedback. In the title of the Feedback, please indicate the build number you are providing feedback on as shown below to ensure that your issue is attributed to the right version:

[Server #####] Title of my feedback

See Give Feedback on Windows Server via Feedback Hub for specifics. The Windows Server Insiders space on the Microsoft Tech Communities supports preview builds of the next version of Windows Server. Use the forum to collaborate, share and learn from experts. For versions that have been released to general availability in market, try the Windows Server for IT Pro forum or contact Support for Business.

Diagnostic and Usage Information

Microsoft collects this information over the internet to help keep Windows secure and up to date, troubleshoot problems, and make product improvements. Microsoft server operating systems can be configured to turn diagnostic data off, send Required diagnostic data, or send Optional diagnostic data. During previews, Microsoft asks that you change the default setting to Optional to provide the best automatic feedback and help us improve the final product. Administrators can change the level of information collection through Settings. For details, see http://aka.ms/winserverdata. Also see the Microsoft Privacy Statement.

Terms of Use

This is pre-release software - it is provided for use "as-is" and is not supported in production environments. Users are responsible for installing any updates that may be made available from Windows Update. All pre-release software made available to you via the Windows Server Insider program is governed by the Insider Terms of Use.

Private Pod Subnets in AKS Without Overlay Networking
When deploying AKS clusters, a common concern is the amount of IP address space required. If you are deploying your AKS cluster into your corporate network, the size of the IP address space you can obtain may be quite small, which can cause problems with the number of pods you are able to deploy.

The simplest and most common solution to this is to use an overlay network, which is fully supported in AKS. In an overlay network, pods are deployed to a private, non-routed address space that can be as large as you want. Translation between the routable and non-routed networks is handled by AKS. For most people, this is the best option for dealing with IP addressing in AKS, and there is no need to complicate things further.

However, there are some limitations with overlay networking, primarily that you cannot address the pods directly from the rest of the network - all inbound communication must go via services. There are also some advanced features that are not supported, such as Virtual Nodes. If you are in a scenario where you need some of these features, and overlay networking will not work for you, it is possible to use the more traditional vNet-based deployment method, with some tweaks.

Azure CNI Pod Subnet

The alternative to using the Azure CNI Overlay is to use the Azure CNI Pod Subnet. In this setup, you deploy a vNet with two subnets - one for your nodes and one for your pods - and you are in control of the IP address configuration for these subnets. To conserve IP addresses, you can create your pod subnet using an IP range that is not routable to the rest of your corporate network, allowing you to make it as large as you like. The node subnet remains routable from your corporate network. In this setup, if you want to talk to the pods directly, you would need to do so from within the AKS vNet or peer another network to your pod subnet. You would not be able to address these pods from the rest of your corporate network, even though you are not using overlay networking.
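As a rough illustration, a cluster using this model might be created along the following lines. The resource names and subnet IDs are placeholders, the node and pod subnets are assumed to already exist in the vNet, and you should check the current AKS documentation for the exact parameters.

# Create a cluster in Azure CNI pod subnet mode, using separate pre-created
# node and pod subnets (all names and IDs below are placeholders)
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/aks-vnet/subnets/node-subnet" \
  --pod-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/aks-vnet/subnets/pod-subnet"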
The Routing Problem

When you deploy a setup using Azure CNI Pod Subnet, all the subnets in the vNet are configured with routes and can talk to each other. You may wish to connect this vNet to other Azure vNets via peering, or to your corporate network using ExpressRoute or VPN. However, you will encounter an issue if your pods try to connect to resources outside of your AKS vNet but inside your corporate network, or in any peered Azure vNets (which are not peered to this isolated subnet). In this scenario, the pods will route their traffic directly out of the vNet using their private IP address. This private IP is not a valid, routable IP, so the resources on the other network will not be able to reply, and the request will fail.

IP Masquerading

To resolve this issue, we need a way to have traffic going to other networks present a private IP that is routable within the network. This can be achieved through several methods. One method would be to introduce a separate solution for routing this traffic, such as Azure Firewall or another Network Virtual Appliance (NVA). Traffic is routable between the pod and node subnets, so the pod can send its requests to the firewall, and the requests to the remote network then come from the IP of the firewall, which is routable. This solution will work, but it does require another resource to be deployed, with additional costs. If you are already using an Azure Firewall for outbound traffic, then this may be something you could use, but we are looking for a simpler and more cost-effective solution.

Rather than implementing another device to present a routable IP, we can use the nodes of our AKS cluster. The AKS nodes are in the routable node subnet, so ideally we want outbound traffic from the pods to use the node IP when it needs to leave the vNet for the rest of the private network. There are several different ways you could achieve this goal. You could look at using Egress Gateway services through tools like Istio, or you could look at making changes to the iptables configuration on the nodes using a DaemonSet.

In this article, we will focus on using a tool called ip-masq-agent-v2. This tool provides a means for traffic to "masquerade" as coming from the IP address of the node it is running on, and have the node perform Network Address Translation (NAT). If you deploy a cluster with an overlay network, this tool is already deployed and configured on your cluster; it is the tool that Microsoft uses to configure NAT for traffic leaving the overlay network. When using pod subnet clusters, this tool is not deployed, but you can deploy it yourself to provide the same functionality. Under the hood, this tool is making changes to iptables using a DaemonSet that runs on each node, so you could replicate this behaviour yourself, but it provides a simpler process that has been tested with AKS through overlay networking. The Microsoft v2 version is based on the original Kubernetes contribution, aiming to solve more specific networking cases, allow for more configuration options, and improve observability.

Deploy ip-masq-agent-v2

There are two parts to deploying the agent. First, we deploy the agent itself, which runs as a DaemonSet, spawning a pod on each node in the cluster. This is important, as each node needs to have its iptables altered by the tool, and it needs to run any time a new node is created.

To deploy the agent, we need to create the DaemonSet in our cluster. The ip-masq-agent-v2 repo includes several examples, including an example of deploying the DaemonSet. The example is slightly out of date on the version of ip-masq-agent-v2 to use, so make sure you update this to the latest version. If you would prefer to build and manage your own containers for this, the repository also includes a Dockerfile to allow you to do this.

Below is an example deployment using the Microsoft-hosted images. It references the ConfigMap we will create in the next step, and it is important that the same name is used as is referenced here.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ip-masq-agent
  namespace: kube-system
  labels:
    component: ip-masq-agent
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: ip-masq-agent
  template:
    metadata:
      labels:
        k8s-app: ip-masq-agent
    spec:
      hostNetwork: true
      containers:
        - name: ip-masq-agent
          image: mcr.microsoft.com/aks/ip-masq-agent-v2:v0.1.15
          imagePullPolicy: Always
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          volumeMounts:
            - name: ip-masq-agent-volume
              mountPath: /etc/config
              readOnly: true
      volumes:
        - name: ip-masq-agent-volume
          projected:
            sources:
              - configMap:
                  name: ip-masq-agent-config
                  optional: true
                  items:
                    - key: ip-masq-agent
                      path: ip-masq-agent
                      mode: 0444

Once you deploy this DaemonSet, you should see instances of the agent running on each node in your cluster.

Create Configuration

Next, we need to create a ConfigMap that contains any configuration data we need to vary from the defaults deployed with the agent.
The main thing we need to configure is the IP ranges that will be masqueraded as the node IP. The default deployment of ip-masq-agent-v2 disables masquerading for all three private IP ranges specified by RFC 1918 (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16). In our example above, it will therefore not masquerade traffic to the 10.1.64.0/18 subnet in the app network, and our routing problem will still exist. We need to amend the configuration so that these private IPs are masqueraded. However, we do want to avoid masquerading within our AKS network, as this traffic needs to come from the pod IPs. Therefore, we need to ensure we do not masquerade traffic going from the pods to:

- The pod subnet
- The node subnet
- The AKS service CIDR range, for internal networking in AKS

To do this, we need to add these IP ranges to the nonMasqueradeCIDRs array in the configuration. This is the list of IP addresses which, when traffic is sent to them, will continue to come from the pod IP and not the node IP. In addition, the configuration also allows us to define whether we masquerade the link-local IPs, which we do not want to do. Below is an example ConfigMap that works for the setup detailed above.

apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent-config
  namespace: kube-system
  labels:
    component: ip-masq-agent
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  ip-masq-agent: |-
    nonMasqueradeCIDRs:
      - 10.0.0.0/16 # Entire VNet and service CIDR
      - 192.168.0.0/16
    masqLinkLocal: false
    masqLinkLocalIPv6: false

There are a couple of things to be aware of here:

- The node subnet and AKS service CIDR are two contiguous address spaces in my setup, so both are covered by 10.0.0.0/16. I could have called them out separately.
- 192.168.0.0/16 covers the whole of my pod subnet.
- I do not enable masquerading on link-local.
- The ConfigMap needs to be created in the same namespace as the DaemonSet.
- The ConfigMap name needs to match what is used in the mount in the DaemonSet manifest.

Once you apply this configuration, the agent will pick up the configuration changes within around 60 seconds. Once these are applied, you should find that traffic going to private addresses outside of the list of nonMasqueradeCIDRs will now present from the node IP.

Summary

If you're deploying AKS into an IP-constrained environment, overlay networking is generally the best and simplest option. It allows you to use non-routed pod IP ranges, conserve address space, and avoid complex routing considerations without additional configuration. If you can use it, then this should be your default approach.

However, there are cases where overlay networking will not meet your needs. You might require features only available with pod subnet mode, such as the ability to send traffic directly to pods and nodes without tunnelling, or support for features like Virtual Nodes. In these situations, you can still keep your pod subnet private and non-routed by carefully controlling IP masquerading. With ip-masq-agent-v2, you can configure which destinations should (and should not) be NAT'd, ensuring isolated subnets while maintaining the functionality you need.

Simplifying Outbound Connectivity Troubleshooting in AKS with Connectivity Analysis (Preview)
Announcing the Connectivity Analysis feature for AKS, now available in Public Preview through the AKS portal. You can use the Connectivity Analysis (Preview) feature to quickly verify whether outbound traffic from your AKS nodes is being blocked by Azure network resources such as Azure Firewall, Network Security Groups (NSGs), route tables, and more.
Announcing Windows Server vNext Preview Build 26461

Hello Windows Server Insiders!

Today we are pleased to release a new build of the next Windows Server Long-Term Servicing Channel (LTSC) Preview that contains both the Desktop Experience and Server Core installation options for Datacenter and Standard editions, Annual Channel for Container Host and Azure Edition (for VM evaluation only). Branding remains Windows Server 2025 in this preview; when reporting issues, please refer to Windows Server vNext preview. If you signed up for Server Flighting, you should receive this new build automatically.

What's New

Rack Level Nested Mirror (RLNM) for S2D Campus Cluster

Rack Level Nested Mirror (RLNM) for S2D Campus Cluster enables customers to meet NIS2 two data room requirements for their factories by providing fast and resilient storage using Storage Spaces Direct (S2D). For S2D Campus Cluster, we recommend using all-flash storage (SSD or NVMe drives), all capacity (no cache drives), and RDMA NICs (iWARP, RoCE, or InfiniBand). Note: Rack fault domains must be created on the cluster in order to use this feature - a new cluster must be created:

#Create a test cluster but do not create storage:
New-Cluster -Name TestCluster -Node Node1, Node2, Node3, Node4 -NoStorage

#Define the fault domains for the cluster - two nodes are in "Room1" and two nodes are in "Room2":
Set-ClusterFaultDomain -XML @"
<Topology><Site Name="Redmond"><Rack Name="Room1"><Node Name="Node1"/><Node Name="Node2"/></Rack><Rack Name="Room2"><Node Name="Node3"/><Node Name="Node4"/></Rack></Site></Topology>
"@

#Add Storage Spaces Direct (S2D) storage to the cluster - note that the Enable-ClusterS2D cmdlet can also be used:
Enable-ClusterStorageSpacesDirect

#Check that the storage pool's FaultDomainAwareness property is set to StorageRack:
Get-StoragePool -FriendlyName <S2DStoragePool> | fl

#Create a four-copy volume on the storage pool:
New-Volume -FriendlyName "FourCopyVolume" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -PhysicalDiskRedundancy 3 -ProvisioningType Fixed -NumberOfDataCopies 4 -NumberOfColumns 3

#Create a four-copy volume on the storage pool, thinly provisioned:
New-Volume -FriendlyName "FourCopyVolume" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -PhysicalDiskRedundancy 3 -ProvisioningType Thin -NumberOfDataCopies 4 -NumberOfColumns 3

Windows Server Flighting is here!! If you signed up for Server Flighting, you should receive this new build automatically later today. For more information, see Welcome to Windows Insider flighting on Windows Server - Microsoft Community Hub.

Feedback Hub app is now available for Server Desktop users! The app should automatically update with the latest version, but if it does not, simply Check for updates in the app's settings tab.

Known Issues

Download Windows Server Insider Preview (microsoft.com)

Flighting: The label for this flight may incorrectly reference Windows 11. However, when selected, the package installed is the Windows Server update. Please ignore the label and proceed with installing your flight. This issue will be addressed in a future release.

Available Downloads

Downloads to certain countries may not be available. See Microsoft suspends new sales in Russia - Microsoft On the Issues.

- Windows Server Long-Term Servicing Channel Preview in ISO format in 18 languages, and in VHDX format in English only.
- Windows Server Datacenter Azure Edition Preview in ISO and VHDX format, English only.
- Microsoft Server Languages and Optional Features Preview

Keys: Keys are valid for preview builds only

- Server Standard: MFY9F-XBN2F-TYFMP-CCV49-RMYVH
- Datacenter: 2KNJJ-33Y9H-2GXGX-KMQWH-G6H67
- Azure Edition does not accept a key.

Symbols: Available on the public symbol server – see Using the Microsoft Symbol Server.

Expiration: This Windows Server Preview will expire September 15, 2026.

How to Download

Registered Insiders may navigate directly to the Windows Server Insider Preview download page. If you have not yet registered as an Insider, see GETTING STARTED WITH SERVER on the Windows Insiders for Business portal.

We value your feedback!

The most important part of the release cycle is to hear what's working and what needs to be improved, so your feedback is extremely valued. Please use the new Feedback Hub app for Windows Server if you are running a Desktop version of Server. If you are using a Core edition, or if you are unable to use the Feedback Hub app, you can use your registered Windows 10 or Windows 11 Insider device and use the Feedback Hub application. In the app, choose the Windows Server category and then the appropriate subcategory for your feedback. In the title of the Feedback, please indicate the build number you are providing feedback on as shown below to ensure that your issue is attributed to the right version:

[Server #####] Title of my feedback

See Give Feedback on Windows Server via Feedback Hub for specifics. The Windows Server Insiders space on the Microsoft Tech Communities supports preview builds of the next version of Windows Server. Use the forum to collaborate, share and learn from experts. For versions that have been released to general availability in market, try the Windows Server for IT Pro forum or contact Support for Business.

Diagnostic and Usage Information

Microsoft collects this information over the internet to help keep Windows secure and up to date, troubleshoot problems, and make product improvements. Microsoft server operating systems can be configured to turn diagnostic data off, send Required diagnostic data, or send Optional diagnostic data. During previews, Microsoft asks that you change the default setting to Optional to provide the best automatic feedback and help us improve the final product. Administrators can change the level of information collection through Settings. For details, see http://aka.ms/winserverdata. Also see the Microsoft Privacy Statement.

Terms of Use

This is pre-release software - it is provided for use "as-is" and is not supported in production environments. Users are responsible for installing any updates that may be made available from Windows Update. All pre-release software made available to you via the Windows Server Insider program is governed by the Insider Terms of Use.

Announcing Native Azure Functions Support in Azure Container Apps
Azure Container Apps is introducing a new, streamlined method for running Azure Functions directly in Azure Container Apps (ACA). This integration allows you to leverage the full features and capabilities of Azure Container Apps while benefiting from the simplicity of auto-scaling provided by Azure Functions. With the new native hosting model, you can deploy Azure Functions directly onto Azure Container Apps using the Microsoft.App resource provider by setting the "kind=functionapp" property on the container app resource. You can deploy Azure Functions using ARM templates, Bicep, Azure CLI, and the Azure portal. Get started today and explore the complete feature set of Azure Container Apps, including multi-revision management, easy authentication, metrics and alerting, health probes, and much more. To learn more, visit https://aka.ms/fnonacav2.
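As a rough sketch of what this can look like from the Azure CLI - the resource names, environment, and image below are placeholders, and the --kind parameter reflects the kind=functionapp model described above, so verify the exact flags against the linked documentation:

# Deploy a Functions container image as a container app with the Functions kind
# (names, environment, and image are placeholders; confirm flag names in the current docs)
az containerapp create \
  --name my-function-app \
  --resource-group myResourceGroup \
  --environment my-aca-environment \
  --image myregistry.azurecr.io/my-function-image:latest \
  --kind functionapp \
  --ingress external \
  --target-port 80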
Just built my first microservice API, and it's hacky. Any examples to follow?

I just finished building my first microservices-based API, and it's ugly. Are there any online source examples that include two (or more) API microservices communicating with each other over a message bus, that also include authentication using both a local authentication database and/or OpenID Connect?

Customising Node-Level Configuration in AKS
When you deploy AKS, you deploy the control plane, which is managed by Microsoft, and one or more node pools, which contain the worker nodes used to run your Kubernetes workloads. These node pools are usually deployed as Virtual Machine Scale Sets. The scale sets are visible in your subscription, but generally you would not make changes to them directly, as they are managed by AKS and all of their configuration and management is done through AKS. However, there are some scenarios where you do need to make changes to the underlying node configuration to be able to handle the workloads you need to run. Whilst you can make some changes to these nodes, you need to make sure you do it in a supported manner that is applied consistently to all your nodes.

An example of this requirement is a recent issue I saw with deploying Elasticsearch onto AKS. Let's take a look at this issue and see how it can be resolved, both for this specific case and for any other scenario where you need to make changes on the nodes.

The Issue

For the rest of this article, we will use a specific scenario to illustrate the requirement to make node changes, but this could be applied to any requirement to make changes to the nodes. Elasticsearch has a requirement to increase the limit on mmap count, due to the way it uses "mmapfs" for storing indices. The docs state you can resolve this by running:

sysctl -w vm.max_map_count=262144

This command needs to be run on the machine that is running the container, not inside the container. In our case, this is the AKS nodes. Whilst this is fairly easy to do on my laptop, it isn't really feasible to run manually on all of our AKS nodes, especially because nodes could be destroyed and recreated during updates or downtime. We need to make the changes consistently on all nodes, and automate the process so it is applied to all nodes, even new ones.

Changes to Avoid

Whilst we want to make changes to our nodes, we want to do so in a way that doesn't result in our nodes being in an unsupported state. One key example of this is making changes directly to the scale set. Using the IaaS/ARM APIs to make changes directly to the scale set, outside of Kubernetes, will result in your nodes being unsupported and should be avoided. This includes making changes to the CustomScriptExtension configured on the scale set.

Similarly, we want to avoid SSH'ing into the node's operating system and making the changes manually. Whilst this will apply the change you want, as soon as that node is destroyed and recreated, your change will be gone. Similarly, if you want to use the node autoscaler, any new nodes won't have your changes.

Solutions

There are a few different options that we could use to solve this issue and customise our node configuration. Let's take a look at them in order of ease of use.

1. Customised Node Configuration

The simplest method to customise node configuration is through the use of node configuration files that can be applied at the creation of a cluster or a node pool. Using these configuration files you are able to customise a specific set of configuration settings for both the node operating system and the Kubelet configuration.
Below is an example of a Linux OS configuration:

{
  "transparentHugePageEnabled": "madvise",
  "transparentHugePageDefrag": "defer+madvise",
  "swapFileSizeMB": 1500,
  "sysctls": {
    "netCoreSomaxconn": 163849,
    "netIpv4TcpTwReuse": true,
    "netIpv4IpLocalPortRange": "32000 60000"
  }
}

We would then apply this at the time of creating a cluster or node pool by providing the file to the CLI command.

For example, creating a cluster:

az aks create --name myAKSCluster --resource-group myResourceGroup --linux-os-config ./linuxosconfig.json

Creating a node pool:

az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json

There are lots of different configuration settings that can be changed for both the OS and the Kubelet, for both Linux and Windows nodes. The full list can be found here. For our scenario, we want to change the vm.max_map_count setting, which is available as one of the configuration options in the virtual memory section. Our OS configuration would look like this:

{
  "vmMaxMapCount": 262144
}

Note that the value used in the JSON is a camel-case version of the property name, so vm.max_map_count becomes vmMaxMapCount.

2. Daemonsets

Another way we can make these changes using a Kubernetes-native method is through the use of Daemonsets. As you may know, Daemonsets provide a mechanism to run a pod on every node in your cluster. We can use this Daemonset to execute a script that sets the appropriate settings on the nodes when run, and the Daemonset will ensure that this is done on every node, including any new nodes that get created by the autoscaler or during updates.

To be able to make changes to the node, we will need to run the Daemonset with some elevated privileges, so you may want to consider whether the node customisation file option, listed above, works for your scenario before using this option. For this to work, we need two things: a container to run, and a Daemonset configuration.

Container

All our Daemonset does is run a container; it's the container that defines what is done. There are two options that we can use for our scenario:

- Create our own container that has the script to run defined in the Dockerfile.
- Use a pre-built container, like BusyBox, which accepts parameters defining what commands to run.

The first option is more secure, as the container is fixed to running only the command you want, and any malicious changes would require someone to re-build and publish a new image and update the Daemonset configuration to run it. The image we create is very basic; it just needs to have the tools your script requires installed, and then run your script. The only caveat is that Daemonsets need to have their restart policy set to Always, so we can't just run our script and stop, as the container would simply be restarted. To avoid this, we can have our container sleep once it is done. If the node is ever restarted or replaced, the container will still run again.
Here is the most simple Dockerfile we can use to solve our Elasticsearch issue:

FROM alpine
CMD sysctl -w vm.max_map_count=262144; sleep 365d

Daemonset Configuration

To run our Daemonset, we need to configure our YAML to do the following:

- Run our custom container, or use a pre-built container with the right parameters
- Grant the Daemonset the required privileges to be able to make changes to the node
- Set the restart policy to Always

If we want, we can also restrict our Daemonset to only run on nodes that we know are going to run this workload. For example, we can restrict this to only run on a specific node pool in AKS using a node selector.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-config
spec:
  selector:
    matchLabels:
      name: node-config
  template:
    metadata:
      labels:
        name: node-config
    spec:
      containers:
        - name: node-config
          image: scdemo.azurecr.io/node-config:1
          securityContext:
            privileged: true
      restartPolicy: Always
      nodeSelector:
        agentpool: elasticsearch

Once we deploy this to our cluster, the Daemonset will run and make the changes we require.

When using a Daemonset or init container approach, pay special attention to security. This container will run in privileged mode, which gives it a high level of permissions, not just the ability to change the specific configuration setting you are interested in. Ensure that access to these containers and their configuration is restricted. Consider using init containers if possible, as their runtime is more limited.

3. Init Containers

This is a similar approach to Daemonsets, but instead of running on every node, we use an init container in our application so it only runs on the nodes where our application is present. An init container allows us to specify that a specific container must run, and complete successfully, prior to our main application being run. We can take the container that runs our custom script, as with the Daemonset option, and run it as an init container instead.

The benefit of this approach is that the init container only runs once when the application is started, and then stops. This avoids needing the sleep command to keep the process running at all times. The downside is that using an init container requires editing the YAML for the application you are deploying, which may be difficult or impossible if you are using a third-party application. Some third-party applications have Helm charts or similar configured that do allow passing in custom init containers, but many do not. If you are creating your own applications, then this is easier.

Below is an example using this approach. In this example we use a pre-built container (BusyBox) for running our script, rather than a custom container. Either approach can be used.

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  labels:
    app.kubernetes.io/name: MyApp
spec:
  containers:
    - name: main-app
      image: scdemo.azurecr.io/main-app:1
  initContainers:
    - name: init-sysctl
      image: busybox
      command:
        - sysctl
        - -w
        - vm.max_map_count=262144
      imagePullPolicy: IfNotPresent
      securityContext:
        privileged: true

Conclusions

Making changes to underlying AKS nodes is something that most people won't need to do, most of the time. However, there are some scenarios you may hit where this is important. AKS comes with functionality to do this in a controlled and supported manner via the use of configuration files. This approach is recommended if the configuration you need to change is supported, as it is simpler to implement, doesn't require creating custom containers, and is the most secure approach.
If the change you need is not supported then you still have a way to deal with this via Daemonsets or init containers, but special attention should be paid to security when using this solution.
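Whichever approach you use, it is worth confirming that the setting has actually taken effect on the nodes. One way to spot-check this is with an ephemeral debug pod: because vm.max_map_count is a kernel-wide setting, reading it from any pod running on a node reflects that node's value. The node name below is a placeholder.

# List the nodes, then check the setting on one of them (node name is a placeholder)
kubectl get nodes
kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=busybox -- cat /proc/sys/vm/max_map_count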