containers
Announcing Windows Server vNext Preview Build 26433

Hello Windows Server Insiders! Today we are pleased to release a new build of the next Windows Server Long-Term Servicing Channel (LTSC) Preview. It contains both the Desktop Experience and Server Core installation options for Datacenter and Standard editions, plus the Annual Channel for Container Host and Azure Edition (for VM evaluation only). Branding remains Windows Server 2025 in this preview; when reporting issues, please refer to Windows Server vNext preview. If you signed up for Server Flighting, you should receive this new build automatically.

What's New

Windows Admin Center (WAC)
Windows Server preview customers can download and install Windows Admin Center right from the Windows Server desktop using the in-OS app, which handles the download and guides you through the installation process. Note: you must be running a desktop version of Windows Server Datacenter or Standard preview to access this feature.

Windows Server Flighting is here!
If you signed up for Server Flighting, you should receive this new build automatically later today. For more information, see Welcome to Windows Insider flighting on Windows Server - Microsoft Community Hub.

Feedback Hub app is now available for Server Desktop users!
The app should automatically update to the latest version, but if it does not, simply Check for updates in the app's settings tab.

Known Issues
- Download Windows Server Insider Preview (microsoft.com)
- Flighting: The label for this flight may incorrectly reference Windows 11. However, when selected, the package installed is the Windows Server update. Please ignore the label and proceed with installing your flight. This issue will be addressed in a future release.

Available Downloads
Downloads to certain countries may not be available. See Microsoft suspends new sales in Russia - Microsoft On the Issues.
- Windows Server Long-Term Servicing Channel Preview in ISO format in 18 languages, and in VHDX format in English only.
- Windows Server Datacenter Azure Edition Preview in ISO and VHDX format, English only.
- Microsoft Server Languages and Optional Features Preview

Keys (valid for preview builds only):
- Server Standard: MFY9F-XBN2F-TYFMP-CCV49-RMYVH
- Datacenter: 2KNJJ-33Y9H-2GXGX-KMQWH-G6H67
- Azure Edition does not accept a key

Symbols: Available on the public symbol server – see Using the Microsoft Symbol Server.
Expiration: This Windows Server Preview will expire September 15, 2025.

How to Download
Registered Insiders may navigate directly to the Windows Server Insider Preview download page. If you have not yet registered as an Insider, see GETTING STARTED WITH SERVER on the Windows Insiders for Business portal.

We value your feedback!
The most important part of the release cycle is hearing what's working and what needs to be improved, so your feedback is extremely valuable. Please use the new Feedback Hub app for Windows Server if you are running a Desktop version of Server. If you are using a Core edition, or if you are unable to use the Feedback Hub app, you can use a registered Windows 10 or Windows 11 Insider device and the Feedback Hub application there. In the app, choose the Windows Server category and then the appropriate subcategory for your feedback. In the title of the feedback, please indicate the build number you are providing feedback on, as shown below, to ensure your issue is attributed to the right version:
[Server #####] Title of my feedback
See Give Feedback on Windows Server via Feedback Hub for specifics.
The Windows Server Insiders space on the Microsoft Tech Community supports preview builds of the next version of Windows Server. Use the forum to collaborate, share, and learn from experts. For versions that have been released to general availability, try the Windows Server for IT Pro forum or contact Support for Business.

Diagnostic and Usage Information
Microsoft collects diagnostic information over the internet to help keep Windows secure and up to date, troubleshoot problems, and make product improvements. Microsoft server operating systems can be configured to turn diagnostic data off, send Required diagnostic data, or send Optional diagnostic data. During previews, Microsoft asks that you change the default setting to Optional to provide the best automatic feedback and help us improve the final product. Administrators can change the level of information collection through Settings. For details, see http://aka.ms/winserverdata. Also see the Microsoft Privacy Statement.

Terms of Use
This is pre-release software. It is provided for use "as-is" and is not supported in production environments. Users are responsible for installing any updates that may be made available from Windows Update. All pre-release software made available to you via the Windows Server Insider program is governed by the Insider Terms of Use.


Running Self-hosted APIM Gateways in Azure Container Apps with VNet Integration
With Azure Container Apps we can run containerized applications completely serverless. The platform handles all the orchestration needed to scale dynamically based on the triggers you define (via KEDA scalers) and can even scale to zero! I have been working a lot with customers recently on Azure API Management (APIM), and a recurring topic is how to manage internal APIs without exposing a public IP while staying compliant from a security standpoint. That leads to the self-hosted gateway: a managed gateway deployed within your own network, giving you a unified way to manage your APIs while keeping all API communication in-network. The self-hosted gateway is deployed as a container, and in this article we will walk through how to provision one on Azure Container Apps. I assume an Azure APIM instance is already provisioned and will dive straight into creating and configuring the self-hosted gateway on ACA.

Prerequisites
As mentioned, ensure you have an existing Azure API Management instance. We will use the Azure CLI to configure the container apps in this walkthrough. To run the commands, you need the Azure CLI installed on your local machine and the necessary permissions in your Azure subscription.

Retrieve Gateway Deployment Settings from APIM
First, we need to get the details for our gateway from APIM. Head over to the Azure portal and navigate to your API Management instance.
- In the left menu, under Deployment and infrastructure, select Gateways.
- Here, you'll find the gateway resource you provisioned. Click on it and go to Deployment.
- Copy the Gateway Token and Configuration endpoint values. (These tell the self-hosted gateway which APIM instance and gateway to register under.)

Create a Container Apps Environment
Next, we need to create a Container Apps environment. This is where we will create the container app that hosts our self-hosted gateway.

Create the VNet and subnet for the ACA environment
Because we want access to our internal APIs, the container apps environment needs to be created in a VNet with a subnet available. Note: if you are using workload profiles (we are in this walkthrough), the subnet must be delegated to Microsoft.App/environments.

# Create the vnet
az network vnet create --resource-group rgContosoDemo \
  --name vnet-contoso-demo \
  --location centralus \
  --address-prefix 10.0.0.0/16

# Create the subnet
az network vnet subnet create --resource-group rgContosoDemo \
  --vnet-name vnet-contoso-demo \
  --name infrastructure-subnet \
  --address-prefixes 10.0.0.0/23

# If you are using a workload profile (we are for this walkthrough), delegate the subnet
az network vnet subnet update --resource-group rgContosoDemo \
  --vnet-name vnet-contoso-demo \
  --name infrastructure-subnet \
  --delegations Microsoft.App/environments

Create the Container Apps environment in our VNet

# Pass the delegated subnet's resource ID so the environment is VNet-integrated
az containerapp env create --name aca-contoso-env \
  --resource-group rgContosoDemo \
  --location centralus \
  --enable-workload-profiles \
  --infrastructure-subnet-resource-id $(az network vnet subnet show \
      --resource-group rgContosoDemo --vnet-name vnet-contoso-demo \
      --name infrastructure-subnet --query id -o tsv)

Deploy the Self-Hosted Gateway to a Container App
Creating the environment takes about 10 minutes. Once complete, the fun part begins: deploying the self-hosted gateway container image to a container app.
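Before moving on, it's worth confirming the environment actually picked up the VNet configuration. A minimal check, assuming the resource names used above and the property path the managed environment resource exposes today:

# Should print the resource ID of the delegated subnet
az containerapp env show --name aca-contoso-env \
  --resource-group rgContosoDemo \
  --query properties.vnetConfiguration.infrastructureSubnetId -o tsv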
Using the Azure CLI, create the container app:

az containerapp create --name aca-apim-demo-gateway \
  --resource-group rgContosoDemo \
  --environment aca-contoso-env \
  --workload-profile-name "Consumption" \
  --image "mcr.microsoft.com/azure-api-management/gateway:2.5.0" \
  --target-port 8080 \
  --ingress 'external' \
  --env-vars "config.service.endpoint=<YOUR_ENDPOINT>" "config.service.auth=<YOUR_TOKEN>" "net.server.http.forwarded.proto.enabled=true"

Replace <YOUR_ENDPOINT> and <YOUR_TOKEN> with the values you copied earlier.

Configure ingress for the container app:

az containerapp ingress enable --name aca-apim-demo-gateway \
  --resource-group rgContosoDemo \
  --type external \
  --target-port 8080

This command ensures that your container app is accessible externally.

Verify the Deployment
Finally, let's make sure everything is running smoothly. Navigate to the Azure portal and go to your Container Apps environment. Select the container app you created (aca-apim-demo-gateway) and navigate to Replicas to verify that it's running (if you prefer the command line, see the CLI sketch at the end of this post). You can also use the status endpoint of the self-hosted gateway to confirm it is running:

curl -i https://aca-apim-demo-gateway.sillytreats-abcd1234.centralus.azurecontainerapps.io/status-012345678990abcdef

Verify Gateway Health in APIM
You can also check in the Azure portal that the gateway shows up as healthy in APIM. Navigate to Deployment and infrastructure, select Gateways, then choose your gateway. The Overview page shows the status of your gateway deployment.

And that's it! You've successfully deployed an Azure APIM self-hosted gateway in Azure Container Apps with VNet integration, allowing access to your internal APIs with easy management from the APIM portal in Azure. This setup lets you manage your APIs efficiently while leveraging the scalability and flexibility of Azure Container Apps. If you have any questions or would like to dive deeper into any part of this setup, feel free to ask in the comments.
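As referenced in the verification step above, here is a quick way to check the gateway without the portal. A minimal sketch, assuming the resource names used in this walkthrough:

# List the running replicas of the self-hosted gateway container app
az containerapp replica list --name aca-apim-demo-gateway \
  --resource-group rgContosoDemo -o table

# Tail the gateway container's console logs to watch it register with APIM
az containerapp logs show --name aca-apim-demo-gateway \
  --resource-group rgContosoDemo --follow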
gMSA on AKS and Private Endpoints

A few weeks ago, I spent some time with our support and engineering teams helping a customer solve a problem that happened after they enabled Group Managed Service Accounts (gMSA) on Azure Kubernetes Service (AKS). I decided to write this blog so other customers with the same issue can avoid it altogether. I'm writing the blog in the sequence I experienced it, but if you're just looking for the solution, feel free to skip to the end.

The gMSA on AKS symptoms
When the customer enabled gMSA on their cluster, a few things started to happen:
- Any gMSA-enabled deployment/container/pod entered a failed state. The events from the deployments would show the pods with the following error:
  Event Detail: Failed to setup the external credentials for Container '<redacted>': The RPC server is unavailable.
- Any non-gMSA deployment/container/pod using the customer's private images and running on Windows nodes also entered a failed state. The deployments were showing an event of ErrImagePull.
- All other deployments/containers/pods, on both Windows and Linux nodes, that were not using private images kept their healthy state.
- Removing the gMSA configuration from the cluster would automatically revert the entire cluster to a healthy state.

Troubleshooting gMSA issues
The error with the gMSA pods immediately reminded me of other cases in which I've seen customers hit similar issues because of network connectivity. The most common gMSA issues I have seen so far are:
- Blocked ports: Having a firewall between your AKS cluster and the Active Directory (AD) Domain Controllers (DCs). AD uses multiple protocols for communication between clients and DCs. I even created a simple script that validates the ports.
- Incorrect DNS configuration: AD uses DNS for service discovery. Domain Controllers have an "SRV" entry in DNS that clients query so they can find not only all DCs, but the closest one. If either the nodes or pods can't resolve the domain FQDN to a DC, gMSA won't work.
- Incorrect secret on Azure Key Vault (AKV): A user account is used by the Windows nodes, rather than a computer account, since the nodes are not domain-joined. The format of the secret should be <domain dns fqdn>\<user account>:<user password>.
There are other minor issues that I've seen, but these are the main ones. In the case of this customer, we reviewed the above and everything seemed to be configured properly. At that point, I brought in other folks, and they caught something that I knew existed but had not yet seen used with gMSA: AKS private clusters.

Private Endpoints and gMSA
This customer has a security policy in place that mandates Azure resources should use private endpoints whenever possible. That was true for the AKS cluster, and it introduced a behavior that broke the cluster. I mentioned above that gMSA uses DNS to find DCs. Let me explain the default configuration and what changes once gMSA is enabled:
- By default, Linux and Windows nodes on AKS use the Azure vNet DNS server for DNS queries, and Windows and Linux pods use CoreDNS for DNS queries.
- Azure DNS can't resolve AD domain FQDNs, since these tend to be private to on-premises or non-public cloud networks. For that reason, when you enable gMSA and pass the DNS server parameter, two things change in the AKS cluster. First, the Windows nodes start using the DNS server provided. Second, the CoreDNS configuration is changed to add a forwarder, which forwards anything related to the domain FQDN to the specified DNS server.
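For reference, this DNS wiring is what gets configured when you enable the gMSA profile on a cluster. A minimal sketch using the Azure CLI; the resource names, domain, DNS server IP, and credentials below are placeholders, not the customer's actual values:

# Enable the Windows gMSA profile on an existing AKS cluster, pointing the Windows
# nodes (and the CoreDNS forwarder) at the AD domain's DNS server
az aks update --name myAksCluster \
  --resource-group myResourceGroup \
  --enable-windows-gmsa \
  --gmsa-dns-server 10.1.0.4 \
  --gmsa-root-domain-name contoso.local

# The Key Vault secret used by the Windows nodes must follow the
# <domain dns fqdn>\<user account>:<user password> format described above
az keyvault secret set --vault-name myKeyVault \
  --name gmsa-domain-user \
  --value 'contoso.local\gmsa-svc-user:SuperSecretPassword!'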
With these two configs, Windows nodes and Windows pods can now "find" the DCs. However, this introduces another issue when combined with a private AKS cluster. Private endpoints sit behind a private DNS zone. Azure DNS servers can resolve names in those zones, but non-Azure DNS servers can't. Since the Windows nodes and Windows pods are now using a DNS server outside of Azure, the private zone of the AKS cluster can't be resolved, so the DCs can't reach the Windows nodes and Windows pods. Not only that, but this customer also had their Azure Container Registry (ACR) behind a private endpoint. The second symptom above was caused by this configuration as well: the Windows nodes can't resolve the private zone of the ACR registry and consequently can't pull their private images.

For reference, these are the container-related services and their private zones:

| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
| --- | --- | --- | --- |
| Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) | management | privatelink.{regionName}.azmk8s.io, {subzone}.privatelink.{regionName}.azmk8s.io | {regionName}.azmk8s.io |
| Azure Container Apps (Microsoft.App/ManagedEnvironments) | managedEnvironments | privatelink.{regionName}.azurecontainerapps.io | azurecontainerapps.io |
| Azure Container Registry (Microsoft.ContainerRegistry/registries) | registry | privatelink.azurecr.io, {regionName}.data.privatelink.azurecr.io | azurecr.io, {regionName}.data.azurecr.io |

For a full list of zones, check out the Azure documentation.

Solving DNS queries for Azure Private Endpoint zones
The solution here is simple: for non-Azure DNS servers to resolve Private Endpoint zones, a DNS forwarder can be created. This customer had a very specific implementation, but in general you need to configure a DNS forwarder for the zones related to the services you are using. For example:
- AKS clusters: create a forwarder for azmk8s.io to 168.63.129.16.
- ACR registries: create a forwarder for azurecr.io to 168.63.129.16.
168.63.129.16 is the virtual IP address of the Azure platform that serves as the communication channel to platform resources. One of its services is DNS. In fact, this is the original service that the Windows nodes and Windows pods were using before gMSA was enabled. Once the forwarders are in place, you can verify resolution against that DNS server; see the sketch at the end of this post.

Conclusion
It's always DNS! If you are using gMSA on AKS, keep in mind that Windows nodes and Windows pods will start using a DNS server outside of Azure (or one that has no visibility into Azure platform resources, such as Private Endpoint zones). You might need to configure DNS forwarders once you start using gMSA on AKS, although this is true for any service. I hope this blog post helps you avoid this issue, or helps you troubleshoot it. Let us know in the comments!
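As referenced above, here is a quick way to confirm the forwarders are working: query the DNS server the Windows nodes now use (the one passed to gMSA) and check that it resolves the private zones. A sketch only; the registry name, cluster name, resource group, and the 10.1.0.4 DNS server IP are placeholders for your own values:

# The ACR FQDN should resolve to its private endpoint IP via the new forwarder
nslookup myregistry.azurecr.io 10.1.0.4

# The private AKS API server FQDN should resolve the same way
nslookup $(az aks show --name myAksCluster --resource-group myResourceGroup \
  --query privateFqdn -o tsv) 10.1.0.4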
Announcing Native Azure Functions Support in Azure Container Apps

Azure Container Apps is introducing a new, streamlined method for running Azure Functions directly in Azure Container Apps (ACA). This integration allows you to leverage the full features and capabilities of Azure Container Apps while benefiting from the simplicity of auto-scaling provided by Azure Functions. With the new native hosting model, you can deploy Azure Functions directly onto Azure Container Apps using the Microsoft.App resource provider by setting the "kind=functionapp" property on the container app resource. You can deploy Azure Functions using ARM templates, Bicep, the Azure CLI, and the Azure portal. Get started today and explore the complete feature set of Azure Container Apps, including multi-revision management, easy authentication, metrics and alerting, health probes, and many more. To learn more, visit https://aka.ms/fnonacav2.
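As a rough illustration of the native hosting model described above, here is a minimal Azure CLI sketch. The resource names and image are placeholders, and the flags assume a recent az containerapp extension that supports the functionapp kind:

# Create a container app that ACA treats as an Azure Functions app (kind=functionapp),
# pointing it at a containerized Functions image in your registry
az containerapp create --name my-function-app \
  --resource-group myResourceGroup \
  --environment my-aca-environment \
  --kind functionapp \
  --image <your-functions-container-image> \
  --ingress external \
  --target-port 80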