cloud native
Securing Cloud Shell Access to AKS
Azure Cloud Shell is an online shell hosted by Microsoft that provides instant access to a command-line interface, enabling users to manage Azure resources without needing local installations. Cloud Shell comes equipped with popular tools and programming languages, including Azure CLI, PowerShell, and the Kubernetes command-line tool (kubectl).

Using Cloud Shell can provide several benefits for administrators who need to work with AKS, especially if they need quick access from anywhere or are in locked-down environments:

- Immediate Access: There's no need for local setup; you can start managing Azure resources directly from your web browser.
- Persistent Storage: Cloud Shell offers a file share in Azure, keeping your scripts and files accessible across multiple sessions.
- Pre-Configured Environment: It includes built-in tools, saving time on installation and configuration.

The Challenge of Connecting to AKS

By default, Cloud Shell traffic to AKS originates from a random Microsoft-managed IP address, rather than from within your network. As a result, the AKS API server must be publicly accessible with no IP restrictions, which poses a security risk because anyone on the internet can attempt to reach it. While credentials are still required, restricting access to the API server significantly enhances security. Fortunately, there are ways to lock down the API server while still enabling access via Cloud Shell, which we'll explore in the rest of this article.

Options for Securing Cloud Shell Access to AKS

Several approaches can be taken to secure access to your AKS cluster while using Cloud Shell.

IP Allow Listing

On AKS clusters with a public API server, it is possible to lock down access to the API server with an IP allow list. Each Cloud Shell instance has a randomly selected outbound IP from the Azure address space whenever a new session is deployed. This means we cannot add these IPs to the allow list in advance; instead, we apply them once our session is running, and they remain valid for the duration of that session. Below is an example script that you could run from Cloud Shell to check the current outbound IP address and allow it on your AKS cluster's authorised IP list.

```bash
#!/usr/bin/env bash
set -euo pipefail
RG="$1"; AKS="$2"

# Look up the current outbound IP of this Cloud Shell session.
IP="$(curl -fsS https://api.ipify.org)"
echo "Adding ${IP} to allow list"

# Read the existing authorised IP ranges and append this session's IP as a /32.
CUR="$(az aks show -g "$RG" -n "$AKS" --query "apiServerAccessProfile.authorizedIpRanges" -o tsv | tr '\t' '\n' | awk 'NF')"
NEW="$(printf "%s\n%s/32\n" "$CUR" "$IP" | sort -u | paste -sd, -)"

if az aks update -g "$RG" -n "$AKS" --api-server-authorized-ip-ranges "$NEW" >/dev/null; then
  echo "IP ${IP} applied successfully"
else
  echo "Failed to apply IP ${IP}" >&2
  exit 1
fi
```

This method comes with some caveats:

- The users running the script would need to be granted permission to update the authorised IP ranges in AKS; note that this permission could be used to add any IP address.
- The script will need to be run each time a Cloud Shell session is created, and can take a few minutes to complete.
- The script only deals with adding IPs to the allow list; you would also need to implement a process to remove these IPs on a regular basis to avoid building up a long list of IPs that are no longer needed (a sketch of such a cleanup script follows below).

Adding Cloud Shell IPs in bulk, through Service Tags or similar, will result in your API server being accessible to a much larger range of IP addresses, and should be avoided.
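As a starting point for that cleanup process, a companion script along the following lines could remove a single entry once it is no longer needed. This is a minimal sketch only, assuming the same Azure CLI permissions as the script above; the script name and arguments are placeholders rather than anything provided by Cloud Shell or AKS.

```bash
#!/usr/bin/env bash
# Remove a single /32 entry from an AKS cluster's authorised IP ranges.
# Usage: ./remove-authorised-ip.sh <resource-group> <aks-name> <ip-to-remove>
set -euo pipefail
RG="$1"; AKS="$2"; IP="$3"

# Read the current allow list and drop the entry for the given IP.
CUR="$(az aks show -g "$RG" -n "$AKS" \
  --query "apiServerAccessProfile.authorizedIpRanges" -o tsv | tr '\t' '\n' | awk 'NF')"
NEW="$(printf "%s\n" "$CUR" | grep -Fvx "${IP}/32" | paste -sd, - || true)"

# Refuse to apply an empty list, which would remove the IP restriction entirely.
if [ -z "$NEW" ]; then
  echo "Refusing to apply an empty authorised IP list; update the cluster manually if this is intended." >&2
  exit 1
fi

echo "Removing ${IP}/32 from the authorised IP ranges"
az aks update -g "$RG" -n "$AKS" --api-server-authorized-ip-ranges "$NEW" >/dev/null
echo "Updated authorised IP ranges: ${NEW}"
```

In practice you might run a cleanup like this from a scheduled job or automation account rather than from Cloud Shell itself, so that stale entries are cleared even when no session is active.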
Command Invoke

Azure provides a feature known as Command Invoke that allows you to send commands to be run in AKS without the need for direct network connectivity. This method executes a container within AKS to run your command and then returns the result, and it works well from within Cloud Shell. This is probably the simplest approach that works with a locked-down API server and the quickest to implement. However, there are some downsides:

- Commands take longer to run: when you execute a command, it needs to start a container in AKS, execute the command, and then return the result.
- You only get an exit code and text output, and you lose API-level details.
- All commands must be run within the context of the az aks command invoke CLI command, making them much longer and more complex to execute than direct access with kubectl.

Command Invoke can be a practical solution for occasional access to AKS, especially when the cost or complexity of alternative methods isn't justified. However, its user experience may fall short if relied upon as a daily tool.

Further Details: Access a private Azure Kubernetes Service (AKS) cluster using the command invoke or Run command feature - Azure Kubernetes Service | Microsoft Learn

Cloud Shell vNet Integration

It is possible to deploy Cloud Shell into a virtual network (vNet), allowing it to route traffic via the vNet and so access resources over the private network or Private Endpoints, or reach public resources through a NAT Gateway or Firewall for a consistent outbound IP address. This approach uses Azure Relay to provide secure access to the vNet from Cloud Shell, without the need to open additional ports. Using Cloud Shell in this way does introduce additional cost for the Azure Relay service.

This solution requires two different approaches, depending on whether you are using a private or public API server:

- When using a private API server, which is either directly connected to the vNet or configured with Private Endpoints, Cloud Shell will be able to connect directly to the private IP of this service over the vNet.
- When using a public API server, with a public IP, traffic will still leave the vNet and go to the internet. The benefit is that we can control the public IP used for the outbound traffic using a NAT Gateway or Azure Firewall. Once this is configured, we can then allow-list this fixed IP on the AKS API server authorised IP ranges.

Further Details: Use Cloud Shell in an Azure virtual network | Microsoft Learn

Azure Bastion

Azure Bastion provides secure and seamless RDP and SSH connectivity to your virtual machines (VMs) directly from the Azure portal, without exposing them to the public internet. Recently, Bastion has also added support for direct connection to AKS with SSH, rather than needing to connect to a jump box and then use kubectl from there. This greatly simplifies connecting to AKS and also reduces the cost.

Using this approach, we can deploy a Bastion into the vNet hosting AKS. From Cloud Shell we can then use the following command to create a tunnel to AKS:

```bash
az aks bastion --name <aks name> --resource-group <resource group name> --bastion <bastion resource ID>
```

Once this tunnel is connected, we can run kubectl commands without any need for further configuration.
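To make the flow concrete, an interactive Cloud Shell session might look roughly like the following. This is a sketch rather than a verified walkthrough: the placeholder names follow the command above, the kubeconfig step assumes you have not already fetched credentials, and depending on your CLI version you may be prompted to install preview extensions for the Bastion tunnel.

```bash
# Fetch kubeconfig credentials for the cluster (placeholder names as above).
az aks get-credentials --resource-group <resource group name> --name <aks name>

# Open the tunnel to the API server through the Bastion host.
az aks bastion --name <aks name> --resource-group <resource group name> --bastion <bastion resource ID>

# With the tunnel connected, kubectl commands run as normal.
kubectl get nodes
kubectl get pods --all-namespaces
```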
As with Cloud Shell vNet integration, we take two slightly different approaches depending on whether the API server is public or private:

- When using a private API server, which is either directly connected to the vNet or configured with Private Endpoints, Cloud Shell sessions connected via Bastion will be able to connect directly to the private IP of this service over the vNet.
- When using a public API server, with a public IP, traffic will still leave the vNet and go to the internet. As with Cloud Shell vNet integration, we can configure this to use a static outbound IP and allow-list it on the API server. Using Bastion, we can still use a NAT Gateway or Azure Firewall to achieve this; however, you can also allow-list the public IP assigned to the Bastion, removing the cost of a NAT Gateway or Azure Firewall if they are not required for anything else.

Connecting to AKS directly from Bastion requires the Standard or Premium SKU of Bastion, which does have additional cost over the Developer or Basic SKU. This feature also requires that you enable native client support.

Further details: Connect to AKS Private Cluster Using Azure Bastion (Preview) - Azure Bastion | Microsoft Learn

Summary of Options

IP Allow Listing: The outbound IP addresses of Cloud Shell instances can be added to the authorised IP list for your API server. As these IPs are dynamically assigned to sessions, they need to be added at runtime to avoid allowing a large list of IPs and reducing security. This can be achieved with a script. While easy to implement, this requires additional time to run the script with every new session, and it increases the overhead of managing the authorised IP list to remove unused IPs.

Command Invoke: Command Invoke allows you to run commands against AKS without requiring direct network access or any setup. This is a convenient option for occasional tasks or troubleshooting, but it's not designed for regular use due to its limited user experience and flexibility.

Cloud Shell vNet Integration: This approach connects Cloud Shell directly to your virtual network, enabling secure access to AKS resources. It's well suited for environments where Cloud Shell is the primary access method and offers a more secure and consistent experience than default configurations. It does involve additional cost for Azure Relay.

Azure Bastion: Azure Bastion provides a secure tunnel to AKS that can be used from Cloud Shell or by users running the CLI locally. It offers strong security by eliminating public exposure of the API server and supports flexible access for different user scenarios, though it does require setup and may incur additional cost.

Cloud Shell is a great tool for providing pre-configured, easily accessible CLI instances, but in the default configuration it can require some security compromises. With a little work, it is possible to make Cloud Shell work with a more secure configuration that limits how much exposure is needed for your AKS API server.

Simplifying Outbound Connectivity Troubleshooting in AKS with Connectivity Analysis (Preview)
Announcing the Connectivity Analysis feature for AKS, now in Public Preview and available through the AKS portal. You can use the Connectivity Analysis (Preview) feature to quickly verify whether outbound traffic from your AKS nodes is being blocked by Azure network resources such as Azure Firewall, Network Security Groups (NSGs), route tables, and more.

Azure at KubeCon India 2025 | Hyderabad, India – 6-7 August 2025
Welcome to KubeCon + CloudNativeCon India 2025! We're thrilled to join this year's event in Hyderabad as a Gold sponsor, where we'll be highlighting the newest innovations in Azure and Azure Kubernetes Service (AKS) while connecting with India's dynamic cloud-native community. We're excited to share some powerful new AKS capabilities that bring AI innovation to the forefront, strengthen security and networking, and make it easier than ever to scale and streamline operations.

Innovate with AI

AI is increasingly central to modern applications and competitive innovation, and AKS is evolving to support intelligent agents more natively. The AKS Model Context Protocol (MCP) server, now in public preview, introduces a unified interface that abstracts Kubernetes and Azure APIs, allowing AI agents to manage clusters more easily across environments. This simplifies diagnostics and operations, even across multiple clusters, and is fully open source, making it easier to integrate AI-driven tools into Kubernetes workflows.

Enhance networking capabilities

Networking is foundational to application performance and security. This wave of AKS improvements delivers more control, simplicity, and scalability in networking:

- Traffic between AKS services can now be filtered by HTTP methods, paths, and hostnames using Layer-7 network policies, enabling precise control and stronger zero-trust security.
- Built-in HTTP proxy management simplifies cluster-wide proxy configuration and allows easy disabling of proxies, reducing misconfigurations while preserving future settings.
- Private AKS clusters can be accessed securely through Azure Bastion integration, eliminating the need for VPNs or public endpoints by tunneling directly with kubectl.
- DNS performance and resilience are improved with LocalDNS for AKS, which enables pods to resolve names even during upstream DNS outages, with no changes to workloads.
- Outbound traffic from AKS can now use static egress IP prefixes, ensuring predictable IPs for compliance and smoother integration with external systems.
- Cluster scalability is enhanced by supporting multiple Standard Load Balancers, allowing traffic isolation and avoiding rule limits by assigning SLBs to specific node pools or services.
- Network troubleshooting is streamlined with Azure Virtual Network Verifier, which runs connectivity tests from AKS to external endpoints and identifies misconfigured firewalls or routes.

Strengthen security posture

Security remains a foundational priority for Kubernetes environments, especially as workloads scale and diversify. The following enhancements strengthen protection for data, infrastructure, and applications running in AKS, addressing key concerns around isolation, encryption, and visibility.

- Confidential VMs for Azure Linux enable containers to run on hardware-encrypted, isolated VMs using AMD SEV-SNP, providing data-in-use protection for sensitive workloads without requiring code changes.
- Confidential VMs for Ubuntu 24.04 combine AKS's managed Kubernetes with memory encryption and VM-level isolation, offering enhanced security for Linux containers in Ubuntu-based clusters.
- Encryption in transit for NFS secures data between AKS pods and Azure Files NFS volumes using TLS 1.3, protecting sensitive information without modifying applications.
- Web Application Firewall for Containers adds OWASP rule-based protection to containerized web apps via Azure Application Gateway, blocking common exploits without separate WAF appliances.
- The AKS Security Dashboard in the Azure Portal centralizes visibility into vulnerabilities, misconfigurations, compliance gaps, and runtime threats, simplifying cluster security management through Defender for Cloud.

Simplify and scale operations

To streamline operations at scale, AKS is introducing new capabilities that automate resource provisioning, enforce deployment best practices, and simplify multi-tenant management, making it easier to maintain performance and consistency across complex environments.

- Node Auto-Provisioning improves resource efficiency by automatically adding and removing standalone nodes based on pod demand, eliminating the need for pre-created node pools during traffic spikes.
- Deployment Safeguards help prevent misconfigurations by validating Kubernetes manifests against best practices and optionally enforcing corrections to reduce instability and security risks.
- Managed Namespaces streamline multi-tenant cluster operations by providing a unified view of accessible namespaces across AKS clusters, along with quick-access credentials via CLI, API, or Portal.

Maximize performance and visibility

To enhance performance and observability in large-scale environments, AKS is also rolling out infrastructure-level upgrades that improve monitoring capacity and control plane efficiency.

- Prometheus quotas in Azure Monitor can now be raised to 20 million samples per minute or active time series, ensuring full metric coverage for massive AKS deployments.
- Control plane performance has been improved with a backported Kubernetes enhancement (KEP-5116), reducing API server memory usage by ~10× during large listings and enabling faster kubectl responses with lower risk of OOM issues in AKS versions 1.31.9 and above.

Microsoft is at KubeCon India 2025 - come say hi!

Connect with us in Hyderabad! Microsoft has a strong on-site presence at KubeCon + CloudNativeCon India 2025. Here are some highlights of how you can connect with us at the event:

- August 6-7: Visit Microsoft at Booth G4 for live demos and expert Q&A throughout the conference. Microsoft engineers are also delivering several breakout sessions on AKS and cloud-native technologies.
- Microsoft Sessions: Throughout the conference, Microsoft engineers are speaking in various sessions, including:
  - Keynote: The Last Mile Problem: Why AI Won't Replace You (Yet)
  - Lightning Talk: Optimizing SNAT Port and IP Address Management in Kubernetes
  - Smart Capacity-Aware Volume Provisioning for LVM Local Storage Across Multi-Cluster Kubernetes Fleet
  - Minimal OS, Maximum Impact: Journey To a Flatcar Maintainer

We're thrilled to connect with you at KubeCon + CloudNativeCon India 2025. Whether you attend sessions, drop by our booth, or watch the keynote, we look forward to discussing these announcements and hearing your thoughts. Thank you for being part of the community, and happy KubeCon! 👋

Enhancing Performance in Azure Container Apps
Azure Container Apps is a fully managed serverless container service that enables you to deploy and run applications without having to manage the infrastructure. The Azure Container Apps team has recently made improvements to the load balancing algorithm and scaling behavior to better align with customer expectations and meet their performance needs.

Announcing Azure Command Launcher for Java
Optimizing JVM Configuration for Azure Deployments

Tuning the Java Virtual Machine (JVM) for cloud deployments is notoriously challenging. Over 30% of developers tend to deploy Java workloads with no JVM configuration at all, thereby relying on the default settings of the HotSpot JVM. The default settings in OpenJDK are intentionally conservative, designed to work across a wide range of environments and scenarios. However, these defaults often lead to suboptimal resource utilization in cloud-based deployments, where memory and CPU tend to be dedicated to application workloads (through containers and VMs) but still require intelligent management to maximize efficiency and cost-effectiveness.

To address this, we are excited to introduce jaz, a new JVM launcher optimized specifically for Azure. jaz provides better default ergonomics for Java applications running in containers and virtual machines, ensuring more efficient use of resources right from the start, and automatically leverages advanced JVM features such as AppCDS and, in the future, Project Leyden.

Why jaz?

Conservative Defaults Lead to Underutilization of Resources

When deploying Java applications to the cloud, developers often need to fine-tune JVM parameters such as heap size, garbage collection strategies, and other configurations to achieve better resource utilization and potentially higher performance. The default OpenJDK settings, while safe, do not take full advantage of available resources in cloud environments, leading to unnecessary waste and increased operational costs. While advancements in dynamic heap sizing are underway by Oracle, Google, and Microsoft, they are still in development and will be available primarily in future major releases of OpenJDK. In the meantime, developers running applications on current and older JDK versions (such as OpenJDK 8, 11, 17, and 21) still need to optimize their configurations manually or rely on external tools like Paketo Buildpacks, which automate tuning but may not be suitable for all use cases.

With jaz, we are providing a smarter starting point for Java applications on Azure, with default configurations designed for cloud environments. The jaz launcher helps by:

- Optimizing resource utilization: By setting JVM parameters tailored for cloud deployments, jaz reduces wasted memory and CPU cycles.
- Improving first-deploy performance: New applications often require trial and error to find the right JVM settings. jaz increases the likelihood of better performance on first deployment.
- Enhancing cost efficiency: By making better use of available resources, applications using jaz can reduce unnecessary cloud costs.

This tool is ideal for developers who:

- Want better JVM defaults without diving deep into tuning guides
- Develop and deploy cloud native microservices with Spring Boot, Quarkus, or Micronaut
- Prefer container-based workflows such as Kubernetes and OpenShift
- Deploy Java workloads on Azure Container Apps, Azure Kubernetes Service, Azure Red Hat OpenShift, or Azure VMs

How jaz works?

jaz sits between your container startup command and the JVM. It will:

- Detect the cloud environment (e.g., container limits, available memory)
- Analyze the workload type and select best-fit JVM options
- Launch the Java process with optimized flags, such as:
  - Heap sizing
  - GC selection and tuning
  - Logging and diagnostics settings as needed

Example Usage

Instead of this:

```bash
$ JAVA_OPTS="-XX:... several JVM tuning flags"
$ java $JAVA_OPTS -jar myapp.jar
```

Use:

```bash
$ jaz -jar myapp.jar
```

You will automatically benefit from:

- Battle-tested defaults for cloud native and container workloads
- Reduced memory waste
- Better startup and warmup performance
- No manual tuning required

How to Access jaz (Private Preview)

jaz is currently available through a Private Preview. During this phase, we are working closely with selected customers to refine the experience and gather feedback. To request access: 👉 Submit your interest here

Participants in the Private Preview will receive access to jaz via easily installed standalone Linux packages for container images of the Microsoft Build of OpenJDK and Eclipse Temurin (for Java 8). Customers will have direct communication with our engineering and product teams to further enhance the tool to fit their needs. For a sneak peek, you can read the documentation.

Our Roadmap

Our long-term vision for jaz includes adaptive JVM configuration based on telemetry and usage patterns, helping developers achieve optimal performance across all Azure services.

- ⚙️ JVM Configuration Profiles
- 📦 AppCDS Support
- 📦 Leyden Support
- 🔄 Continuous Tuning
- 📊 Share telemetry through Prometheus

We're excited to work with the Java community to shape this tool. Your feedback will be critical in helping us deliver a smarter, cloud-native Java runtime experience on Azure.