# Rethinking Ingress on Azure: Application Gateway for Containers Explained
## Introduction

Azure Application Gateway for Containers is a managed Azure service designed to handle incoming traffic for container-based applications. It brings Layer-7 load balancing, routing, TLS termination, and web application protection outside of the Kubernetes cluster and into an Azure-managed data plane. By separating traffic management from the cluster itself, the service reduces operational complexity while providing a more consistent, secure, and scalable way to expose container workloads on Azure.

## Service Overview

### What Application Gateway for Containers does

Azure Application Gateway for Containers is a managed Layer-7 load balancing and ingress service built specifically for containerized workloads. Its main job is to receive incoming application traffic (HTTP/HTTPS), apply routing and security rules, and forward that traffic to the right backend containers running in your Kubernetes cluster.

Instead of deploying and operating an ingress controller inside the cluster, Application Gateway for Containers runs outside the cluster, as an Azure-managed data plane. It integrates natively with Kubernetes through the Gateway API (and Ingress API), translating Kubernetes configuration into fully managed Azure networking behavior.

In practical terms, it handles:

- HTTP/HTTPS routing based on hostnames, paths, headers, and methods
- TLS termination and certificate management
- Web Application Firewall (WAF) protection
- Scaling and high availability of the ingress layer

All of this is provided as a managed Azure service, without running ingress pods in your cluster.

### What problems it solves

Application Gateway for Containers addresses several common challenges teams face with traditional Kubernetes ingress setups:

- **Operational overhead:** Running ingress controllers inside the cluster means managing upgrades, scaling, certificates, and availability yourself. Moving ingress to a managed Azure service significantly reduces this burden.
- **Security boundaries:** By keeping traffic management and WAF outside the cluster, you reduce the attack surface of the Kubernetes environment and keep security controls aligned with Azure-native services.
- **Consistency across environments:** Platform teams can offer a standard, Azure-managed ingress layer that behaves the same way across clusters and environments, instead of relying on different in-cluster ingress configurations.
- **Separation of responsibilities:** Infrastructure teams manage the gateway and security policies, while application teams focus on Kubernetes resources like routes and services.

### How it differs from classic Application Gateway

While both services share the "Application Gateway" name, they target different use cases and operating models.

The classic Azure Application Gateway is a general-purpose Layer-7 load balancer primarily designed for VM-based or service-based backends. It relies on centralized configuration through Azure resources and is not Kubernetes-native by design.

Application Gateway for Containers, on the other hand:

- Is designed specifically for container platforms
- Uses Kubernetes APIs (Gateway API / Ingress) instead of manual listener and rule configuration
- Separates control plane and data plane more cleanly
- Enables faster, near real-time updates driven by Kubernetes changes
- Avoids running ingress components inside the cluster

In short, classic Application Gateway is infrastructure-first, while Application Gateway for Containers is platform- and Kubernetes-first.

## Architecture at a Glance

At a high level, Azure Application Gateway for Containers is built around a clear separation between control plane and data plane. This separation is one of the key architectural ideas behind the service and explains many of its benefits.

### Control plane and data plane

The control plane is responsible for configuration and orchestration.
It listens to Kubernetes resources—such as Gateway API or Ingress objects—and translates them into a running gateway configuration. When you create or update routing rules, TLS settings, or security policies in Kubernetes, the control plane picks up those changes and applies them automatically.

The data plane is where traffic actually flows. It handles incoming HTTP and HTTPS requests, applies routing rules, performs TLS termination, and forwards traffic to the correct backend services inside your cluster. This data plane is fully managed by Azure and runs outside of the Kubernetes cluster, providing isolation and high availability by design. Because the data plane is not deployed as pods inside the cluster, it does not consume cluster resources and does not need to be scaled or upgraded by the customer.

### Managed components vs customer responsibilities

One of the goals of Application Gateway for Containers is to reduce what customers need to operate, while still giving them control where it matters.

**Managed by Azure**

- Application Gateway for Containers data plane
- Scaling, availability, and patching of the gateway
- Integration with Azure networking
- Web Application Firewall engine and updates
- Translation of Kubernetes configuration into gateway rules

**Customer-managed**

- Kubernetes resources (Gateway API or Ingress)
- Backend services and workloads
- TLS certificates and references
- Routing and security intent (hosts, paths, policies)
- Network design and connectivity to the cluster

This split allows platform teams to keep ownership of the underlying Azure infrastructure, while application teams interact with the gateway using familiar Kubernetes APIs. The result is a cleaner operating model with fewer moving parts inside the cluster.

In short, Application Gateway for Containers acts as an Azure-managed ingress layer, driven by Kubernetes configuration but operated outside the cluster.
This architecture keeps traffic management simple, scalable, and aligned with Azure-native networking and security services.

## Traffic Handling and Routing

This section explains what happens to a request from the moment it reaches Azure until it is forwarded to a container running in your cluster.

### Traffic Flow: From Internet to Pod

Azure Application Gateway for Containers (AGC) acts as the specialized "front door" for your Kubernetes workloads. By sitting outside the cluster, it manages high-volume traffic ingestion so your environment remains focused on application logic rather than networking overhead.

#### The Request Journey

Once a request is initiated by a client—such as a browser or an API—it follows a streamlined path to your container:

1. **Entry via Public Frontend:** The request reaches AGC's public frontend endpoint. Note: While private frontends are currently the most requested feature and are under high-priority development, the service currently supports public-facing endpoints.
2. **Rule Evaluation:** AGC evaluates the incoming request against the routing rules you've defined using standard Kubernetes resources (Gateway API or Ingress).
3. **Direct Pod Proxying:** Once a rule is matched, AGC forwards the traffic directly to the backend pods within your cluster.
4. **Azure Native Delivery:** Because AGC operates as a managed data plane outside the cluster, traffic reaches your workloads via Azure networking. This removes the need for managing scaling or resource contention for in-cluster ingress pods.

#### Flexibility in Security and Routing

The architecture is designed to be as "hands-off" or as "hands-on" as your security policy requires:

- **Optional TLS Offloading:** You have full control over the encryption lifecycle. Depending on your specific use case, you can choose to perform TLS termination at the gateway to offload the compute-intensive decryption, or maintain encryption all the way to the container for end-to-end security.
- **Simplified Infrastructure:** By using AGC, you eliminate the "hop" typically required by in-cluster controllers, allowing the gateway to communicate with pods with minimal latency and high predictability.

## Kubernetes Integration

Application Gateway for Containers is designed to integrate natively with Kubernetes, allowing teams to manage ingress behavior using familiar Kubernetes resources instead of Azure-specific configuration. This makes the service feel like a natural extension of the Kubernetes platform rather than an external load balancer.

### Gateway API as the primary integration model

The Gateway API is the preferred and recommended way to integrate Application Gateway for Containers with Kubernetes. With the Gateway API:

- Platform teams define the Gateway and control how traffic enters the cluster.
- Application teams define routes (such as HTTPRoute) to expose their services.
- Responsibilities are clearly separated, supporting multi-team and multi-namespace environments.

Application Gateway for Containers supports core Gateway API resources such as:

- GatewayClass
- Gateway
- HTTPRoute

When these resources are created or updated, Application Gateway for Containers automatically translates them into gateway configuration and applies the changes in near real time.

### Ingress API support

For teams that already use the traditional Kubernetes Ingress API, Application Gateway for Containers also provides Ingress support. This allows:

- Reuse of existing Ingress manifests
- A smoother migration path from older ingress controllers
- Gradual adoption of Gateway API over time

Ingress resources are associated with Application Gateway for Containers using a specific ingress class. While fully functional, the Ingress API offers fewer capabilities and less flexibility compared to the Gateway API.
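To make the platform-team / application-team split concrete, the following sketch pairs a minimal Gateway (owned by the platform team) with an HTTPRoute that an application team attaches to it from its own namespace. The resource names (`agc-gateway`, `shop-route`, `shop-service`), the namespaces, and the `azure-alb-external` gateway class are illustrative assumptions rather than values this article prescribes; check the official Application Gateway for Containers documentation for the exact gateway class and any required annotations in your environment.

```yaml
# Platform team: controls how traffic enters the cluster (hypothetical names).
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: agc-gateway
  namespace: infra
spec:
  gatewayClassName: azure-alb-external   # assumed class name; verify against your AGC docs
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All   # permit application namespaces to attach routes
---
# Application team: exposes its service by attaching a route to the shared gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shop-route
  namespace: shop
spec:
  parentRefs:
  - name: agc-gateway
    namespace: infra
  rules:
  - backendRefs:
    - name: shop-service
      port: 80
```

Note how the application team never touches the Gateway itself: it only references it via `parentRefs`, which is what makes self-service routing possible while the platform team retains ownership of listeners and security boundaries.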
### How teams interact with the service

A key benefit of this integration model is the clean separation of responsibilities:

**Platform teams**

- Provision and manage Application Gateway for Containers
- Define gateways, listeners, and security boundaries
- Own network and security policies

**Application teams**

- Define routes using Kubernetes APIs
- Control how their applications are exposed
- Do not need direct access to Azure networking resources

This approach enables self-service for application teams while keeping governance and security centralized.

### Why this matters

By integrating deeply with Kubernetes APIs, Application Gateway for Containers avoids custom controllers, sidecars, or ingress pods inside the cluster. Configuration stays declarative, changes are automated, and the operational model stays consistent with Kubernetes best practices.

## Security Capabilities

Security is a core part of Azure Application Gateway for Containers and one of the main reasons teams choose it over in-cluster ingress controllers. The service brings Azure-native security controls directly in front of your container workloads, without adding complexity inside the cluster.

### Web Application Firewall (WAF)

Application Gateway for Containers integrates with Azure Web Application Firewall (WAF) to protect applications against common web attacks such as SQL injection, cross-site scripting, and other OWASP Top 10 threats. A key differentiator of this service is that it leverages Microsoft's global threat intelligence. This provides an enterprise-grade layer of security that constantly evolves to block emerging threats, a significant advantage over many open-source or standard competitor WAF solutions.

Because the WAF operates within the managed data plane, it offers several operational benefits:

- **Zero Cluster Footprint:** No WAF-specific pods or components are required to run inside your Kubernetes cluster, saving resources for your actual applications.
- **Edge Protection:** Security rules and policies are applied at the Azure network edge, ensuring malicious traffic is blocked before it ever reaches your workloads.
- **Automated Maintenance:** All rule updates, patching, and engine maintenance are handled entirely by Azure.
- **Centralized Governance:** WAF policies can be managed centrally, ensuring consistent security enforcement across multiple teams and namespaces—a critical requirement for regulated environments.

### TLS and certificate handling

TLS termination happens directly at the gateway. HTTPS traffic is decrypted at the edge, inspected, and then forwarded to backend services.

Key points:

- Certificates are referenced from Kubernetes configuration
- TLS policies are enforced by the Azure-managed gateway
- Applications receive plain HTTP traffic, keeping workloads simpler

This approach allows teams to standardize TLS behavior across clusters and environments, while avoiding certificate logic inside application pods.

### Network isolation and exposure control

Because Application Gateway for Containers runs outside the cluster, it provides a clear security boundary between external traffic and Kubernetes workloads. Common patterns include:

- Internet-facing gateways with WAF protection
- Private gateways for internal or zero-trust access
- Controlled exposure of only selected services

By keeping traffic management and security at the gateway layer, clusters remain more isolated and easier to protect.

### Security by design

Overall, the security model follows a simple principle: inspect, protect, and control traffic before it enters the cluster. This reduces the attack surface of Kubernetes, centralizes security controls, and aligns container ingress with Azure's broader security ecosystem.

## Scale, Performance, and Limits

Azure Application Gateway for Containers is built to handle production-scale traffic without requiring customers to manage capacity, scaling rules, or availability of the ingress layer.
Scalability and performance are handled as part of the managed service.

### Interoperability: The Best of Both Worlds

A common hesitation when adopting cloud-native networking is the fear of vendor lock-in. Many organizations worry that using a provider-specific ingress service will tie their application logic too closely to a single cloud's proprietary configuration. Azure Application Gateway for Containers (AGC) addresses this directly by utilizing the Kubernetes Gateway API as its primary integration model. This creates a powerful decoupling between how you define your traffic and how that traffic is actually delivered.

#### Standardized API, Managed Execution

By adopting this model, you gain two critical advantages simultaneously:

- **Zero Vendor Lock-In (Standardized API):** Your routing logic is defined using the open-source Kubernetes Gateway API standard. Because HTTPRoute and Gateway resources are community-driven standards, your configuration remains portable and familiar to any Kubernetes professional, regardless of the underlying infrastructure.
- **Zero Operational Overhead (Managed Implementation):** While the interface is a standard Kubernetes API, the implementation is a high-performance Azure-managed service. You gain the benefits of an enterprise-grade load balancer—automatic scaling, high availability, and integrated security—without the burden of managing, patching, or troubleshooting proxy pods inside your cluster.

#### The "Pragmatic" Advantage

As highlighted in recent architectural discussions, moving from traditional Ingress to the Gateway API is about more than just new features; it's about interoperability. It allows platform teams to offer a consistent, self-service experience to developers while retaining the ability to leverage the best-in-class performance and security that only a native cloud provider can offer.
The result is a future-proof architecture: your teams use the industry-standard language of Kubernetes to describe what they need, and Azure provides the managed muscle to make it happen.

### Scaling model

Application Gateway for Containers uses an automatic scaling model. The gateway data plane scales up or down based on incoming traffic patterns, without manual intervention. From an operator's perspective:

- There are no ingress pods to scale
- No node capacity planning for ingress
- No separate autoscaler to configure

Scaling is handled entirely by Azure, allowing teams to focus on application behavior rather than ingress infrastructure.

### Performance characteristics

Because the data plane runs outside the Kubernetes cluster, ingress traffic does not compete with application workloads for CPU or memory. This often results in:

- More predictable latency
- Better isolation between traffic management and application execution
- Consistent performance under load

The service supports common production requirements such as:

- High concurrent connections
- Low-latency HTTP and HTTPS traffic
- Near real-time configuration updates driven by Kubernetes changes

### Service limits and considerations

Like any managed service, Application Gateway for Containers has defined limits that architects should be aware of when designing solutions. These include limits around:

- Number of listeners and routes
- Backend service associations
- Certificates and TLS configurations
- Throughput and connection scaling thresholds

These limits are documented and enforced by the platform to ensure stability and predictable behavior. For most application platforms, these limits are well above typical usage. However, they should be reviewed early when designing large multi-tenant or high-traffic environments.

### Designing with scale in mind

The key takeaway is that Application Gateway for Containers removes ingress scaling from the cluster and turns it into an Azure-managed concern.
This simplifies operations and provides a stable, high-performance entry point for container workloads.

## When to Use (and When Not to Use)

| Scenario | Use it? | Why |
|---|---|---|
| Kubernetes workloads on Azure | ✅ Yes | The service is designed specifically for container platforms and integrates natively with Kubernetes APIs. |
| Need for managed Layer-7 ingress | ✅ Yes | Routing, TLS, and scaling are handled by Azure without in-cluster components. |
| Enterprise security requirements (WAF, TLS policies) | ✅ Yes | Built-in Azure WAF and centralized TLS enforcement simplify security. |
| Platform team managing ingress for multiple apps | ✅ Yes | Clear separation between platform and application responsibilities. |
| Multi-tenant Kubernetes clusters | ✅ Yes | Gateway API model supports clean ownership boundaries and isolation. |
| Desire to avoid running ingress controllers in the cluster | ✅ Yes | No ingress pods, no cluster resource consumption. |
| VM-based or non-container backends | ❌ No | Classic Application Gateway is a better fit for non-container workloads. |
| Simple, low-traffic test or dev environments | ❌ Maybe not | A lightweight in-cluster ingress may be simpler and more cost-effective. |
| Need for custom or unsupported L7 features | ❌ Maybe not | Some advanced or niche ingress features may not yet be available. |
| Non-Kubernetes platforms | ❌ No | The service is tightly integrated with Kubernetes APIs. |

### When to Choose a Different Path: Azure Container Apps

While Application Gateway for Containers provides the ultimate control for Kubernetes environments, not every project requires that level of infrastructure management. For teams that don't need the full flexibility of Kubernetes and are looking for the fastest path to running containers on Azure without managing clusters or ingress infrastructure at all, Azure Container Apps offers a specialized alternative. It provides a fully managed, serverless container platform that handles scaling, ingress, and networking automatically "out of the box".
### Key Differences at a Glance

| Feature | AGC + Kubernetes | Azure Container Apps |
|---|---|---|
| Control | Granular control over cluster and ingress. | Fully managed, serverless experience. |
| Management | You manage the cluster; Azure manages the gateway. | Azure manages both the platform and ingress. |
| Best For | Complex, multi-team, or highly regulated environments. | Rapid development and simplified operations. |

## Appendix - Routing configuration examples

The following examples show how Application Gateway for Containers can be configured using both Gateway API and Ingress API for common routing and TLS scenarios. More examples can be found in the detailed documentation.

**HTTP listener**

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: agc-gateway
  rules:
  - backendRefs:
    - name: app-service
      port: 80
```

**Path routing logic**

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: path-routing
spec:
  parentRefs:
  - name: agc-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-service
      port: 80
  - backendRefs:
    - name: web-service
      port: 80
```

**Weighted canary / rollout**

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-route
spec:
  parentRefs:
  - name: agc-gateway
  rules:
  - backendRefs:
    - name: app-v1
      port: 80
      weight: 80
    - name: app-v2
      port: 80
      weight: 20
```

**TLS Termination**

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: azure-alb-external
  tls:
  - hosts:
    - app.contoso.com
    secretName: tls-cert
  rules:
  - host: app.contoso.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
```

# Regional Endpoints for Geo-Replicated Azure Container Registries (Private Preview)
Imagine you're running Kubernetes clusters in multiple Azure regions—East US, West Europe, and Southeast Asia. You've configured ACR with geo-replication so your container images are available everywhere, but you've noticed something frustrating: you can't control which replica your clusters pull from. Sometimes your East US cluster pulls from West Europe, and you have no way to pin it to the co-located replica or troubleshoot why routing behaves unexpectedly.

This scenario highlights a fundamental challenge with geo-replicated container registries: while Azure-managed routing optimizes for performance, it doesn't provide explicit control for custom failover strategies, troubleshooting, regional affinity, or predictable routing. Regional endpoints solve this by letting you choose exactly which region handles your requests.

## Background: How Geo-Replication Works Today

Geo-replication allows you to maintain copies of your container registry in multiple Azure regions around the world. This means your container images are stored closer to where your applications run, reducing download times and improving reliability. You maintain a single registry name (like `myregistry.azurecr.io`), and Azure automatically routes your requests to the most suitable replica.

## The Challenge: Azure-Managed Routing Limitations

While geo-replication has been invaluable for global deployments, the automatic routing creates challenges for some customers. When you push or pull images from a geo-replicated registry, Azure-managed routing automatically directs your request to the most suitable replica based on the client's network performance profile. While this Azure-managed routing works well for many scenarios, it creates several challenges for customers with specific requirements:

- **Misrouting Issues:** Azure-managed routing may not always select the replica you expect, particularly if network conditions fluctuate or if you're testing specific regional behavior.
- **Geographic Ambiguity:** Clients located equidistant from two replicas may experience unpredictable routing as Azure switches between them based on minor network performance variations.
- **Push/Pull Consistency:** Images pushed to one replica may be pulled from another during geo-replication synchronization, creating temporary inconsistencies that can impact deployment pipelines. For more details on troubleshooting push operations with geo-replicated registries, see Troubleshoot push operations.
- **Lack of Regional Affinity:** Clients may want to establish regional affinity between their applications and a specific replica, but Azure-managed routing doesn't provide a way to maintain this affinity.
- **No Client-Side Failover:** Without the ability to target specific replicas, you cannot implement client-side failover strategies or disaster recovery logic that explicitly switches between regions based on your own health checks and business rules.

## Introducing Regional Endpoints

Regional endpoints solve these challenges by providing direct access to specific geo-replicated regions through dedicated login server URLs. Instead of relying solely on the global endpoint (`myregistry.azurecr.io`) with Azure-managed routing, you can now target specific regional replicas using the pattern:

`myregistry.<region-name>.geo.azurecr.io`

For example:

- `myregistry.eastus.geo.azurecr.io`
- `myregistry.westeurope.geo.azurecr.io`

**Important:** Regional endpoints coexist with a geo-replicated registry's global endpoint at `myregistry.azurecr.io`. Enabling regional endpoints doesn't disable or replace the global endpoint - you can use both simultaneously. This allows you to use the global endpoint for most operations while selectively using regional endpoints when you need explicit regional control.

## How It Works

Regional endpoints function as login servers—the URL endpoints you use to authenticate and interact with your registry—for specific geo-replicated regions.
When you authenticate and interact with a regional endpoint instead of a registry's global endpoint, all your registry operations (authentication, artifact uploads/downloads, repository operations, and metadata actions) go directly to that specific regional replica, bypassing Azure-managed routing entirely.

Downloading layer blobs (the actual container image layers) still follows your registry's existing configuration:

- For registries without Private Endpoints or Dedicated Data Endpoints, layer blob downloads still redirect to Azure storage accounts (`*.blob.core.windows.net`).
- For registries with Private Endpoints or Dedicated Data Endpoints enabled, layer blob downloads redirect to the corresponding region's dedicated data endpoint (`myregistry.<region-name>.data.azurecr.io`).

Here's how the architecture compares:

**Global Endpoint (Azure-Managed Routing):**

```
Client → myregistry.azurecr.io (Azure-managed routing) → Geo-Replica with the Best Network Performance Profile
                                                                        ↓
                                Geo-Replica's Data Endpoint (blob storage or dedicated data endpoint)
```

**Regional Endpoint (Customer-Specified Routing):**

```
Client → myregistry.<region-name>.geo.azurecr.io (client-managed routing) → Specific Regional Geo-Replica
                                                                                        ↓
                                Geo-Replica's Data Endpoint (blob storage or dedicated data endpoint)
```

## Regional vs. Global Endpoints

| Endpoint Type | URL Format | Purpose | Use Case |
|---|---|---|---|
| Global Endpoint | `myregistry.azurecr.io` | Login server with Azure-managed routing | Default, optimal for most scenarios |
| Regional Endpoint | `myregistry.<region-name>.geo.azurecr.io` | Login server for specific regional replica | Predictable routing, client-side failover, regional affinity, troubleshooting |
| Dedicated Data Endpoint | `myregistry.<region-name>.data.azurecr.io` | Layer blob downloads for Private Endpoint and Dedicated Data Endpoint-enabled registries | Automatic blob download redirect from login server |
| Storage Account | `*.blob.core.windows.net` | Layer blob downloads for registries without Private Endpoints or Dedicated Data Endpoints | Automatic blob download redirect from login server |

## Getting Started with Private Preview

### Prerequisites

To participate in the regional endpoints private preview, you'll need:

- **Premium SKU:** Regional endpoints are available exclusively on Premium tier registries
- **Azure CLI:** Version 2.74.0 or later for the `--regional-endpoints` flag
- **API version:** The feature is available in all production regions in Azure Public Cloud via the 2026-01-01-preview ACR ARM API version

> **NOTE:** During private preview, regional endpoints are only available in Azure Public Cloud. Support for Azure Government, Azure China, and other national clouds will be available in public preview and beyond.

> **NOTE:** Regional endpoints can be enabled on any Premium SKU registry, even without geo-replication. A registry without geo-replication has a single geo-replica in the home region, which gets one regional endpoint URL. However, the feature is most useful when your registry has at least two geo-replicas.

### Step 1: Register the feature flag

Register the `RegionalEndpoints` feature flag for your subscription:

```shell
az feature register \
  --namespace Microsoft.ContainerRegistry \
  --name RegionalEndpoints
```

The feature registration is auto-approved and takes approximately 1 hour to propagate.
You can check the status with:

```shell
az feature show \
  --namespace Microsoft.ContainerRegistry \
  --name RegionalEndpoints
```

Wait until the state shows `Registered` before proceeding.

### Step 2: Propagate the registration

Once the feature flag shows `Registered`, propagate the registration to your subscription's resource provider:

```shell
az provider register -n Microsoft.ContainerRegistry
```

### Step 3: Install the preview CLI extension

Download the preview CLI extension wheel file from https://aka.ms/acr/regionalendpoints/download and install it:

```shell
az extension add \
  --source acrregionalendpoint-1.0.0b1-py3-none-any.whl \
  --allow-preview true
```

## What to Expect

Once setup is complete, you can:

- Enable regional endpoints on both new and existing registries
- Access preview documentation
- Provide feedback via our GitHub roadmap

## Technical Deep Dive

### Enabling Regional Endpoints

Enabling regional endpoints is simple and can be done for both new and existing registries:

```shell
# Enable for new registry
az acr create -n myregistry -g myrg -l <region-name> --regional-endpoints enabled --sku Premium

# Enable for existing registry
az acr update -n myregistry -g myrg --regional-endpoints enabled
```

When you enable regional endpoints, ACR automatically creates login server URLs for all your geo-replicated regions. There's no need to manually configure individual regions - they're all available immediately.
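Because the endpoint naming scheme is mechanical, it is easy for deployment tooling to derive all the URLs it needs from just the registry name and a region list. The small helper below is our own illustrative sketch (it is not part of any Azure SDK); it simply applies the `<registry>.<region>.geo.azurecr.io` and `<registry>.<region>.data.azurecr.io` patterns described in this article.

```python
def regional_login_server(registry: str, region: str) -> str:
    """Login server URL for a specific geo-replica, e.g. myregistry.eastus.geo.azurecr.io."""
    return f"{registry}.{region}.geo.azurecr.io"


def regional_data_endpoint(registry: str, region: str) -> str:
    """Dedicated data endpoint URL used for layer blob downloads in that region."""
    return f"{registry}.{region}.data.azurecr.io"


def image_reference(registry: str, region: str, repository: str, tag: str) -> str:
    """Fully qualified image reference pinned to one regional replica."""
    return f"{regional_login_server(registry, region)}/{repository}:{tag}"
```

For example, `image_reference("myregistry", "eastus", "myapp", "v1")` yields `myregistry.eastus.geo.azurecr.io/myapp:v1`, the same form used by the docker tag and push commands in this article.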
### Authentication and Pushing/Pulling Images

Using regional endpoints follows the same authentication experience as a geo-replicated registry's global endpoint:

```shell
# Login to a specific regional endpoint
az acr login --name myregistry --endpoint eastus

# Tag an image with the regional endpoint URL
docker tag myapp:v1 myregistry.eastus.geo.azurecr.io/myapp:v1

# Push images to the regional endpoint
docker push myregistry.eastus.geo.azurecr.io/myapp:v1

# Pull images from the regional endpoint
docker pull myregistry.eastus.geo.azurecr.io/myapp:v1
```

Regional endpoints support all the same authentication mechanisms as the global endpoint: Microsoft Entra, service principals, managed identities, and admin credentials.

### Kubernetes Integration

One of the most powerful uses of regional endpoints is in Kubernetes deployments. You can specify regional endpoints directly in your deployment manifests, ensuring that Kubernetes clusters in specific regions always pull from their local replica:

```yaml
# East US-based AKS cluster deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-eastus
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: myregistry.eastus.geo.azurecr.io/myapp:v1
---
# West Europe-based AKS cluster deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-westeurope
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: myregistry.westeurope.geo.azurecr.io/myapp:v1
```

### Integration with Dedicated Data Endpoints

Regional endpoints work seamlessly with ACR's existing dedicated data endpoints feature. If your registry has dedicated data endpoints enabled, blob downloads from regional endpoints will automatically redirect to the dedicated data endpoints for that region, maintaining all the security benefits of scoped firewall rules without wildcard storage access.
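Because regional endpoints give you deterministic targets, the client-side failover strategy mentioned earlier becomes possible: your own tooling can probe each replica and fall back through a preference order you control. The sketch below is purely illustrative (no such helper ships with ACR or any Azure SDK); the health check is injected as a callable so it could be backed by an HTTP probe against the registry, a test pull, or your own business rules.

```python
from typing import Callable, Sequence


def pick_endpoint(endpoints: Sequence[str], is_healthy: Callable[[str], bool]) -> str:
    """Return the first endpoint in preference order that passes the health check.

    Order `endpoints` with the co-located regional endpoint first, other
    replicas next, and optionally the global endpoint as a last resort.
    """
    for endpoint in endpoints:
        if is_healthy(endpoint):
            return endpoint
    raise RuntimeError("no healthy registry endpoint available")
```

An East US cluster might pass `["myregistry.eastus.geo.azurecr.io", "myregistry.westeurope.geo.azurecr.io", "myregistry.azurecr.io"]`, so it normally pulls from its co-located replica but degrades gracefully when that replica fails its probe.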
### Integration with Private Endpoints

For registries with Private Endpoints enabled, enabling regional endpoints creates an additional private IP address allocation for each geo-replicated region in all associated virtual networks (VNets). For example, if you have a registry with 3 existing geo-replicas and enable regional endpoints, each VNet with a private endpoint to your registry will consume 3 additional private IPs (one per regional endpoint).

### Firewall and Network Configuration

When using regional endpoints, you'll need to configure your firewall rules to allow access to the specific endpoints you plan to use:

```
# Registry operations using regional endpoints
myregistry.<region-name>.geo.azurecr.io

# Registry operations using the existing global endpoint for Azure-managed routing
myregistry.azurecr.io

# Layer blob downloads (choose based on your registry configuration)
myregistry.<region-name>.data.azurecr.io  # If Private Endpoints or Dedicated Data Endpoints enabled
*.blob.core.windows.net                   # If without Private Endpoints or Dedicated Data Endpoints
```

## Related Resources

- Regional endpoints for geo-replicated registries (Preview)
- Geo-replication in Azure Container Registry
- Mitigate data exfiltration with dedicated data endpoints
- Connect privately to an Azure container registry using Azure Private Link
- Configure rules to access an Azure container registry behind a firewall

# Simplifying Image Signing with Notary Project and Artifact Signing (GA)
Securing container images is a foundational part of protecting modern cloud-native applications. Teams need a reliable way to ensure that the images moving through their pipelines are authentic, untampered, and produced by trusted publishers.

We're excited to share an updated approach that combines the Notary Project, the CNCF standard for signing and verifying OCI artifacts, with Artifact Signing—formerly Trusted Signing—which is now generally available as a managed signing service. The Notary Project provides an open, interoperable framework for signing and verification across container images and other OCI artifacts, while Notary Project tools like Notation and Ratify enable enforcement in CI/CD pipelines and Kubernetes environments. Artifact Signing complements this by removing the operational complexity of certificate management through short-lived certificates, verified Azure identities, and role-based access control, without changing the underlying standards.

If you previously explored container image signing using Trusted Signing, the core workflows remain unchanged. As Artifact Signing reaches GA, customers will see updated terminology across documentation and tooling, while existing Notary Project–based integrations continue to work without disruption.

Together, Notary Project and Artifact Signing make it easier for teams to adopt image signing as a scalable platform capability—helping ensure that only trusted artifacts move from build to deployment with confidence.

Get started

- Sign container images using Notation CLI
- Sign container images in CI/CD pipelines
- Verify container images in CI/CD pipelines
- Verify container images in AKS
- Extend signing and verification to all OCI artifacts in registries

Related content

- Simplifying Code Signing for Windows Apps: Artifact Signing (GA)
- Simplify Image Signing and Verification with Notary Project (preview article)

Deploy Dynatrace OneAgent on your Container Apps
TOC

1. Introduction
2. Setup
3. References

1. Introduction

Dynatrace OneAgent is an advanced monitoring tool that automatically collects performance data across your entire IT environment. It provides deep visibility into applications, infrastructure, and cloud services, enabling real-time observability. OneAgent supports multiple platforms, including containers, VMs, and serverless architectures, ensuring seamless monitoring with minimal configuration. It captures detailed metrics, traces, and logs, helping teams diagnose performance issues, optimize resources, and enhance user experiences. With AI-driven insights, OneAgent proactively detects anomalies and automates root cause analysis, making it an essential component for modern DevOps, SRE, and cloud-native monitoring strategies.

2. Setup

1. After registering your account, go to the control panel and search for Deploy OneAgent.

2. Obtain your Environment ID and create a PaaS token. Be sure to save them for later use.

3. In your local environment's console, log in to the Dynatrace registry.

```bash
# XXX is your Environment ID
docker login -u XXX XXX.live.dynatrace.com
# Enter the PaaS token at the password prompt
```

4. Create a Dockerfile and an sshd_config file.

```dockerfile
FROM mcr.microsoft.com/devcontainers/javascript-node:20

# Change XXX to your Environment ID
COPY --from=XXX.live.dynatrace.com/linux/oneagent-codemodules:all / /
ENV LD_PRELOAD /opt/dynatrace/oneagent/agent/lib64/liboneagentproc.so

# SSH
RUN apt-get update \
    && apt-get install -y --no-install-recommends dialog openssh-server tzdata screen lrzsz htop cron \
    && echo "root:Docker!" | chpasswd \
    && mkdir -p /run/sshd \
    && chmod 700 /root/.ssh/ \
    && chmod 600 /root/.ssh/id_rsa
COPY ./sshd_config /etc/ssh/

# OTHER
EXPOSE 2222
CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
```

```text
Port 2222
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
AllowTcpForwarding yes
```

5. Build the container and push it to Azure Container Registry (ACR).

```bash
# YYY is your ACR name; you can choose your own image name and tag
docker build -t oneagent:202503201710 . --no-cache
docker tag oneagent:202503201710 YYY.azurecr.io/oneagent:202503201710
docker push YYY.azurecr.io/oneagent:202503201710
```

6. Create an Azure Container App (ACA), set Ingress to port 3000, allow all inbound traffic, and specify the ACR image you just created.

7. Once the container starts, open a console and run the following commands to create a temporary HTTP server simulating a Node.js app.

```bash
mkdir app && cd app
echo 'console.log("Node.js app started...")' > index.js
npm init -y
npm install express
cat <<EOF > server.js
const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('hello'));
app.listen(3000, () => console.log('Server running on port 3000'));
EOF
# Press Ctrl + C to terminate the next command, then run it again; repeat 3 times
node server.js
```

8. You should now see the results on the ACA homepage.

9. Go back to the Dynatrace control panel, search for Host Classic, and you should see the collected data.

3. References

- Integrate OneAgent on Azure App Service for Linux and containers — Dynatrace Docs

Azure Container Registry Repository Permissions with Attribute-based Access Control (ABAC)
General Availability announcement

Today marks the general availability of Azure Container Registry (ACR) repository permissions with Microsoft Entra attribute-based access control (ABAC). ABAC augments the familiar Azure RBAC model with namespace- and repository-level conditions, so platform teams can express least-privilege access at the granularity of specific repositories or entire logical namespaces. This capability is designed for modern multi-tenant platform engineering patterns where a central registry serves many business domains. With ABAC, CI/CD systems and runtime consumers like Azure Kubernetes Service (AKS) clusters have least-privilege access to ACR registries.

Why this matters

Enterprises are converging on a central container registry pattern that hosts artifacts and container images for multiple business units and application domains. In this model:

- CI/CD pipelines from different parts of the business push container images and artifacts only to approved namespaces and repositories within a central registry.
- AKS clusters, Azure Container Apps (ACA), Azure Container Instances (ACI), and other consumers pull only from authorized repositories within a central registry.

With ABAC, these repository and namespace permission boundaries become explicit and enforceable using standard Microsoft Entra identities and role assignments. This aligns with cloud-native zero trust, supply chain hardening, and least-privilege permissions.

What ABAC in ACR means

ACR registries now support a registry permissions mode called "RBAC Registry + ABAC Repository Permissions." Configuring a registry to this mode makes it ABAC-enabled. When a registry is ABAC-enabled, registry administrators can optionally add ABAC conditions during standard Azure RBAC role assignments. These optional ABAC conditions scope the role assignment's effect to specific repositories or namespace prefixes.
ABAC can be enabled on all new and existing ACR registries across all SKUs, either during registry creation or by configuring an existing registry.

ABAC-enabled built-in roles

Once a registry is ABAC-enabled (configured to "RBAC Registry + ABAC Repository Permissions"), registry admins can use these ABAC-enabled built-in roles to grant repository-scoped permissions:

- Container Registry Repository Reader: grants image pull and metadata read permissions, including tag resolution and referrer discoverability.
- Container Registry Repository Writer: grants Repository Reader permissions, as well as image and tag push permissions.
- Container Registry Repository Contributor: grants Repository Reader and Writer permissions, as well as image and tag delete permissions.

Note that these roles do not grant repository list permissions. The separate Container Registry Repository Catalog Lister role must be assigned to grant repository list permissions. The Container Registry Repository Catalog Lister role does not support ABAC conditions; assigning it grants permission to list all repositories in a registry.

Important role behavior changes in ABAC mode

When a registry is ABAC-enabled by configuring its permissions mode to "RBAC Registry + ABAC Repository Permissions":

- Legacy data-plane roles such as AcrPull, AcrPush, and AcrDelete are not honored in ABAC-enabled registries. For ABAC-enabled registries, use the ABAC-enabled built-in roles listed above.
- Broad roles like Owner, Contributor, and Reader previously granted full control plane and data plane permissions, which is typically an overprivileged assignment. In ABAC-enabled registries, these broad roles grant only control plane permissions to the registry. They no longer grant data plane permissions, such as image push, pull, delete, or repository list permissions.
- ACR Tasks, Quick Tasks, Quick Builds, and Quick Runs no longer inherit default data-plane access to source registries; assign the ABAC-enabled roles above to the calling identity as needed.

Identities you can assign

ACR ABAC uses standard Microsoft Entra role assignments. Assign RBAC roles with optional ABAC conditions to users, groups, service principals, and managed identities, including AKS kubelet and workload identities, ACA and ACI identities, and more.

Next steps

Start using ABAC repository permissions in ACR to enforce least-privilege artifact push, pull, and delete boundaries across your CI/CD systems and container image workloads. This model is now the recommended approach for multi-tenant platform engineering patterns and central registry deployments.

To get started, follow the step-by-step guides in the official ACR ABAC documentation: https://aka.ms/acr/auth/abac
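To make the scoping model concrete, here is a toy Python model of how an ABAC condition narrows a role assignment. The role-to-action mapping mirrors the built-in roles described above, and the namespace-prefix check stands in for a StringStartsWith condition; this is an illustrative sketch, not the Azure authorization engine:

```python
# Toy model of repository-scoped role assignments (illustrative only).
ROLE_ACTIONS = {
    "Container Registry Repository Reader": {"pull"},
    "Container Registry Repository Writer": {"pull", "push"},
    "Container Registry Repository Contributor": {"pull", "push", "delete"},
}


def repo_access_allowed(assigned_role, required_action, repo, namespace_prefix=None):
    """Decide whether a role assignment permits an action on a repository.

    namespace_prefix models a StringStartsWith ABAC condition attached to
    the assignment; None models an unconditioned (registry-wide) assignment.
    """
    if required_action not in ROLE_ACTIONS.get(assigned_role, set()):
        return False
    if namespace_prefix is None:
        # No condition attached: the assignment covers every repository.
        return True
    # A prefix condition confines the assignment to one logical namespace.
    return repo.startswith(namespace_prefix)


# A CI pipeline granted Writer on the "team-a/" namespace can push there...
print(repo_access_allowed("Container Registry Repository Writer",
                          "push", "team-a/api", "team-a/"))   # True
# ...but not into another team's namespace in the same central registry.
print(repo_access_allowed("Container Registry Repository Writer",
                          "push", "team-b/api", "team-a/"))   # False
```

The same shape applies to runtime consumers: an AKS kubelet identity holding Repository Reader with a `team-a/` condition can pull only from that namespace.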
In this post, I’ll show how you can deploy an AI Agent on Azure Kubernetes Service (AKS) using a multi-tenant approach that maximizes both security and cost efficiency. By isolating each tenant’s agent instance within the cluster and ensuring that every agent has access only to its designated Azure Blob Storage container, cross-tenant data leakage risks are eliminated. This model allows you to allocate compute and storage resources per tenant, optimizing usage and spending while maintaining strong data segregation and operational flexibility—key requirements for scalable, enterprise-grade AI applications.Announcing Azure Command Launcher for Java
Optimizing JVM Configuration for Azure Deployments

Tuning the Java Virtual Machine (JVM) for cloud deployments is notoriously challenging. Over 30% of developers deploy Java workloads with no JVM configuration at all, relying on the default settings of the HotSpot JVM. The default settings in OpenJDK are intentionally conservative, designed to work across a wide range of environments and scenarios. However, these defaults often lead to suboptimal resource utilization in cloud-based deployments, where memory and CPU tend to be dedicated to application workloads (via containers and VMs) but still require intelligent management to maximize efficiency and cost-effectiveness.

To address this, we are excited to introduce jaz, a new JVM launcher optimized specifically for Azure. jaz provides better default ergonomics for Java applications running in containers and virtual machines, ensuring more efficient use of resources right from the start, and automatically leverages advanced JVM features such as AppCDS and, in the future, Project Leyden.

Why jaz? Conservative Defaults Lead to Underutilization of Resources

When deploying Java applications to the cloud, developers often need to fine-tune JVM parameters such as heap size, garbage collection strategies, and other configurations to achieve better resource utilization and potentially higher performance. The default OpenJDK settings, while safe, do not take full advantage of available resources in cloud environments, leading to unnecessary waste and increased operational costs. While advancements in dynamic heap sizing are underway by Oracle, Google, and Microsoft, they are still in development and will be available primarily in future major releases of OpenJDK.
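The utilization gap these conservative defaults create is easy to quantify. The sketch below contrasts HotSpot's container-aware default (MaxRAMPercentage=25) with a hypothetical cloud-tuned percentage; the 75% figure is an illustrative assumption for a container dedicated to one workload, not jaz's actual policy:

```python
def max_heap_mb(container_limit_mb: int, ram_percentage: float = 25.0) -> int:
    """Max heap the JVM derives from a container memory limit.

    HotSpot's container-aware default is MaxRAMPercentage=25; a launcher
    tuned for dedicated cloud workloads might raise it (75.0 below is an
    illustrative assumption, not jaz's actual behavior).
    """
    return int(container_limit_mb * ram_percentage / 100)


# For a 4 GiB container, the conservative default leaves most of the
# limit unused by the heap, while a tuned percentage reclaims it.
print(max_heap_mb(4096))        # 1024
print(max_heap_mb(4096, 75.0))  # 3072
```

The practical effect: with defaults, a workload sized for a 3 GiB heap would need a container roughly three times larger than with a cloud-tuned percentage, which is exactly the kind of waste a smarter launcher can remove.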
In the meantime, developers running applications on current and older JDK versions (such as OpenJDK 8, 11, 17, and 21) still need to optimize their configurations manually or rely on external tools like Paketo Buildpacks, which automate tuning but may not be suitable for all use cases.

With jaz, we are providing a smarter starting point for Java applications on Azure, with default configurations designed for cloud environments. The jaz launcher helps by:

- Optimizing resource utilization: By setting JVM parameters tailored for cloud deployments, jaz reduces wasted memory and CPU cycles.
- Improving first-deploy performance: New applications often require trial and error to find the right JVM settings. jaz increases the likelihood of better performance on first deployment.
- Enhancing cost efficiency: By making better use of available resources, applications using jaz can reduce unnecessary cloud costs.

This tool is ideal for developers who:

- Want better JVM defaults without diving deep into tuning guides
- Develop and deploy cloud-native microservices with Spring Boot, Quarkus, or Micronaut
- Prefer container-based workflows such as Kubernetes and OpenShift
- Deploy Java workloads on Azure Container Apps, Azure Kubernetes Service, Azure Red Hat OpenShift, or Azure VMs

How jaz works

jaz sits between your container startup command and the JVM. It will:

- Detect the cloud environment (e.g., container limits, available memory)
- Analyze the workload type and select best-fit JVM options
- Launch the Java process with optimized flags, such as:
  - Heap sizing
  - GC selection and tuning
  - Logging and diagnostics settings as needed

Example Usage

Instead of this:

```bash
$ JAVA_OPTS="-XX:... several JVM tuning flags"
$ java $JAVA_OPTS -jar myapp.jar
```

Use:

```bash
$ jaz -jar myapp.jar
```

You will automatically benefit from:

- Battle-tested defaults for cloud-native and container workloads
- Reduced memory waste
- Better startup and warmup performance
- No manual tuning required

How to Access jaz (Private Preview)

jaz is currently available through a Private Preview. During this phase, we are working closely with selected customers to refine the experience and gather feedback.

To request access: 👉 Submit your interest here

Participants in the Private Preview will receive access to jaz via easily installed standalone Linux packages for container images of the Microsoft Build of OpenJDK and Eclipse Temurin (for Java 8). Customers will have direct communication with our engineering and product teams to further enhance the tool to fit their needs. For a sneak peek, you can read the documentation.

Our Roadmap

Our long-term vision for jaz includes adaptive JVM configuration based on telemetry and usage patterns, helping developers achieve optimal performance across all Azure services.

- ⚙️ JVM configuration profiles
- 📦 AppCDS support
- 📦 Leyden support
- 🔄 Continuous tuning
- 📊 Share telemetry through Prometheus

We're excited to work with the Java community to shape this tool. Your feedback will be critical in helping us deliver a smarter, cloud-native Java runtime experience on Azure.

Building the Agentic Future
As a business built by developers, for developers, Microsoft has spent decades making it faster, easier, and more exciting to create great software. And developers everywhere have turned everything from BASIC and the .NET Framework, to Azure, VS Code, GitHub, and more into the digital world we all live in today.

But nothing compares to what's on the horizon as agentic AI redefines both how we build and the apps we're building. In fact, the promise of agentic AI is so strong that market forecasts predict we're on track to reach 1.3 billion AI agents by 2028. Our own data, from 1,500 organizations around the world, shows agent capabilities have jumped as a driver for AI applications from near last to a top-three priority when comparing deployments earlier this year to applications being defined today. Of those organizations building AI agents, 41% chose Microsoft to build and run their solutions, significantly more than any other vendor. But within software development the opportunity is even greater, with approximately 50% of businesses intending to incorporate agentic AI into software engineering this year alone.

Developers face a fascinating yet challenging world of complex agent workflows, a constant pipeline of new models, new security and governance requirements, and continued pressure to deliver value from AI, fast, all while contending with decades of legacy applications and technical debt. This week at Microsoft Build, you can see how we're making this future a reality with new AI-native developer practices and experiences, by extending the value of AI across the entire software lifecycle, and by bringing critical AI, data, and toolchain services directly to the hands of developers, in the most popular developer tools in the world.

Agentic DevOps

AI has already transformed the way we code, with 15 million developers using GitHub Copilot today to build faster. But coding is only a fraction of the developer's time.
Extending agents across the entire software lifecycle means developers can move faster from idea to production, boost code quality, and strengthen security, while removing the burden of low-value, routine, time-consuming tasks. We can even address decades of technical debt and keep apps running smoothly in production. This is the foundation of agentic DevOps—the next evolution of DevOps, reimagined for a world where intelligent agents collaborate with developer teams and with each other.

Agents introduced today across GitHub Copilot and Azure operate like a member of your development team, automating and optimizing every stage of the software lifecycle, from performing code reviews and writing tests to fixing defects and building entire specs. Copilot can even collaborate with other agents to complete complex tasks like resolving production issues. Developers stay at the center of innovation, orchestrating agents for the mundane while focusing their energy on the work that matters most.

Customers like EY are already seeing the impact:

"The coding agent in GitHub Copilot is opening up doors for each developer to have their own team, all working in parallel to amplify their work. Now we're able to assign tasks that would typically detract from deeper, more complex work, freeing up several hours for focus time." - James Zabinski, DevEx Lead at EY

You can learn more about agentic DevOps and the new capabilities announced today from Amanda Silver, Corporate Vice President of Product, Microsoft Developer Division, and Mario Rodriguez, Chief Product Officer at GitHub. And be sure to read more from GitHub CEO Thomas Dohmke about the latest with GitHub Copilot.
At Microsoft Build, see agentic DevOps in action in the following sessions, available both in person May 19–22 in Seattle and on demand:

- BRK100: Reimagining Software Development and DevOps with Agentic AI
- BRK113: The Agent Awakens: Collaborative Development with GitHub Copilot
- BRK118: Accelerate Azure Development with GitHub Copilot, VS Code & AI
- BRK131: Java App Modernization Simplified with AI
- BRK102: Agent Mode in Action: AI Coding with Vibe and Spec-Driven Flows
- BRK101: The Future of .NET App Modernization Streamlined with AI

New AI Toolchain Integrations

Beyond these new agentic capabilities, we're also releasing new integrations that bring key services directly to the tools developers are already using. From the 150 million GitHub users to the 50 million monthly users of the VS Code family, we're making it easier for developers everywhere to build AI apps.

If GitHub Copilot changed how we write code, Azure AI Foundry is changing what we can build. And the combination of the two is incredibly powerful. Now we're bringing leading models from Azure AI Foundry directly into your GitHub experience and workflow, with a new native integration. GitHub Models lets you experiment with leading models from OpenAI, Meta, Cohere, Microsoft, Mistral, and more. Test and compare performance while building models directly into your codebase, all within GitHub. You can easily compare model performance and price side by side and swap models with a simple, unified API. And keeping with our enterprise commitment, teams can set guardrails so model selection is secure, responsible, and in line with your team's policies.

Meanwhile, new Azure Native Integrations gives developers seamless access to a curated set of 20 software services from Datadog, New Relic, Pinecone, Pure Storage Cloud, and more, directly through the Azure portal, SDK, and CLI.
With Azure Native Integrations, developers get the flexibility to work with their preferred vendors across the AI toolchain with simplified single sign-on and management, while staying in Azure. Today, we are pleased to announce the addition of even more developer services:

- Arize AI: Arize's platform provides essential tooling for AI and agent evaluation, experimentation, and observability at scale. With Arize, developers can easily optimize AI applications through tools for tracing, prompt engineering, dataset curation, and automated evaluations. Learn more.
- LambdaTest HyperExecute: LambdaTest HyperExecute is an AI-native test execution platform designed to accelerate software testing. It enables developers and testers to run tests up to 70% faster than traditional cloud grids by optimizing test orchestration and observability and streamlining TestOps to expedite release cycles. Learn more.
- Mistral: Mistral and Microsoft announced a partnership today, which includes integrating Mistral La Plateforme as part of Azure Native Integrations. Mistral La Plateforme provides pay-as-you-go API access to Mistral AI's latest large language models for text generation, embeddings, and function calling. Developers can use this AI platform to build AI-powered applications with retrieval-augmented generation (RAG), fine-tune models for domain-specific tasks, and integrate AI agents into enterprise workflows.
- MongoDB (Public Preview): MongoDB Atlas is a fully managed cloud database that provides scalability, security, and multi-cloud support for modern applications. Developers can use it to store and search vector embeddings, implement retrieval-augmented generation (RAG), and build AI-powered search and recommendation systems. Learn more.
- Neon: Neon Serverless Postgres is a fully managed, autoscaling PostgreSQL database designed for instant provisioning, cost efficiency, and AI-native workloads.
Developers can use it to rapidly spin up databases for AI agents, store vector embeddings with pgvector, and scale AI applications seamlessly. Learn more.

Java and .NET App Modernization

Shipping to production isn't the finish line—and maintaining legacy code shouldn't slow you down. Today we're announcing comprehensive resources to help you successfully plan and execute app modernization initiatives, along with new agents in GitHub Copilot to help you modernize at scale in a fraction of the time. In fact, customers like Ford China are seeing breakthrough results, reducing up to 70% of their Java migration effort by using GitHub Copilot to automate middleware code migration tasks.

Microsoft's App Modernization Guidance applies decades of enterprise app experience to help you analyze production apps and prioritize modernization efforts, while applying best practices and technical patterns to ensure success. And now GitHub Copilot transforms the modernization process, handling code assessments, dependency updates, and remediation across your production Java and .NET apps (support for mainframe environments is coming soon!). It generates and executes update plans automatically, while giving you full visibility, control, and a clear summary of changes. You can even raise modernization tasks in GitHub Issues from our proven Azure Migrate service to assign to developer teams. Your apps become more secure, maintainable, and cost-efficient, faster than ever.

Learn how we're reimagining app modernization for the era of AI with the new App Modernization Guidance and the modernization agent in GitHub Copilot to help you modernize your complete app estate.

Scaling AI Apps and Agents

Sophisticated apps and agents need an equally powerful runtime. And today we're advancing our complete portfolio, from serverless with Azure Functions and Azure Container Apps to the control and scale of Azure Kubernetes Service.
At Build, we're simplifying how you deploy, test, and operate open-source and custom models on Kubernetes through the Kubernetes AI Toolchain Operator (KAITO); making it easy to inference AI models with the flexibility, auto-scaling, pay-per-second pricing, and governance of Azure Container Apps serverless GPUs; helping you create real-time, event-driven workflows for AI agents by integrating Azure Functions with Azure AI Foundry Agent Service; and much, much more.

The platform you choose to scale your apps has never been more important. With new integrations with Azure AI Foundry, advanced automation that reduces developer overhead, and simplified operations, security, and governance, Azure's app platform can help you deliver the sophisticated, secure AI apps your business demands.

To see the full slate of innovations across the app platform, check out: Powering the Next Generation of AI Apps and Agents on the Azure Application Platform

Tools that keep pace with how you need to build

This week we're also introducing new enhancements to our tooling to help you build as fast as possible and explore what's next with AI, all directly from your editor. GitHub Copilot for Azure brings Azure-specific tools into agent mode in VS Code, keeping you in the flow as you create, manage, and troubleshoot cloud apps. Meanwhile, the Azure Tools for VS Code extension pack brings everything you need to build apps on Azure using GitHub Copilot to VS Code, making it easy to discover and interact with the cloud services that power your applications.

Microsoft's gallery of AI App Templates continues to expand, helping you rapidly move from concept to production app deployed on Azure. Each template includes a fully working application, complete with app code, AI features, infrastructure as code (IaC), configurable CI/CD pipelines with GitHub Actions, and an application architecture, ready to deploy to Azure.
These templates reflect the most common patterns and use cases we see across our AI customers, from getting started with AI agents to building GenAI chat experiences with your enterprise data, and help you learn best practices such as keyless authentication. Learn more by reading the latest on Build Apps and Agents with Visual Studio Code and Azure.

Building the agentic future

The emergence of agentic DevOps, the new wave of development powered by GitHub Copilot, and the new services launching across Microsoft Build will be transformative. But just as we've seen over the first 50 years of Microsoft's history, the real impact will come from the global community of developers. You all have the power to turn these tools and platforms into advanced AI apps and agents that make every business move faster, operate more intelligently, and innovate in ways that were previously impossible. Learn more and get started with GitHub Copilot.

Leveraging Azure Container Apps Labels for Environment-based Routing and Feature Testing
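Before diving into the walkthrough, it helps to picture how label routing and weighted traffic splitting interact. The following is a conceptual Python model, not the actual Azure Container Apps data plane: a request addressed to a label goes straight to that revision, while unlabeled traffic is divided by weight. Revision names, labels, and weights are illustrative:

```python
import random


def route_request(traffic, label_header=None, rng=random.random):
    """Conceptual model of ACA revision routing.

    traffic: list of {"revision", "label", "weight"} entries whose
    weights sum to 100. A request addressed to a label's FQDN bypasses
    the weights; otherwise a weighted choice is made.
    """
    if label_header is not None:
        for entry in traffic:
            if entry["label"] == label_header:
                return entry["revision"]
        raise KeyError(f"no revision carries label {label_header!r}")
    point, cumulative = rng() * 100, 0.0
    for entry in traffic:
        cumulative += entry["weight"]
        if point < cumulative:
            return entry["revision"]
    return traffic[-1]["revision"]


TRAFFIC = [
    {"revision": "myapp--stable", "label": "production", "weight": 90},
    {"revision": "myapp--experiment", "label": "staging", "weight": 10},
]

# Internal developers hit the staging label directly, regardless of weights,
# while live users are split 90/10 between the two revisions.
print(route_request(TRAFFIC, label_header="staging"))  # myapp--experiment
```

This is why an experimental revision can receive 0% of public traffic yet still be fully testable through its label before weights are shifted.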
Azure Container Apps offers a powerful feature through labels and traffic splitting that can help developers easily manage multiple versions of an app, route traffic based on different environments, and enable controlled feature testing without disrupting live users. In this blog, we'll walk through a practical scenario where we deploy an experimental feature in a staging revision, test it with internal developers, and then switch the feature to production once it's validated. We'll use Azure Container Apps labels and traffic splitting to achieve this seamless deployment process.

New Features in Azure Container Apps VS Code extension
👆 Install VS Code extension

Summary of Major Changes

- New managed identity support for connecting container apps to container registries. This is now the preferred method for securing these resources, provided you have sufficient privileges.
- New Container view: introduced with several commands for easier editing of container images and environment variables.
- One-click deployment: Deploy to Container App... added to the top-level container app node. This supports deployments from a workspace project or container registry. To manage multiple applications in a workspace project or enable faster deployments with saved settings, use Deploy Project from Workspace, accessible via the workspace view.
- Improved activity log output: all major commands now include improved activity log outputs, making it easier to track and manage your activities.
- Quickstart image for container app creation: the Create Container App... command now initializes with a quickstart image, simplifying the setup process.

New Commands and Enhancements

- Managed identity support for new connections to container registries.
- New command Deploy to Container App... found on the container app item. This one-click deploy command allows deploying from a workspace project or container registry while in single revision mode.
- New Container view under the container app item allows direct access to the container's image and environment variables.
- New command Edit Container Image... allows editing of container images without prompting to update environment variables.
- Environment variable CRUD commands: multiple new commands for creating, reading, updating, and deleting environment variables.
- Convert environment variable to secret: quickly turn an environment variable into a container app secret with this new command.

Changes and Improvements

- Command Create Container App... now always starts with a quickstart image.
- Renamed the Update Container Image... command to Edit Container....
This command is now found on the container item.

- When running Deploy Project from Workspace..., if remote environment variables conflict with saved settings, prompt for an update. Adds a new envPath option, useRemoteConfiguration.
- Deploying an image with the Docker extension now allows targeting specific revisions and containers.
- When deploying a new image to a container app, only show the ingress prompt when more than the image tag has changed.
- Improved ACR selection dropdowns, providing better pick recommendations and sorting by resource group.
- Improved activity log outputs for major commands.
- Changed the draft deploy prompt to a quick pick instead of a pop-up window.

We hope these new features and improvements will simplify deployments and make your Azure Container Apps experience even better. Stay tuned for more updates, and as always, we appreciate your feedback! Try out these new features today and let us know what you think! Your feedback is invaluable in helping us continue to improve and innovate.

Azure Container Apps VS Code Extension

Full changelog: