Microsoft Azure Cloud HSM is now generally available
Microsoft Azure Cloud HSM is now generally available. Azure Cloud HSM is a highly available, FIPS 140-3 Level 3 validated single-tenant hardware security module (HSM) service designed to meet the highest security and compliance standards. With full administrative control over their HSM, customers can securely manage cryptographic keys and perform cryptographic operations within their own dedicated Cloud HSM cluster.

In today’s digital landscape, organizations face an unprecedented volume of cyber threats, data breaches, and regulatory pressures. At the heart of securing sensitive information lies a robust key management and encryption strategy, which ensures that data remains confidential, tamper-proof, and accessible only to authorized users. However, encryption alone is not enough. How cryptographic keys are managed determines the true strength of security. Every interaction in the digital world, from processing financial transactions, securing applications like PKI, database encryption, and document signing, to securing cloud workloads and authenticating users, relies on cryptographic keys. A poorly managed key is a security risk waiting to happen. Without a clear key management strategy, organizations face challenges such as data exposure, regulatory non-compliance, and operational complexity.

An HSM is a cornerstone of a strong key management strategy, providing physical and logical security to safeguard cryptographic keys. HSMs are purpose-built devices designed to generate, store, and manage encryption keys in a tamper-resistant environment, ensuring that even in the event of a data breach, protected data remains unreadable. As cyber threats evolve, organizations must take a proactive approach to securing data with enterprise-grade encryption and key management solutions. Microsoft Azure Cloud HSM empowers businesses to meet these challenges head-on, ensuring that security, compliance, and trust remain non-negotiable priorities in the digital age.

Key Features of Azure Cloud HSM
Azure Cloud HSM ensures high availability and redundancy by automatically clustering multiple HSMs and synchronizing cryptographic data across three instances, eliminating the need for complex configurations. It optimizes performance through load balancing of cryptographic operations, reducing latency. Periodic backups enhance security by safeguarding cryptographic assets and enabling seamless recovery. Designed to meet FIPS 140-3 Level 3, it provides robust security for enterprise applications.

Ideal use cases for Azure Cloud HSM
Azure Cloud HSM is ideal for organizations migrating security-sensitive applications from on-premises to Azure Virtual Machines or transitioning from Azure Dedicated HSM or AWS Cloud HSM to a fully managed Azure-native solution. It supports applications requiring PKCS#11, OpenSSL, and JCE for seamless cryptographic integration and enables running shrink-wrapped software like Apache/Nginx SSL Offload, Microsoft SQL Server/Oracle TDE, and ADCS on Azure VMs. Additionally, it supports tools and applications that require document and code signing.

Get started with Azure Cloud HSM
Ready to deploy Azure Cloud HSM?
Learn more and start building today: Get Started Deploying Azure Cloud HSM
Customers can download the Azure Cloud HSM SDK and Client Tools from GitHub: Microsoft Azure Cloud HSM SDK
Stay tuned for further updates as we continue to enhance Microsoft Azure Cloud HSM to support your most demanding security and compliance needs.
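For readers who prefer to script the deployment rather than click through the portal, here is a minimal sketch of what provisioning a Cloud HSM cluster with Bicep could look like. Treat it as an illustration only: it assumes the Microsoft.HardwareSecurityModules/cloudHsmClusters resource type together with an illustrative API version, SKU, and tags, all of which should be verified against the current ARM reference before use.

```bicep
// Minimal sketch (not the official quickstart): provisioning an Azure Cloud
// HSM cluster with Bicep. The API version and SKU below are assumptions and
// should be checked against the current Microsoft.HardwareSecurityModules
// reference before deploying.
param location string = resourceGroup().location
param clusterName string = 'chsm-demo-001'

resource cloudHsmCluster 'Microsoft.HardwareSecurityModules/cloudHsmClusters@2024-06-30-preview' = {
  name: clusterName
  location: location
  sku: {
    family: 'B'          // assumed SKU family
    name: 'Standard_B1'  // assumed SKU name
  }
  tags: {
    environment: 'dev'   // illustrative tag
  }
}

output cloudHsmClusterId string = cloudHsmCluster.id
```

From there, cluster administration, key management, and application integration (PKCS#11, OpenSSL, JCE) happen through the SDK and client tools linked above rather than through ARM.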
Enhancing Azure Private DNS Resiliency with Internet Fallback
Is your Azure environment prone to DNS resolution hiccups, especially when leveraging Private Link and multiple virtual networks? Dive into our latest blog post, "Enhancing Azure Private DNS Resiliency with Internet Fallback," and discover how to eliminate those frustrating NXDOMAIN errors and ensure seamless application availability. I break down the common challenges faced in complex Azure setups, including isolated Private DNS zones and hybrid environments, and reveal how the new internet fallback feature acts as a vital safety net.

Learn how this powerful tool automatically switches to public DNS resolution when private resolution fails, minimizing downtime and simplifying management. Our tutorial walks you through the easy steps to enable internet fallback, empowering you to fortify your Azure networks and enhance application resilience. Whether you're dealing with multi-tenant deployments or intricate service dependencies, this feature is your key to uninterrupted connectivity.

Don't let DNS resolution issues disrupt your operations. Read the full article to learn how to implement Azure Private DNS internet fallback and ensure your applications stay online, no matter what.
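If you want a feel for what the configuration looks like in code, the hedged Bicep sketch below enables fallback on a Private DNS zone's virtual network link. It assumes the behaviour is driven by the link's resolutionPolicy property (set to NxDomainRedirect); the zone name, link name, and API version are placeholders chosen for illustration, so confirm the exact schema in the Azure Private DNS documentation.

```bicep
// Hedged sketch: enabling internet fallback (NXDOMAIN redirect) on the
// virtual network link of an existing Private DNS zone. The property name,
// API version, and resource names are assumptions for illustration.
param vnetId string
param zoneName string = 'privatelink.blob.core.windows.net'

resource privateZone 'Microsoft.Network/privateDnsZones@2024-06-01' existing = {
  name: zoneName
}

resource zoneLink 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2024-06-01' = {
  parent: privateZone
  name: 'link-with-internet-fallback'
  location: 'global'
  properties: {
    registrationEnabled: false
    virtualNetwork: {
      id: vnetId
    }
    // Assumed setting: fall back to public DNS when the private zone
    // returns NXDOMAIN for a query.
    resolutionPolicy: 'NxDomainRedirect'
  }
}
```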
Provisioning Azure Storage Containers and Folders Using Bicep and PowerShell
Overview: This blog demonstrates how to:
- Deploy an Azure Storage Account and Blob containers using Bicep
- Create a folder-like structure inside those containers using PowerShell
This approach is ideal for cloud engineers and DevOps professionals seeking end-to-end automation for structured storage provisioning.

Bicep natively supports the creation of:
- Storage Accounts
- Containers (via the blobServices resource)
However, folders (directories) inside containers are not first-class resources in ARM/Bicep — they're created by uploading a blob with a virtual path, e.g., folder1/blob.txt. So how can we automate the creation of these folder structures without manually uploading dummy blobs?

You can check out the blog "Designing Reusable Bicep Modules: A Databricks Example" for a good reference on how to structure the storage account pattern. It covers reusable module design and shows how to keep things clean and consistent.

1. Deploy an Azure Storage Account and Blob containers using Bicep
You can provision a Storage Account and its associated Blob containers with just a few lines of Bicep, along with the parameters it needs (a consolidated sketch of the full pattern appears just before the conclusion of this post). The process involves:
- Defining the Microsoft.Storage/storageAccounts resource for the Storage Account.
- Adding a nested blobServices/containers resource to create Blob containers within it.
- Using parameters to dynamically assign names, access tiers, and network rules.

2. Create a folder-like structure inside those containers using PowerShell
To simulate a directory structure in Azure Data Lake Storage Gen2, use Bicep with deployment scripts that execute az storage fs directory create. This enables automation of folder creation inside blob containers at deployment time. In this setup:
- A Microsoft.Resources/deploymentScripts resource is used.
- The az storage fs directory create command creates virtual folders inside containers.
- Access is authenticated using a secure account key fetched via storageAccount.listKeys().

Parameter Flow and Integration
The solution uses Bicep's module linking capabilities:
- The main module outputs the Storage Account name and container names.
- These outputs are passed as parameters to the deployment script module.
- The script loops through each container and folder, creating each directory with az storage fs directory create.
Here's the final setup with the storage account and container ready, plus the directory created inside—everything's all set!
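To tie the two steps together, here is the consolidated, hedged sketch referenced earlier: a Storage Account with hierarchical namespace enabled, a set of Blob containers, and a deployment script that creates the directories with az storage fs directory create. Resource names, API versions, and the CLI version are illustrative assumptions, and passing the account key via listKeys() mirrors the approach described above even though a managed identity is generally preferable.

```bicep
// Hedged sketch of the full pattern: storage account, containers, and a
// deployment script that creates folder-like directories. Names, API
// versions, and the CLI version are illustrative assumptions.
param location string = resourceGroup().location
param storageAccountName string = 'stdemo${uniqueString(resourceGroup().id)}'
param containerNames array = ['raw', 'curated']
param directoryNames array = ['folder1', 'folder2']

resource storageAccount 'Microsoft.Storage/storageAccounts@2023-05-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
  properties: {
    isHnsEnabled: true // hierarchical namespace, required for real directories
  }
}

resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2023-05-01' = {
  parent: storageAccount
  name: 'default'
}

resource containers 'Microsoft.Storage/storageAccounts/blobServices/containers@2023-05-01' = [
  for containerName in containerNames: {
    parent: blobService
    name: containerName
  }
]

// Deployment script that loops through each container and creates the
// requested directories inside it.
resource createDirectories 'Microsoft.Resources/deploymentScripts@2023-08-01' = {
  name: 'create-directories'
  location: location
  kind: 'AzureCLI'
  dependsOn: [
    containers
  ]
  properties: {
    azCliVersion: '2.59.0'
    retentionInterval: 'PT1H'
    environmentVariables: [
      { name: 'ACCOUNT_NAME', value: storageAccount.name }
      { name: 'ACCOUNT_KEY', secureValue: storageAccount.listKeys().keys[0].value }
      { name: 'CONTAINERS', value: join(containerNames, ' ') }
      { name: 'DIRECTORIES', value: join(directoryNames, ' ') }
    ]
    scriptContent: '''
      for container in $CONTAINERS; do
        for dir in $DIRECTORIES; do
          az storage fs directory create \
            --account-name "$ACCOUNT_NAME" \
            --account-key "$ACCOUNT_KEY" \
            --file-system "$container" \
            --name "$dir"
        done
      done
    '''
  }
}
```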
Conclusion
This approach is especially useful in enterprise environments where storage structures must be provisioned consistently across environments. You can extend this pattern further to:
- Tag blobs/folders
- Assign RBAC roles
- Handle folder-level metadata
Have you faced similar challenges with Azure Storage provisioning? Share your experience or drop a comment!

Designing Reusable Bicep Modules: A Databricks Example
In this blog, I’ll walk you through how to design a reusable Bicep module for deploying Azure Databricks, a popular analytics and machine learning platform. We'll focus on creating a parameterized and composable pattern using Bicep and Azure Verified Modules (AVM), enabling your team to replicate this setup across environments with minimal changes.

Why Reusability in Bicep Matters: As your Azure environment scales, manually copying and modifying Bicep files for every service or environment becomes error-prone and unmanageable. Reusable Bicep modules help:
- Eliminate redundant code
- Enforce naming, tagging, and networking standards
- Accelerate onboarding of new services or teams
- Enable self-service infrastructure in CI/CD pipelines

Here, we'll create a reusable module to deploy an Azure Databricks Workspace with:
- Consistent naming conventions
- Virtual network injection (VNet)
- Private endpoint integration (UI, Blob, DFS)
- Optional DNS zone configuration
- Role assignments
- AVM module integration

Module Inputs (Parameters)
Your Bicep pattern uses several key parameters.

Parameterizing the Pattern
These parameters allow the module to be flexible yet consistent across multiple environments.
Naming conventions: The nameObject structure is used to build consistent names for all resources:

```bicep
var adbworkspaceName = toLower('${nameObject.client}-${nameObject.workloadIdentifier}-${nameObject.environment}-${nameObject.region}-adb-${nameObject.suffix}')
```

Configuring Private Endpoints and DNS
The module allows defining private endpoint configurations for both Databricks and storage. This logic ensures:
- Private access to the Databricks UI
- Optional DNS zone integration for custom resolution
You can extend this to include Blob and DFS storage private endpoints, which are essential for secure data lake integrations.

Plugging in the AVM Module
The actual deployment leverages an Azure Verified Module (AVM) stored in an Azure Container Registry (ACR). Example usage: consuming the module in your main Bicep deployment is shown in the sketch below.
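Since the screenshots from the original post are not reproduced here, the sketch below illustrates roughly what consuming such a module from a Bicep registry can look like. The registry path, version tag, parameter names (including the exact shape of nameObject), and the output name are assumptions based on the pattern described above, not the published interface of any specific AVM module.

```bicep
// Illustrative sketch: consuming a Databricks workspace module published to
// an Azure Container Registry. The registry path, tag, parameter names, and
// output name are assumptions based on the pattern described in this post.
param nameObject object = {
  client: 'contoso'
  workloadIdentifier: 'analytics'
  environment: 'dev'
  region: 'weu'
  suffix: '001'
}
param vnetId string
param publicSubnetName string = 'snet-adb-public'
param privateSubnetName string = 'snet-adb-private'

module databricksWorkspace 'br:myacr.azurecr.io/bicep/patterns/databricks:v1.0.0' = {
  name: 'deploy-databricks-workspace'
  params: {
    nameObject: nameObject
    customVirtualNetworkId: vnetId
    customPublicSubnetName: publicSubnetName
    customPrivateSubnetName: privateSubnetName
    enableNoPublicIp: true
  }
}

output workspaceName string = databricksWorkspace.outputs.workspaceName
```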
Conclusion
This Bicep-based pattern, like any well-designed reusable module, enables consistent, secure, and scalable deployments across your Azure environments. Whether you're deploying a single workspace or rolling out 50 across environments, this pattern helps ensure governance and simplicity.
Resources
- Azure Bicep Documentation
- Azure Verified Modules
- Azure Databricks Docs

Deploying a GitLab Runner on Azure: A Step-by-Step Guide
This guide walks you through the entire process — from VM setup to running your first successful job.

Step 1: Create an Azure VM
- Log in to the Azure Portal.
- Create a new VM with the following settings:
  - Image: Ubuntu 20.04 LTS (recommended)
  - Authentication: SSH Public Key (generate a .pem file for secure access)
- Once created, note the public IP address.

Connect to the VM
From your terminal:

```bash
ssh -i "/path/to/your/key.pem" admin_name@<YOUR_VM_PUBLIC_IP>
```

Note: Make sure to replace the path to the .pem file and the admin name with the values you used during VM deployment.

Step 2: Install Docker on the Azure VM
Run the following commands to install Docker:

```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker   # Enable Docker to start automatically on boot
sudo usermod -aG docker $USER
```

Test Docker with:

```bash
docker run hello-world
```

A success message should appear. If you see permission denied, run:

```bash
newgrp docker
```

Note: Log out and log back in (or restart the VM) for group changes to apply.

Step 3: Install GitLab Runner
1. Download the GitLab Runner binary.
2. Assign execution permissions.
3. Install and start the runner as a service.

```bash
# Step 1: Download the GitLab Runner binary
sudo curl -L --output /usr/local/bin/gitlab-runner \
  https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64

# Step 2: Assign execution permissions
sudo chmod +x /usr/local/bin/gitlab-runner

# Step 3: Install and start the runner as a service
sudo gitlab-runner install --user=azureuser
sudo gitlab-runner start
sudo systemctl enable gitlab-runner   # Enable GitLab Runner to start automatically on boot
```

Step 4: Register the GitLab Runner
Navigate to the runner section in your GitLab project to generate a registration token (GitLab -> Settings -> CI/CD -> Runners -> New Project Runner).
On your Azure VM, run:

```bash
sudo gitlab-runner register \
  --url https://gitlab.com/ \
  --registration-token <YOUR_TOKEN> \
  --executor docker \
  --docker-image ubuntu:22.04 \
  --description "Azure VM Runner" \
  --tag-list "gitlab-runner-vm" \
  --non-interactive
```

Note: Replace the registration token, description, and tag list as required.
After registration, restart the runner:

```bash
sudo gitlab-runner restart
```

Verify the runner's status with:

```bash
sudo gitlab-runner list
```

Your runner should appear in the list. If the runner does not appear, make sure to follow Step 4 as described.

Step 5: Add Runner Tags to Your Pipeline
In .gitlab-ci.yml:

```yaml
default:
  tags:
    - gitlab-runner-vm
```

Step 6: Verify Pipeline Execution
Create a simple job to test the runner:

```yaml
test-runner:
  tags:
    - gitlab-runner-vm
  script:
    - echo "Runner is working!"
```

Troubleshooting Common Issues
Permission Denied (Docker Error)
Error: docker: permission denied while trying to connect to the Docker daemon socket
Solution:
- Run newgrp docker
- If unresolved, restart Docker: sudo systemctl restart docker

No Active Runners Online
Error: This job is stuck because there are no active runners online.
Solution:
- Check runner status: sudo gitlab-runner status
- If inactive, restart the runner: sudo gitlab-runner restart
- Ensure the runner tag in your pipeline matches the one you provided while creating the runner for the project

Final Tips
Always restart the runner after making configuration changes: sudo gitlab-runner restart
Remember to periodically check the runner's status and update its configuration as needed to keep it running smoothly. Happy coding and enjoy the enhanced capabilities of your new GitLab Runner setup!

Creating an Application Landing Zone on Azure Using Bicep
🧩 What Is an Application Landing Zone?
An Application Landing Zone is a pre-configured Azure environment designed to host applications in a secure, scalable, and governed manner. It is a foundational component of the Azure Landing Zones framework, which supports enterprise-scale cloud adoption by providing a consistent and governed environment for deploying workloads.

🔍 Key Characteristics
- Security and Compliance: Built-in policies and controls ensure that applications meet organizational and regulatory requirements.
- Pre-configured Infrastructure: Includes networking, identity, security, monitoring, and governance components that are ready to support application workloads.
- Scalability and Flexibility: Designed to scale with application demand, supporting both monolithic and microservices-based architectures.
- Governance and Management: Integrated with Azure Policy, Azure Monitor, and Azure Security Center to enforce governance and provide operational insights.
- Developer Enablement: Provides a consistent environment that accelerates development and deployment cycles.

🏗️ Core Components
An Application Landing Zone typically includes:
- Networking with Virtual Networks (VNets), subnets, and NSGs
- Azure Active Directory (AAD) integration
- Role-Based Access Control (RBAC)
- Azure Key Vault/Managed HSM for secrets management
- Monitoring and Logging via Azure Monitor and Log Analytics
- Application Gateway or Azure Front Door for traffic management
- CI/CD Pipelines integrated with Azure DevOps or GitHub Actions

🛠️ Prerequisites
Before deploying the Application Landing Zone, please ensure the following:

✅ Access & Identity
Azure Subscription Access: You must have access to an active Azure subscription where the landing zone will be provisioned. This subscription should be part of a broader management group hierarchy if you're following enterprise-scale landing zone patterns.
A Service Principal (SPN): A Service Principal is required for automating deployments via CI/CD pipelines or Infrastructure as Code (IaC) tools. It should have at least the Contributor role at the subscription level to create and manage resources. Explicit access to the following is required:
- Resource Groups (for deploying application components)
- Azure Policy (to assign and manage governance rules)
- Azure Key Vault (to retrieve secrets, certificates, or credentials)
Azure Active Directory (AAD): Ensure that AAD is properly configured for:
- Role-Based Access Control (RBAC)
- Group-based access assignments
- Conditional Access policies (if applicable)
Tip: Use Managed Identities where possible to reduce the need for credential management.

✅ Tooling
Azure CLI
- Required for scripting and executing deployment commands.
- Ensure you're authenticated using az login or a service principal.
- Recommended version: 2.55.0 or later for compatibility with the latest Bicep and Azure features.
Azure PowerShell
- Installed and authenticated (Connect-AzAccount)
- Recommended module: Az module version 11.0.0 or later
Visual Studio Code
- Preferred IDE for working with Bicep and ARM templates. Install the following extensions:
  - Bicep: for authoring and validating infrastructure templates.
  - Azure Account: for managing Azure sessions and subscriptions.
Source Control & CI/CD Integration
Access to GitHub or Azure DevOps is required for:
- Storing IaC templates
- Automating deployments via pipelines
- Managing version control and collaboration
💡 Tip: Use GitHub Actions or Azure Pipelines to automate validation, testing, and deployment of your landing zone templates.
✅ Environment Setup
Resource Naming Conventions: Define a naming standard that reflects resource type, environment, region, and application. Example: rg-app1-prod-weu for a production resource group in West Europe.
Tagging Strategy: Predefine tags for:
- Cost Management (e.g., CostCenter, Project)
- Ownership (e.g., Owner, Team)
- Environment (e.g., Dev, Test, Prod)
Networking Baseline: Ensure that required VNets, subnets, and DNS settings are in place. Plan for hybrid connectivity if integrating with on-premises networks (e.g., via VPN or ExpressRoute).
Security Baseline: Define and apply:
- RBAC roles for least-privilege access
- Azure built-in as well as custom Policies for compliance enforcement
- NSGs and ASGs for network security

🧱 Application Landing Zone Architecture Using Bicep
Bicep is a domain-specific language (DSL) for deploying Azure resources declaratively. It simplifies the authoring experience compared to traditional ARM templates and supports modular, reusable, and maintainable infrastructure-as-code (IaC) practices. The Application Landing Zone (App LZ) architecture leverages Bicep to define and deploy a secure, scalable, and governed environment for hosting applications. This architecture is structured into phases, each representing a logical grouping of resources. These phases align with enterprise cloud adoption frameworks and enable teams to deploy infrastructure incrementally and consistently.

🧱 Architectural Phases
The App LZ is typically divided into the following phases, each implemented using modular Bicep templates:
1. Foundation Phase
Establishes the core infrastructure and governance baseline:
- Resource groups
- Virtual networks and subnets
- Network security groups (NSGs)
- Diagnostic settings
- Azure Policy assignments
2. Identity & Access Phase
Implements secure access and identity controls:
- Role-Based Access Control (RBAC)
- Azure Active Directory (AAD) integration
- Managed identities
- Key Vault access policies
3. Security & Monitoring Phase
Ensures observability and compliance:
- Azure Monitor and Log Analytics
- Security Center configuration
- Alerts and action groups
- Defender for Cloud settings
4. Application Infrastructure Phase
Deploys application-specific resources:
- App Services, AKS, or Function Apps
- Application Gateway or Azure Front Door
- Storage accounts, databases, and messaging services
- Private endpoints and service integrations
5. CI/CD Integration Phase
Automates deployment and lifecycle management:
- GitHub Actions or Azure Pipelines
- Deployment scripts and parameter files
- Secrets management via Key Vault
- Environment-specific configurations

🔁 Modular Bicep Templates
Each phase is implemented using modular Bicep templates, which offer:
- Reusability: Templates can be reused across environments (Dev, Test, Prod).
- Flexibility: Parameters allow customization without modifying core logic.
- Incremental Deployment: Phases can be deployed independently or chained together.
- Testability: Each module can be validated against test cases before full deployment.
💡 Example: A network.bicep module can be reused across multiple landing zones with different subnet configurations.
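To make that concrete, here is a hedged sketch of a stripped-down network.bicep module and of a main template consuming it with environment-specific subnet values. The parameter names and address ranges are examples chosen for illustration, not the interface of the actual solution.

```bicep
// modules/network.bicep - a stripped-down reusable module (illustrative only).
param vnetName string
param location string = resourceGroup().location
param addressSpace string
param subnets array // e.g. [ { name: 'snet-app', prefix: '10.10.1.0/24' } ]

resource vnet 'Microsoft.Network/virtualNetworks@2023-11-01' = {
  name: vnetName
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [addressSpace]
    }
    subnets: [
      for subnet in subnets: {
        name: subnet.name
        properties: {
          addressPrefix: subnet.prefix
        }
      }
    ]
  }
}

output vnetId string = vnet.id
```

A landing zone template would then call the module once per environment, passing only the values that differ:

```bicep
// main.bicep - consuming the module with environment-specific values.
module network './modules/network.bicep' = {
  name: 'lz-network'
  params: {
    vnetName: 'vnet-app1-dev-weu'
    addressSpace: '10.10.0.0/16'
    subnets: [
      { name: 'snet-app', prefix: '10.10.1.0/24' }
      { name: 'snet-data', prefix: '10.10.2.0/24' }
    ]
  }
}
```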
To ensure a smooth and automated deployment experience, follow the phases above end to end, from environment setup through module deployment.

✅ Benefits of This Approach
- Consistency & Compliance: Enforces Azure best practices and governance policies
- Modularity: Reusable Bicep modules simplify maintenance and scaling
- Automation: CI/CD pipelines reduce manual effort and errors
- Security: Aligns with Microsoft's security baselines and CAF
- Scalability: Easily extendable to support new workloads or environments
- Native Azure Integration: Supports all Azure resources and features
- Tooling Support: Integrated with Visual Studio Code, Azure CLI, and GitHub

🔄 Why Choose Bicep Over Terraform?
- First-Party Integration: Bicep is a first-party solution maintained by Microsoft, ensuring day-one support for new Azure services and API changes. This means customers can immediately leverage the latest features and updates without waiting for third-party providers to catch up.
- Azure-Specific Optimization: Bicep is deeply integrated with Azure services, offering a tailored experience for Azure resource management. This integration ensures that deployments are optimized for Azure, providing better performance and reliability.
- Simplified Syntax: Bicep uses a domain-specific language (DSL) that is more concise and easier to read compared to Terraform's HCL (HashiCorp Configuration Language). This simplicity reduces the learning curve and makes it easier for teams to write and maintain infrastructure code.
- Incremental Deployment: Unlike Terraform, Bicep does not store state. Instead, it relies on incremental deployment, which simplifies the deployment process and reduces the complexity associated with state management. This approach ensures that resources are deployed consistently without the need for managing state files.
- Azure Policy Integration: Bicep integrates seamlessly with Azure Policy, allowing for preflight validation to ensure compliance with policies before deployment. This integration helps in maintaining governance and compliance across deployments.
- What-If Analysis: Bicep offers a "what-if" operation that predicts the changes before deploying a Bicep file. This feature allows customers to preview the impact of their changes without making any modifications to the existing infrastructure.

🏁 Conclusion
Creating an Application Landing Zone using Bicep provides a robust, scalable, and secure foundation for deploying applications in Azure. By following a phased, modular approach and leveraging automation, organizations can accelerate their cloud adoption journey while maintaining governance and operational excellence.

Microsoft Fabric: Automate Artifact Deployment with Azure DevOps and Python
Microsoft Fabric is rapidly becoming the go-to platform for enterprise-grade analytics and reporting. However, deploying artifacts like dataflows, datasets, and reports across environments (Dev → Test → Prod) can be a manual and error-prone process. This blog walks you through a fully automated and secure CI/CD solution that uses Azure DevOps Pipelines, Python scripting, and Microsoft Fabric REST APIs to streamline artifact deployment across Fabric workspaces. Whether you're a DevOps engineer or a Fabric administrator, this setup brings speed, security, and consistency to your deployment pipeline.

✅ The Challenge
Microsoft Fabric currently provides a deployment pipeline console for promotion of artifacts. Manual promotion across environments introduces risks like:
- Misconfiguration or broken dependencies
- Lack of traceability or versioning
- Security and audit concerns with manual artifact movement

🔧 The Solution — Python + Azure DevOps + Fabric API
This solution uses a tokenized YAML pipeline combined with a custom Python script to promote Fabric artifacts between environments using Fabric Deployment Pipelines and REST APIs.

🔑 Key Advantages & How This Helps You
✅ Zero-Touch Deployment – Automates Dev → Test artifact promotion using Fabric Deployment APIs
✅ Repeatable & Consistent – YAML pipelines enforce consistent promotion logic
✅ Secure Authentication – OAuth2 ROPC flow with service account credentials
✅ Deployment Visibility – Logs tracked via DevOps and Fabric API responses
✅ Low Overhead – Just a lightweight Python script—no external tools needed

🧩 Core Features
1️⃣ Fabric Deployment Pipeline Integration – Automates artifact promotion across Dev, Test, and Prod stages
2️⃣ Environment-Aware Deployment – Supports variable groups and environment-specific parameters
3️⃣ Flexible API Control – Granular stage control with REST API interactions
4️⃣ Real-Time Status Logging – Pipeline polls deployment status from Fabric
5️⃣ Modular YAML Architecture – Easy to plug into any existing DevOps pipeline
6️⃣ Secure Secrets Management – Credentials and sensitive info managed via DevOps variable groups

⚙️ How It Works
1. Define a Fabric Deployment Pipeline in your source workspace (Dev).
2. Configure an Azure DevOps pipeline with YAML and use Python to trigger the Fabric deployment stage.
3. Promote artifacts (notebooks, data pipelines, semantic models, reports, lakehouses, etc.) between environments using Fabric REST APIs.
4. Monitor deployment in DevOps logs and optionally via Fabric's deploymentPipelineRuns endpoint.

📌 Sample: Python API Trigger Logic
The full trigger script is shown in step 8 of the setup below.

Step-by-Step Setup
1. Create a Deployment Pipeline in Fabric
- Go to the Microsoft Fabric Portal.
- Navigate to Deployment Pipelines.
- Click Create pipeline, and provide a name. The pipeline will have default Development, Test, and Production stages.
2. Assign Workspaces to Stages
- For each stage (Dev, Test), click Assign Workspace.
- Choose the appropriate Fabric workspace.
- Click Save and Apply.
3. Copy the Deployment Pipeline ID
- Open the created pipeline.
- In the browser URL, copy the ID.
- Store this ID in a DevOps variable group.
4. Update Placeholder Values
Replace all placeholder values like:
- tenant_id
- username
- client_id
- deployment_pipeline_id
Use variable groups for security.
5. Create Variable Groups in Azure DevOps
a. Create a group: fabric-secrets. Store secrets like:
- tenant_id
- client_id
- service-acc-username
- service-acc-key
b. Create a group: fabric-ids. Store:
- deployment_pipeline_id
- dev_stage_id
- test_stage_id
6. Get Stage IDs via API
Use the Fabric deployment pipeline stages API to get the stage IDs, or use a helper script to extract dev_stage_id and test_stage_id, then update them in your fabric-ids variable group.
7. Azure DevOps YAML Pipeline
Save the below in a .yml file in your repo:

```yaml
trigger: none

variables:
  - group: fabric-secrets
  - group: fabric-ids

stages:
  - stage: Generate_And_Deploy_Artifacts
    displayName: 'Generate and Deploy Artifacts'
    jobs:
      - job: Generate_And_Deploy_Artifacts_Job
        displayName: 'Generate and Deploy Artifacts'
        steps:
          - publish: $(System.DefaultWorkingDirectory)
            artifact: fabric-artifacts-$(System.StageName)
            displayName: 'Publish Configuration Files'
          - script: |
              echo "🚀 Running test.py..."
              export tenant_id=$(tenant_id)
              export username=$(service-acc-username)
              export password=$(service-acc-key)
              export client_id=$(client_id)
              export deployment_pipeline_id=$(deployment_pipeline_id)
              export dev_stage_id=$(dev_stage_id)
              export test_stage_id=$(test_stage_id)
              python fabric-artifacts-deploy/artifacts/test.py
            displayName: 'Run Deployment Script (Dev → Test)'
```

8. Python Script - test.py

```python
import json
import os
import time

import requests  # required for the REST calls below

tenant_id = os.getenv('tenant_id')
client_id = os.getenv('client_id')
username = os.getenv('username')
password = os.getenv('password')
deployment_pipeline_id = os.getenv('deployment_pipeline_id')
dev_stage_id = os.getenv('dev_stage_id')
test_stage_id = os.getenv('test_stage_id')

deploy_url = f'https://api.fabric.microsoft.com/v1/deploymentPipelines/{deployment_pipeline_id}/deploy'
token_url = f'https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token'


def get_access_token():
    token_data = {
        'grant_type': 'password',
        'client_id': client_id,
        'scope': 'https://api.fabric.microsoft.com/.default offline_access',
        'username': username,
        'password': password,
    }
    response = requests.post(token_url, data=token_data)
    if response.status_code == 200:
        return response.json().get('access_token')
    else:
        print("❌ Failed to authenticate")
        print(response.text)
        exit(1)


def poll_operation_status(location_url, headers):
    while True:
        response = requests.get(location_url, headers=headers)
        if response.status_code in [200, 201]:
            status = response.json().get("status", "Unknown")
            print(f"⏳ Status: {status}")
            if status == "Succeeded":
                print("✅ Deployment successful!")
                break
            elif status == "Failed":
                print("❌ Deployment failed!")
                print(response.text)
                exit(1)
            time.sleep(5)
        elif response.status_code == 202:
            retry_after = int(response.headers.get("Retry-After", 5))
            print(f"Waiting {retry_after} seconds...")
            time.sleep(retry_after)
        else:
            print("❌ Unexpected response")
            print(response.text)
            exit(1)


def deploy(source_stage_id, target_stage_id, note, token):
    payload = {
        "sourceStageId": source_stage_id,
        "targetStageId": target_stage_id,
        "note": note
    }
    headers = {
        'Authorization': f'Bearer {token}',
        'Content-Type': 'application/json'
    }
    response = requests.post(deploy_url, headers=headers, data=json.dumps(payload))
    if response.status_code in [200, 201]:
        print("✅ Deployment completed")
    elif response.status_code == 202:
        location_url = response.headers.get('location')
        if location_url:
            poll_operation_status(location_url, headers)
        else:
            print("❌ Location header missing in 202 response")
    else:
        print("❌ Deployment failed")
        print(response.text)


if __name__ == "__main__":
    token = get_access_token()
    print("✅ Token acquired")
    deploy(dev_stage_id, test_stage_id, "Deploy Dev → Test", token)
```

✅ Result
After running the DevOps pipeline:
- Artifacts from the Dev workspace are deployed to the Test workspace.
- Logs are visible in the pipeline run.

💡 Why Python?
Python acts as the glue between Azure DevOps and Microsoft Fabric:
- Retrieves authentication tokens
- Triggers Fabric pipeline stages
- Parses deployment responses
- Easy to integrate with YAML via script tasks
This approach keeps your CI/CD stack clean, lightweight, and fully automatable.

🚀 Get Started Today
Use this solution to:
- Accelerate delivery across environments
- Eliminate manual promotion risk
- Improve deployment visibility
- Enable DevOps best practices within Microsoft Fabric

Strengthening Azure infrastructure and platform security - 5 new updates
In the face of AI-driven digital growth and a threat landscape that never sleeps, Azure continues to raise the bar on Zero Trust-ready, “secure-by-default” networking. Today we’re excited to announce five innovations that make it even easier to protect your cloud workloads while keeping developers productive:

1. Next generation of Azure Intel® TDX Confidential VMs (Private Preview). What it is: Azure's next generation of Confidential Virtual Machines, now powered by the 5th Gen Intel® Xeon® processors (code-named Emerald Rapids) with Intel® Trust Domain Extensions (Intel® TDX). Why it matters: Enables organizations to bring confidential workloads to the cloud without code changes to applications. The supported VMs include the general-purpose families DCesv6-series and the memory optimized families ECesv6-series.
2. CAPTCHA support for Azure WAF (Public Preview). What it is: A new WAF action that presents a visual / audio CAPTCHA when traffic matches custom or Bot Manager rules. Why it matters: Stops sophisticated, human-mimicking bots while letting legitimate users through with minimal friction.
3. Azure Bastion Developer (New Regions, simplified secure-by-default UX). What it is: A free, lightweight Bastion offering surfaced directly in the VM Connect blade. One-click, private RDP/SSH to a single VM—no subnet planning, no public IP. Why it matters: Gives dev/test teams instant, hardened access without extra cost, jump servers, or NSGs.
4. Azure Virtual Network TAP (Public Preview). What it is: Native agentless packet mirroring available for all VM SKUs with zero impact to VM performance and network throughput. Why it matters: Deep visibility for threat-hunting, performance, and compliance—now cloud-native.
5. Azure Firewall integration in Security Copilot (GA). What it is: A generative AI-powered solution that helps secure networks with the speed and scale of AI. Why it matters: Threat hunt across Firewalls using natural language questions instead of manually scouring through logs and threat databases.

1. Next generation of Azure Intel® TDX Confidential VMs (Private Preview)
We are excited to announce the preview of Azure’s next generation of Confidential Virtual Machines powered by the 5th Gen Intel® Xeon® processors (code-named Emerald Rapids) with Intel® Trust Domain Extensions (Intel® TDX). This will help to enable organizations to bring confidential workloads to the cloud without code changes to applications. The supported VMs include the general-purpose families DCesv6-series and the memory optimized families ECesv6-series.

Azure’s next generation of confidential VMs will bring improvements and new features compared to our previous generation. These VMs are our first offering to utilize our open-source paravisor, OpenHCL. This innovation allows us to enhance transparency with our customers, reinforcing our commitment to the "trust but verify" model. Additionally, our new confidential VMs support Azure Boost, enabling up to 205k IOPS and 4 GB/s throughput of remote storage along with 54 GBps VM network bandwidth.

We are expanding the capabilities of our Intel® TDX powered confidential VMs by incorporating features from our general purpose and other confidential VMs. These enhancements include Guest Attestation support, and support of Intel® Tiber™ Trust Authority for enterprises seeking operator independent attestation.

The DCesv6-series and ECesv6-series preview is available now in the East US, West US, West US 3, and West Europe regions. Supported OS images include Windows Server 2025, Windows Server 2022, Ubuntu 22.04, and Ubuntu 24.04.
Please sign up at aka.ms/acc/v6preview and we will reach out to you.

2. Smarter Bot Defense with WAF + CAPTCHA
Modern web applications face an ever-growing array of automated threats, including bots, web scrapers, and brute-force attacks. Many of these attacks evade common security measures such as IP blocking, geo-restrictions, and rate limiting, which struggle to differentiate between legitimate users and automated traffic. As cyber threats become more sophisticated, businesses require stronger, more adaptive security solutions.

Azure Front Door’s Web Application Firewall (WAF) now introduces CAPTCHA in public preview—an interactive mechanism designed to verify human users and block malicious automated traffic in real time. By requiring suspicious traffic to successfully complete a CAPTCHA challenge, WAF ensures that only legitimate users can access applications while keeping bots at bay. This capability is particularly valuable for common login and sign-up workflows, mitigating the risk of account takeovers, credential stuffing attacks, and brute-force intrusions that threaten sensitive user data.

Key Benefits of CAPTCHA on Azure Front Door WAF
- Prevent Automated Attacks – Blocks bots from accessing login pages, forms, and other critical website elements.
- Secure User Accounts – Mitigates credential stuffing and brute-force attempts to protect sensitive user information.
- Reduce Spam & Fraud – Ensures only real users can submit comments, register accounts, or complete transactions.
- Easy Deployment and Management – Requires minimal configuration, reducing operational overhead while maintaining a robust security posture.

How CAPTCHA Works
When a client request matches a WAF rule configured for CAPTCHA enforcement, the user is presented with an interactive CAPTCHA challenge to confirm they are human. Upon successful completion, Azure WAF validates the request and allows access to the application. Requests that fail the challenge are blocked, preventing bots from proceeding further.

Getting Started
CAPTCHA is now available in public preview for Azure WAF. Administrators can configure this feature within their WAF policy settings to strengthen bot mitigation strategies and improve security posture effortlessly. To learn more and start protecting your applications today, visit our Azure WAF documentation.

3. Azure Bastion Developer—Secure VM Access at Zero Cost
Azure Bastion Developer is a lightweight, free offering of the Azure Bastion service designed for Dev/Test users who need secure connections to their Virtual Machines (VMs) without requiring additional features or scalability. It simplifies secure access to VMs, addressing common issues related to usability and cost. To get started, users can sign in to the Azure portal and follow the setup instructions for connecting to their VMs. This service is particularly beneficial for developers looking for a cost-effective solution for secure connectivity. It's now available in 36 regions with a new portal secure-by-default user experience.
Key takeaways
- Instant enablement from the VM Connect tab.
- One concurrent session, ideal for dev/test and PoC environments.
- No public IPs, agents, or client software required.

4. Deep Packet Visibility with Virtual Network TAP
Azure virtual network terminal access point enables customers to mirror virtual machine traffic to packet collectors or analytics tools without having to deploy agents or impact virtual machine network throughput, allowing you to mirror 100% of your production traffic.
By configuring virtual network TAP on a virtual machine’s network interface, organizations can stream inbound and outbound traffic to destinations within the same or peered virtual network for real-time monitoring across various use cases, including:
- Enhanced security and threat detection: Security teams can inspect full packet data in real time to detect and respond to potential threats.
- Performance monitoring and troubleshooting: Operations teams can analyze live traffic patterns to identify bottlenecks, troubleshoot latency issues, and optimize application performance.
- Regulatory compliance: Organizations subject to compliance frameworks such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) can use virtual network TAP to capture network activity for auditing and forensic investigations.
Virtual network TAP supports all Azure VM SKUs and integrates seamlessly with validated partner solutions, offering extended visibility and security capabilities. For a list of partner solutions that are validated to work with virtual network TAP, see partner solutions.

5. Protect networks at machine speed with Generative AI
Azure Firewall intercepts and blocks malicious traffic using the intrusion detection and prevention system (IDPS) today. It processes huge volumes of packets, analyzes signals from numerous network resources, and generates vast amounts of logs. To reason over all this data and cut through the noise to analyze threats, analysts spend several hours if not days performing manual tasks. The Azure Firewall integration in Security Copilot helps analysts perform these investigations with the speed and scale of AI. An example of a security analyst processing the threats their Firewall stopped can be seen below:

Analysts spend hours writing custom queries or navigating several manual steps to retrieve threat information and gather additional contextual information such as the geographical location of IPs, the threat rating of a fully qualified domain name (FQDN), details of common vulnerabilities and exposures (CVEs) associated with an IDPS signature, and more. Copilot pulls information from the relevant sources to enrich your threat data in a fraction of the time, and it can do this not just for a single threat or Firewall but for all threats across your entire Firewall fleet. It can also correlate information with other security products to understand how attackers are targeting your entire infrastructure. To learn more about the user journey and value that Copilot can deliver, see the Azure blog from our preview announcement at RSA last year. To see these capabilities in action, take a look at this Tech Community blog, and to get started, see the documentation.

Looking Forward
Azure is committed to delivering secure, reliable, and high-performance connectivity so you can focus on building what’s next. Our team is dedicated to creating innovative, resilient, and secure solutions that empower businesses to leverage AI and the cloud to their fullest potential. Our approach of providing layered defense in depth via our security solutions like Confidential Compute, Azure DDoS Protection, Azure Firewall, Azure WAF, Azure virtual network TAP, and network security perimeter will continue, with more enhancements and features upcoming. We can’t wait to see how you’ll use these new security capabilities and will be keen to hear your feedback.

Monitoring Time Drift in Azure Kubernetes Service for Regulated Industries
In this blog post, I will share how customers can monitor their Azure Kubernetes Service (AKS) clusters for time drift using a custom container image, Azure managed Prometheus, and Grafana.

Understanding Time Sync in Cloud Environments
Azure’s underlying infrastructure uses Microsoft-managed Stratum 1 time servers connected to GPS-based atomic clocks to ensure a highly accurate reference time. Linux VMs in Azure can synchronize either with their Azure host via Precision Time Protocol (PTP) devices like /dev/ptp0, or with external NTP servers over the public internet. The Azure host, being physically closer and more stable, provides a lower-latency and more reliable time source.

On Azure, Linux VMs use chrony, a Linux time synchronization service. It provides superior performance under varying network conditions and includes advanced capabilities for handling drift and jitter. Terminology like "Last offset" (difference between system and reference time), "Skew" (drift rate), and "Root dispersion" (uncertainty of the time measurement) helps quantify how well a system's clock is aligned.

Solution Overview
At the time of writing this article, it is not possible to monitor clock errors on Azure Kubernetes Service nodes directly, since node images cannot be customized and are managed by Azure. Customers may ask, "How do we prove our AKS workloads are keeping time accurately?" To address this, I've developed a solution that consists of a custom container image running as a DaemonSet, which generates Prometheus metrics and can be visualized on Grafana dashboards, to continuously monitor time drift across Kubernetes nodes.

This solution deploys a containerized Prometheus exporter to every node in the Azure Kubernetes Service (AKS) cluster. It exposes a metric representing the node's time drift, allowing Prometheus to scrape the data and Azure Managed Grafana to visualize it. The design emphasizes security and simplicity: the container runs as a non-root user with minimal privileges, and it securely accesses the Chrony socket on the host to extract time synchronization metrics. As we walk through the solution, it is recommended that you follow along with the code on GitHub.

Technical Deep Dive: From Image Build to Pod Execution
The custom container image is built around a Python script (chrony_exporter.py) that runs the chronyc tracking command, parses its output, and calculates a 'clock error' value. This value is calculated in the following way:

clock_error = |last_offset| + root_dispersion + (0.5 × root_delay)

This script then exports the result via a Prometheus-compatible HTTP endpoint. The only dependency it requires is the prometheus_client library, defined in the requirements.txt file.

Secure Entrypoint with Limited Root Access
The container is designed to run as a non-root user. The entrypoint.sh script launches the Python exporter using sudo, which is the only command that this user is allowed to run with elevated privileges. This ensures that while root is required to query chronyc, the rest of the container operates with a strict least-privilege model:

```bash
#!/bin/bash
echo "Executing as non-root user: $(whoami)"
sudo /app/venv/bin/python /app/chrony_exporter.py
```

By restricting the sudoers file to a single command, this approach allows safe execution of privileged operations without exposing the container to unnecessary risk.

DaemonSet with Pod Hardening and Host Socket Access
The deployment is defined as a Kubernetes DaemonSet (chrony-ds.yaml), ensuring one pod runs on each AKS node.
The pod has the following hardening and configuration settings:
- Runs as non-root (runAsUser: 1001, runAsNonRoot: true)
- Read-only root filesystem to minimize tampering risk and altering of scripts
- HostPath volume mount for /run/chrony so it can query the Chrony daemon on the node
- Prometheus annotations for automated metric scraping

Example DaemonSet snippet:

```yaml
securityContext:
  runAsUser: 1001
  runAsGroup: 1001
  runAsNonRoot: true
containers:
  - name: chrony-monitor
    image: <chrony-image>
    command: ["/bin/sh", "-c", "/app/entrypoint.sh"]
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
      - name: chrony-socket
        mountPath: /run/chrony
volumes:
  - name: chrony-socket
    hostPath:
      path: /run/chrony
      type: Directory
```

This setup gives the container controlled access to the Chrony Unix socket on the host while preventing any broader filesystem access.

Configuration: Using the Azure Host as a Time Source
The underlying AKS node's (Linux VM) chrony.conf file is configured to sync time from the Azure host through the PTP device (/dev/ptp0). This configuration is optimized for cloud environments and includes:
- refclock PHC /dev/ptp0 for direct PTP sync
- makestep 1.0 -1 to immediately correct large drifts on startup
This ensures that time metrics reflect highly accurate local synchronization, avoiding public NTP network variability. With these layers combined—secure container build, restricted execution model, and Kubernetes-native deployment—you gain a powerful yet minimalistic time accuracy monitoring solution tailored for financial and regulated environments.

Setup Instructions
Prerequisites
- An existing AKS cluster
- Azure Monitor with Managed Prometheus and Grafana enabled
- An Azure Container Registry (ACR) to host your image

Steps
1. Clone the project repository:
   git clone https://github.com/Azure/chrony-tracker.git
2. Build the Docker image locally:
   docker build --platform=linux/amd64 -t chrony-tracker:1.0 .
3. Tag the image for your ACR:
   docker tag chrony-tracker:1.0 <youracr>.azurecr.io/chrony-tracker:1.0
4. Push the image to ACR:
   docker push <youracr>.azurecr.io/chrony-tracker:1.0
5. Update the DaemonSet YAML (chrony-ds.yaml) to use your ACR image:
   image: <youracr>.azurecr.io/chrony-tracker:1.0
6. Apply the DaemonSet:
   kubectl apply -f chrony-ds.yaml
7. Apply the Prometheus scrape config (ConfigMap):
   kubectl apply -f ama-metrics-prometheus-config-configmap.yaml
8. Delete the "ama-metrics-xxx" pods from the kube-system namespace to apply the new configurations.
After these steps, your AKS nodes will be monitored for clock drift.

Viewing the Metric in Managed Grafana
Once the DaemonSet and ConfigMap are deployed and metrics are being scraped by Managed Prometheus, you can visualize the chrony_clock_error_ms metric in Azure Managed Grafana by following these steps:
1. Open the Azure Portal and navigate to your Azure Managed Grafana resource.
2. Select the Grafana workspace and navigate to the Endpoint by clicking on the URL under Overview.
3. From the left-hand menu, select Metrics and then click on + New metric exploration.
4. Enter the name of the metric "chrony_clock_error_ms" under Search metrics and click Select.
5. You should now be able to view the metric.
6. To customize it and view all sources, click on the Open in explorer button.

Optional: Secure the Metrics Endpoint
To enhance the security of the /metrics endpoint exposed by each pod, you can enable basic authentication on the exporter. This requires configuring an HTTP server inside the container with basic authentication.
You would also need to update your Prometheus ConfigMap to include authentication credentials. For detailed guidance on securing scrape targets, refer to the Prometheus documentation on authentication and TLS settings. In addition, it is recommended to use Private Link for Kubernetes monitoring with Azure Monitor and Azure managed Prometheus.

Learn More
If you'd like to explore this solution further or integrate it into your production workloads, the following resources provide valuable guidance:
- Microsoft Learn: Time sync in Linux VMs
- chrony-tracker GitHub repo
- Azure Monitor and Prometheus Integration

Author
Dotan Paz, Sr. Cloud Solutions Architect, Microsoft