azure resource manager
Defender for Cloud Inventory API Coverage — No Official Way to Retrieve Per-Resource Coverage?
I'm reaching out to the Microsoft Defender for Cloud team and the broader community because I've run into a gap that I believe others may face too — and I’m hoping for guidance or clarification. I need to programmatically retrieve a list of resources from a subscription and determine whether each resource is covered by a Defender for Cloud plan. This would replicate what we see in the Azure Portal under Microsoft Defender for Cloud > Inventory. The goal is to fetch this data via API and replicate that table — but the problem is that there seems to be no way to retrieve the “Defender for Cloud” coverage status per resource.

Here’s what I’ve tried so far:
The /pricings endpoint — returns plan tiers like Free or Standard, but only for the overall subscription or service type, not individual resources.
Azure Resource Graph — the properties field does not contain any Defender-related indicators that would confirm whether a specific resource is covered.

My questions: Does an API exist today to retrieve per-resource Defender for Cloud coverage? Is there a /coverage endpoint or equivalent that is officially supported? If anyone from the Defender for Cloud or Azure product teams can point me in the right direction, I’d truly appreciate it. Thank you!

FSI Knowledge Mining and Intelligent Document Process Reference Architecture
FSI customers such as insurance companies and banks rely on their vast amounts of data to provide sometimes hundreds of individual products to their customers. From assessing product suitability, underwriting, fraud investigations, and claims handling, many employees and applications depend on accessing this data to do their jobs efficiently. Since the capabilities of GenAI have been realised, we have been helping our customers in this market transform their business with unified systems that simplify access to this data and speed up the processing times of these core tasks, while remaining compliant with the numerous regulations that govern the FSI space. Combining the use of Knowledge Mining with Intelligent Document Processing provides a powerful solution to reduce the manual effort and inefficiencies of ensuring data integrity and retrieval across the many use cases that most of our customers face daily.

What is Knowledge Mining and Intelligent Document Processing?

Knowledge Mining is a process that transforms large, unstructured data sets into searchable knowledge stores. Traditional search methods often rely on keyword matching, which can miss the context of the information. In contrast, knowledge mining uses advanced techniques like natural language processing (NLP) to understand the context and meaning behind the data, providing a robust searching mechanism that can look across all these data sources and understand the relationships between the data, therefore providing more accurate and relevant results.

Intelligent Document Processing (IDP) is a workflow automation technology designed to scan, read, extract, categorise, and organise meaningful information from large streams of data. Its primary function is to extract valuable information from extensive data sets without human input, thereby increasing processing speed and accuracy while reducing costs. By leveraging a combination of Artificial Intelligence (AI), Machine Learning (ML), Optical Character Recognition (OCR), and Natural Language Processing (NLP), IDP handles both structured and unstructured documents. By ensuring that the processed data meets the "gold standard" - structured, complete, and compliant - IDP helps organizations maintain high-quality, reliable, and actionable data.

The Power of Knowledge Mining and Intelligent Document Processing as a Unified Solution

Knowledge Mining excels at quickly responding to natural language queries, providing valuable insights and making previously unsearchable data accessible. At the same time, IDP ensures that the processed data meets the "gold standard"—structured, complete, and compliant—making it both reliable and actionable. Together, these technologies empower organisations to harness the full potential of their data, driving better decision-making and improved efficiency.

__________________________________________________________________

Meet Alex: A Day in the Life of a Fraud Case Worker

Responsibilities:
Investigate potential fraud cases by manually searching across multiple systems.
Read and analyse large volumes of information to filter out relevant data.
Ensure compliance with regulatory requirements and maintain data accuracy.
Prepare detailed reports on findings and recommendations.

Lost in Data: The Struggles of Manual Fraud Investigation

Alex receives a new fraud case and starts by manually searching through multiple systems to gather information. This process takes several hours, and Alex has to read through numerous documents and emails to filter out relevant data.
The inconsistent data formats and locations make it challenging to ensure accuracy. By the end of the day, Alex is exhausted and has only made limited progress on the case.

Effortless Efficiency: Fraud Investigation Transformed with Knowledge Mining and IDP

Alex receives a new fraud case and needs to gather all relevant information quickly. Instead of manually searching through multiple systems, Alex inputs the following natural language query into the unified system: "Show me all documents, emails, and notes related to the recent transactions of client X that might indicate fraudulent activity." The system quickly retrieves and presents a comprehensive summary of all relevant documents, emails, and notes, ensuring that the data is structured, complete, and compliant. This allows Alex to focus on analysing the data and making informed decisions, significantly improving the efficiency and accuracy of the investigation.

How has Knowledge Mining and IDP transformed Alex's role?

Before implementing Knowledge Mining and Intelligent Document Processing, Alex faced a manual process of searching across multiple systems to gather information. This was time-consuming and labour-intensive, often leading to delays in investigations. The overwhelming volume of data from various sources made it difficult to filter out relevant information, and the inconsistent data formats and locations increased the risk of errors. This high workload not only reduced Alex's efficiency but also led to burnout and decreased job satisfaction. However, with the introduction of a unified system powered by Knowledge Mining and IDP, these challenges were significantly mitigated. Automated searches using natural language queries allowed Alex to quickly find relevant information, while IDP ensured that the data processed was structured, complete, and compliant. This unified system provided a comprehensive view of the data, enabling Alex to make more informed decisions and focus on higher-value tasks, ultimately improving productivity and job satisfaction.

____________________________________________________________________

Example Architecture

Knowledge Mining

Users can interact with the system through a portal on the customer’s front-end of choice. This will serve as the entry point for submitting queries and accessing the knowledge mining service. Front-end options could include web apps, container services or serverless integrations.

Azure AI Search provides powerful RAG capabilities. Meanwhile, Azure OpenAI provides access to large language models to summarise responses. These services combined will take the user’s query to search the knowledge base and return relevant information, which can be augmented as required. Prompt engineering can provide customisation to how the data is returned.

You define the data sources your Azure AI Search will consume. These can be Azure storage services or other data repositories.

Data that meets a pre-defined gold standard is queried by Azure AI Search and relevant data is returned to the user. Gold standard data could be based on compliance or business needs.

Power BI can be used to create analytical reports based on the data retrieved and processed. This step involves visualising the data in an interactive and user-friendly manner, allowing users to gain insights and make data-driven decisions.

Intelligent Document Processing (Optional)

Azure Data Factory is a data integration service that allows you to create workflows for data movement and transforming data at scale.
This business data can be easily ingested to your Azure data storage solutions using pre-built connectors. This event-driven approach ensures that as new data is generated, it can automatically be processed and ready for use in your knowledge mining solution.

Data can be transformed using Functions apps and Azure OpenAI. Through prompt engineering, the large language model (LLM) can highlight specific issues in the documents, such as grammatical errors, irrelevant content, or incomplete information. The LLM can then be used to rewrite text to improve clarity and accuracy, add missing information, or reformat content to adhere to guidelines. Transformed data is stored as gold standard data.

____________________________________________________________________

Additional Cloud Considerations

Networking

VNETs (Virtual Networks) are a fundamental component of cloud infrastructure that enable secure and isolated networking configurations within a cloud environment. They allow different resources, such as virtual machines, databases, and services, to communicate with each other securely. Virtual networks ensure that services such as Azure AI Search, Azure OpenAI, and Power BI can securely communicate with each other. This is crucial for maintaining the integrity and confidentiality of sensitive financial data.

ExpressRoute or VPN are expected to be used when connecting on-premises infrastructure to Azure for several reasons. Azure ExpressRoute provides a private, reliable, and high-speed connection between your data center and Microsoft Azure. It allows you to extend your infrastructure to Azure by providing private access to resources deployed in Azure Virtual Networks and public services like App Service, with private endpoints to various other services. This private peering ensures that your traffic never enters the public Internet, enhancing security and performance. ExpressRoute uses Border Gateway Protocol (BGP) for dynamic routing between your on-premises networks and Azure, ensuring efficient and secure data exchange. It also offers built-in redundancy and high availability, making it a robust solution for critical workloads.

Azure Front Door is a cloud-based Content Delivery Network (CDN) and application delivery service provided by Microsoft. It offers several key features, including global load balancing, dynamic site acceleration, SSL offloading, and a web application firewall, making it an ideal solution for optimizing and protecting web applications. We expect to use Front Door in scenarios where the architecture will be used by users outside the organisation.

Azure API Management is expected to be used when we roll out the solution to larger groups, at which point we would integrate additional controls such as security, rate limiting, and load balancing.

Monitoring and Governance

Azure Monitor: This service collects and analyses telemetry data from various resources, providing insights into the performance and health of the system. It enables proactive identification and resolution of issues, ensuring the system runs smoothly.

Azure Cost Management and Billing: Provides tools for monitoring and controlling costs associated with the solution. It offers insights into spending patterns and resource usage, enabling efficient financial governance.
Application Insights: Provides application performance monitoring (APM) designed to help you understand how your applications are performing and to identify issues that may affect their performance and reliability.

These components together ensure that the Knowledge Mining and Intelligent Document Processing solution is monitored for performance, secured against threats, compliant with regulations, and managed efficiently from a cost perspective.

____________________________________________________________________

Next steps:
Identify the data and its sources that will feed into your own Knowledge Mine. Consider whether you also need to implement Intelligent Document Processing to ensure data quality.
Define your 'gold standards'. These guidelines will determine how your data might be transformed.
Consider how to provide access to the data through an application portal, and choose the right front-end technology for your use case.
Once you have configured Azure AI Search to point to the chosen data, consider how you might augment responses using Azure OpenAI LLM models.

Useful resources
AI Landing Zone reference architecture
Azure and Open AI with API Manager
Secure connectivity from on-premises to Azure hosted solutions

Running PowerShell Scripts on Azure VMs with Domain User Authentication using Azure Functions
Azure Functions provides a powerful platform for automating various tasks within your Azure environment. In this specific scenario, we’ll use Azure Functions to run commands remotely on a VM while authenticating as a domain user, not as a system-assigned or user-assigned managed identity. Prerequisites Before we dive into the implementation, ensure you have the following prerequisites: Azure Subscription: You'll need an Azure subscription. Azure VM: A Windows VM is required where the PowerShell script will be executed. Azure Function: You will need an Azure Function with the appropriate configurations to run the PowerShell script. Domain User Account: A domain user account for authentication. Azure PowerShell Modules: The Azure PowerShell modules (Az) installed to work with Azure resources. Workflow Diagram: Based on demo script example Step 1: Setting Up Azure Function with Managed Identity We'll start by setting up an Azure Function that will handle the execution of commands on a remote Azure VM. 1.1. Create a New Azure Function Go to the Azure Portal and navigate to Azure Functions. Create a new Function App. This will be your container for the function that will execute the commands. Set up the Function App with appropriate configurations such as region, runtime stack (select PowerShell), and storage account. 1.2. Configure Managed Identity To authenticate the Azure Function to Azure resources like the VM, we’ll use Managed Identity: Go to the Identity section of your Function App and enable System Assigned Managed Identity. Ensure the Function App has the necessary permissions to interact with the VM and other resources. Go to the Azure VM and assign the Managed Identity with the required RBAC roles (e.g., Virtual Machine Contributor). Step 2: Script for Running PowerShell Commands on Azure VM Now let’s focus on writing the script that will execute the PowerShell commands remotely on the Azure VM using domain authentication. 2.1. Set Up PowerShell Script for Remote Execution The PowerShell script to be executed should be capable of running commands like user creation, creating test files and folders, etc. We can use complex script to execute on Azure Windows VM automation tasks except few operations which requires special configuration (drive map, Azure File Share, SMB share). Below is the script for the task: #Script to create a folder on Windows VM using Azure Functions with Domain User Authentication. Script name testing.ps1 # Log file path $logFilePath = "C:\Users\Public\script.log" # Function to log messages to the log file function Write-Log { param([string]$message) $timestamp = (Get-Date).ToString("yyyy-MM-dd HH:mm:ss") $logMessage = "$timestamp - $message" Add-Content -Path $logFilePath -Value "$logMessage`r`n" } Write-Log "Script execution started." # Authenticate with domain user credentials (you can remove this part if not needed for local script) $domainUser = "dctest01\test" #replace with your domain user. Use Azure Key Vault for production. $domainPassword = "xxxxxxx" #replace with your domain user password. Use Azure Key Vault for production. $securePassword = ConvertTo-SecureString $domainPassword -AsPlainText -Force $credentials = New-Object System.Management.Automation.PSCredential($domainUser, $securePassword) Write-Log "Executing script as user: $domainUser" # Define the folder path to create $folderPath = "C:\Demo" # Check if the folder already exists if (Test-Path -Path $folderPath) { Write-Log "Folder 'Demo' already exists at $folderPath." 
} else { # Create the folder if it doesn't exist New-Item -Path $folderPath -ItemType Directory Write-Log "Folder 'Demo' created at $folderPath." } Write-Log "Test Ended." Step 3: Configure Azure Function to Run PowerShell Commands on VM In the Azure Function, we will use the Set-AzVMRunCommand or Invoke-AzVMRunCommand cmdlets to execute this script on the Azure VM. For production use Azure Key Vault to store sensitive information like subscription Id, username, password. # Input bindings are passed in via param block. param($Request, $TriggerMetadata) # Log file path $logFilePath = "./Invoke-AzVMRunCommand/LogFiles/Scriptlog.log" #Custom logfile to generate logfile of Azure Function execution. This is optional as Function generates logs. # Function to log messages to the log file function Write-Log { param([string]$message) $timestamp = (Get-Date).ToString("yyyy-MM-dd HH:mm:ss") $logMessage = "$timestamp - $message" # Overwrite log file each time (no history kept) $logMessage | Out-File -FilePath $logFilePath -Append -Force } # Start logging all output into the log file Write-Log "Script execution started." # Define the Subscription Id directly $SubscriptionId = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # Set your Subscription ID here # Authenticate using System Assigned Managed Identity $AzureContext = Connect-AzAccount -Identity # Set the subscription context explicitly $AzureContext = Set-AzContext -SubscriptionId $SubscriptionId -DefaultProfile $AzureContext # Check and output the current Azure context $azContext = Get-AzContext Write-Log "First Identity CONTEXT AFTER:" Write-Log ($azContext | Format-Table | Out-String) # Define VM details and script $un = 'dctest01\test' # Domain user executing the script. Replace with your domain user. $pw = 'xxxxxxxxxxxxx' # Password for the domain user #Replace with domain user password. For production use Azure Key Vault $vmname = 'workernode01-win' #Replace with your Azure Windows VM # Local Script on Windows VM $scriptPath = "./Invoke-AzVMRunCommand/testing.ps1" #testing.ps1 PowerShell script, which runs on Azure Windows VM for local commands. # Define Resource Group Name $resourceGroupName = "test-ub" #Replace with resource group on Windows VM # Log the user executing the script Write-Log "Executing the command as user: $un" # Delete all previous run commands, except the most recent one $existingRunCommands = Get-AzVMRunCommand -ResourceGroupName $resourceGroupName -VMName $vmname # If any run commands exist, remove them except for the latest one if ($existingRunCommands.Count -gt 0) { Write-Log "Removing previous run commands..." 
$commandsToRemove = $existingRunCommands | Sort-Object -Property CreatedTime -Descending | Select-Object -Skip 1 foreach ($command in $commandsToRemove) { Remove-AzVMRunCommand -ResourceGroupName $resourceGroupName -VMName $vmname -RunCommandName $command.Name Write-Log "Removed Run Command: $($command.Name)" } } # Generate a random string to append to the script name (e.g., RunPowerShellScript_A1B2C3) $randomString = [System.Guid]::NewGuid().ToString("N").Substring(0, 6) # Define the name for the script to run with the random string appended $scriptName = "RunPowerShellScript_" + $randomString Write-Log "Running script: $scriptName" # Set up and run the VM command using Set-AzVMRunCommand Set-AzVMRunCommand -ResourceGroupName $resourceGroupName ` -VMName $vmname ` -RunCommandName $scriptName ` -Location "East US" ` -ScriptLocalPath $scriptPath ` -RunAsUser $un ` -RunAsPassword $pw ` -Verbose ` -Debug ` -Confirm:$false # End logging Write-Log "End of execution." # Interact with query parameters or the body of the request. $name = $Request.Query.Name if (-not $name) { $name = $Request.Body.Name } $body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." if ($name) { $body = "Hello, $name. This HTTP triggered function executed successfully." } # Add-Type for HttpStatusCode Add-Type -TypeDefinition @" using System.Net; "@ # Associate values to output bindings by calling 'Push-OutputBinding'. Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ StatusCode = [System.Net.HttpStatusCode]::OK Body = $body }) This script will use Set-AzVMRunCommand to run the PowerShell script on the specified VM (workernode01-win) using domain user credentials (dctest01\test). Important Tweaks: RunCommand has limitations for maximum number of executions. To solve the maximum number of executions, we are using delete all previous run commands, except the most recent one. By deleting all previous RunCommand , it will force Azure Windows VM to install the agent again, which will take some time. Run Command execution should generate unique command name which can be performed by generating random string and append with RunCommand name. Step 4: Output and Logs The function will execute the script and return the output, including success or failure logs. The script logs the following: Start and end of the script execution. Which domain user is executing the script. Any errors or successful actions such as mapping drives or creating files. Step 5: Feature and Limitations for Run Commands Microsoft document: RunCommand Known Issues Run scripts in a Windows or Linux VM in Azure with Run Command - Azure Virtual Machines | Microsoft Learn Note The maximum number of allowed Managed Run Commands is currently limited to 25. For certain tasks related to SMB, Azure File Share or Network drive mapping does not work properly on GUI or Windows Remote Desktop Session. Step 6: Troubleshooting RunCommand RunCommand execution can be tracked on Azure Windows VM. Logfiles are generated under C:\WindowsAzure\Logs\Plugins\Microsoft.CPlat.Core.RunCommandHandlerWindows\2.0.14 (Check for latest version path based on version) RunCommandHandler.log Can be used to trace Azure Function execution and PowerShell script on Azure Windows VM. Conclusion With this implementation, you can automate the execution of PowerShell scripts on your Azure VMs using Azure Functions. 
The domain user authentication allows for secure access to Azure resources while also providing a log file for debugging and auditing purposes. Using Set-AzVMRunCommand and Invoke-AzVMRunCommand, we can execute the script and interact with Azure VMs remotely, all while logging relevant details for transparency and monitoring. Make sure to check the features and limitations supported by both commands.

Using NVIDIA Triton Inference Server on Azure Container Apps
TOC Introduction to Triton System Architecture Architecture Focus of This Tutorial Setup Azure Resources File and Directory Structure ARM Template ARM Template From Azure Portal Testing Azure Container Apps Conclusion References 1. Introduction to Triton Triton Inference Server is an open-source, high-performance inferencing platform developed by NVIDIA to simplify and optimize AI model deployment. Designed for both cloud and edge environments, Triton enables developers to serve models from multiple deep learning frameworks, including TensorFlow, PyTorch, ONNX Runtime, TensorRT, and OpenVINO, using a single standardized interface. Its goal is to streamline AI inferencing while maximizing hardware utilization and scalability. A key feature of Triton is its support for multiple model execution modes, including dynamic batching, concurrent model execution, and multi-GPU inferencing. These capabilities allow organizations to efficiently serve AI models at scale, reducing latency and optimizing throughput. Triton also offers built-in support for HTTP/REST and gRPC endpoints, making it easy to integrate with various applications and workflows. Additionally, it provides model monitoring, logging, and GPU-accelerated inference optimization, enhancing performance across different hardware architectures. Triton is widely used in AI-powered applications such as autonomous vehicles, healthcare imaging, natural language processing, and recommendation systems. It integrates seamlessly with NVIDIA AI tools, including TensorRT for high-performance inference and DeepStream for video analytics. By providing a flexible and scalable deployment solution, Triton enables businesses and researchers to bring AI models into production with ease, ensuring efficient and reliable inferencing in real-world applications. 2. System Architecture Architecture Development Environment OS: Ubuntu Version: Ubuntu 18.04 Bionic Beaver Docker version: 26.1.3 Azure Resources Storage Account: SKU - General Purpose V2 Container Apps Environments: SKU - Consumption Container Apps: N/A Focus of This Tutorial This tutorial walks you through the following stages: Setting up Azure resources Publishing the project to Azure Testing the application Each of the mentioned aspects has numerous corresponding tools and solutions. The relevant information for this session is listed in the table below. Local OS Windows Linux Mac V How to setup Azure resources and deploy Portal (i.e., REST api) ARM Bicep Terraform V 3. Setup Azure Resources File and Directory Structure Please open a terminal and enter the following commands: git clone https://github.com/theringe/azure-appservice-ai.git cd azure-appservice-ai After completing the execution, you should see the following directory structure: File and Path Purpose triton/tools/arm-template.json The ARM template to setup all the Azure resources related to this tutorial, including a Container Apps Environments, a Container Apps, and a Storage Account with the sample dataset. ARM Template We need to create the following resources or services: Manual Creation Required Resource/Service Container Apps Environments Yes Resource Container Apps Yes Resource Storage Account Yes Resource Blob Yes Service Deployment Script Yes Resource Let’s take a look at the triton/tools/arm-template.json file. Refer to the configuration section for all the resources. Since most of the configuration values don’t require changes, I’ve placed them in the variables section of the ARM template rather than the parameters section. 
This helps keep the configuration simpler. However, I’d still like to briefly explain some of the more critical settings. As you can see, I’ve adopted a camelCase naming convention, which combines the [Resource Type] with [Setting Name and Hierarchy]. This makes it easier to understand where each setting will be used. The configurations in the diagram are sorted by resource name, but the following list is categorized by functionality for better clarity. Configuration Name Value Purpose storageAccountContainerName data-and-model [Purpose 1: Blob Container for Model Storage] Use this fixed name for the Blob Container. scriptPropertiesRetentionInterval P1D [Purpose 2: Script for Uploading Models to Blob Storage] No adjustments are needed. This script is designed to launch a one-time instance immediately after the Blob Container is created. It downloads sample model files and uploads them to the Blob Container. The Deployment Script resource will automatically be deleted after one day. caeNamePropertiesPublicNetworkAccess Enabled [Purpose 3: For Testing] ACA requires your local machine to perform tests; therefore, external access must be enabled. appPropertiesConfigurationIngressExternal true [Purpose 3: For Testing] Same as above. appPropertiesConfigurationIngressAllowInsecure true [Purpose 3: For Testing] Same as above. appPropertiesConfigurationIngressTargetPort 8000 [Purpose 3: For Testing] The Triton service container uses port 8000. appPropertiesTemplateContainers0Image nvcr.io/nvidia/tritonserver:22.04-py3 [Purpose 3: For Testing] The Triton service container utilizes this online resource. ARM Template From Azure Portal In addition to using az cli to invoke ARM Templates, if the JSON file is hosted on a public network URL, you can also load its configuration directly into the Azure Portal by following the method described in the article [Deploy to Azure button - Azure Resource Manager]. This is my example. Click Me After filling in all the required information, click Create. And we could have a test once the creation process is complete. 4. Testing Azure Container App In our local environment, use the following command to start a one-time Docker container. We will use NVIDIA's official test image and send a sample image from within it to the Triton service that was just deployed to Container Apps. # Replace XXX.YYY.ZZZ.azurecontainerapps.io with the actual FQDN of your app. There is no need to add https:// docker run --rm nvcr.io/nvidia/tritonserver:22.04-py3-sdk /workspace/install/bin/image_client -u XXX.YYY.ZZZ.azurecontainerapps.io -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg After sending the request, you should see the prediction results, indicating that the deployed Triton server service is functioning correctly. 5. Conclusion Beyond basic model hosting, Triton Inference Server's greatest strength lies in its ability to efficiently serve AI models at scale. It supports multiple deep learning frameworks, allowing seamless deployment of diverse models within a single infrastructure. With features like dynamic batching, multi-GPU execution, and optimized inference pipelines, Triton ensures high performance while reducing latency. While it may not replace custom-built inference solutions for highly specialized workloads, it excels as a standardized and scalable platform for deploying AI across cloud and edge environments. Its flexibility makes it ideal for applications such as real-time recommendation systems, autonomous systems, and large-scale AI-powered analytics. 
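As a quick reference for the configuration table in the setup section, the fragment below is a minimal, hand-written sketch of how those settings (external ingress on port 8000, the public Triton image) typically appear on a Microsoft.App/containerApps resource. It is not a copy of triton/tools/arm-template.json: the app name, environment name, and CPU/memory sizing are illustrative placeholders, and the Azure Files volume mount for the model repository is omitted for brevity.

{
  "type": "Microsoft.App/containerApps",
  "apiVersion": "2023-05-01",
  "name": "triton-app",
  "location": "[resourceGroup().location]",
  "properties": {
    // Placeholder environment name; the real template wires this to its own Container Apps Environment
    "managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments', 'triton-cae')]",
    "configuration": {
      "ingress": {
        "external": true,
        "allowInsecure": true,
        "targetPort": 8000
      }
    },
    "template": {
      "containers": [
        {
          "name": "triton",
          "image": "nvcr.io/nvidia/tritonserver:22.04-py3",
          // Illustrative sizing; pick a CPU/memory combination supported by your plan
          "resources": {
            "cpu": 2,
            "memory": "4Gi"
          }
        }
      ]
    }
  }
}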
6. References
Quickstart — NVIDIA Triton Inference Server
Deploying an ONNX Model — NVIDIA Triton Inference Server
Model Repository — NVIDIA Triton Inference Server
Triton Tutorials — NVIDIA Triton Inference Server

Enable SFTP on Azure File Share using ARM Template and upload files using WinScp
SFTP is a very widely used protocol which many organizations use today for transferring files within their organization or across organizations. Creating a VM based SFTP is costly and high-maintenance. ACI service is very inexpensive and requires very little maintenance, while data is stored in Azure Files which is a fully managed SMB service in cloud. This template demonstrates an creating a SFTP server using Azure Container Instances (ACI). The template generates two resources: storage account is the storage account used for persisting data, and contains the Azure Files share sftp-group is a container group with a mounted Azure File Share. The Azure File Share will provide persistent storage after the container is terminated. ARM Template for creation of SFTP with New Azure File Share and a new Azure Storage account Resources.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "metadata": { "_generator": { "name": "bicep", "version": "0.4.63.48766", "templateHash": "17013458610905703770" } }, "parameters": { "storageAccountType": { "type": "string", "defaultValue": "Standard_LRS", "metadata": { "description": "Storage account type" }, "allowedValues": [ "Standard_LRS", "Standard_ZRS", "Standard_GRS" ] }, "storageAccountPrefix": { "type": "string", "defaultValue": "sftpstg", "metadata": { "description": "Prefix for new storage account" } }, "fileShareName": { "type": "string", "defaultValue": "sftpfileshare", "metadata": { "description": "Name of file share to be created" } }, "sftpUser": { "type": "string", "defaultValue": "sftp", "metadata": { "description": "Username to use for SFTP access" } }, "sftpPassword": { "type": "securestring", "metadata": { "description": "Password to use for SFTP access" } }, "location": { "type": "string", "defaultValue": "[resourceGroup().location]", "metadata": { "description": "Primary location for resources" } }, "containerGroupDNSLabel": { "type": "string", "defaultValue": "[uniqueString(resourceGroup().id, deployment().name)]", "metadata": { "description": "DNS label for container group" } } }, "functions": [], "variables": { "sftpContainerName": "sftp", "sftpContainerGroupName": "sftp-group", "sftpContainerImage": "atmoz/sftp:debian", "sftpEnvVariable": "[format('{0}:{1}:1001', parameters('sftpUser'), parameters('sftpPassword'))]", "storageAccountName": "[take(toLower(format('{0}{1}', parameters('storageAccountPrefix'), uniqueString(resourceGroup().id))), 24)]" }, "resources": [ { "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2019-06-01", "name": "[variables('storageAccountName')]", "location": "[parameters('location')]", "kind": "StorageV2", "sku": { "name": "[parameters('storageAccountType')]" } }, { "type": "Microsoft.Storage/storageAccounts/fileServices/shares", "apiVersion": "2019-06-01", "name": "[toLower(format('{0}/default/{1}', variables('storageAccountName'), parameters('fileShareName')))]", "dependsOn": [ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]" ] }, { "type": "Microsoft.ContainerInstance/containerGroups", "apiVersion": "2019-12-01", "name": "[variables('sftpContainerGroupName')]", "location": "[parameters('location')]", "properties": { "containers": [ { "name": "[variables('sftpContainerName')]", "properties": { "image": "[variables('sftpContainerImage')]", "environmentVariables": [ { "name": "SFTP_USERS", "secureValue": "[variables('sftpEnvVariable')]" } ], "resources": { "requests": { "cpu": 1, 
"memoryInGB": 1 } }, "ports": [ { "port": 22, "protocol": "TCP" } ], "volumeMounts": [ { "mountPath": "[format('/home/{0}/upload', parameters('sftpUser'))]", "name": "sftpvolume", "readOnly": false } ] } } ], "osType": "Linux", "ipAddress": { "type": "Public", "ports": [ { "port": 22, "protocol": "TCP" } ], "dnsNameLabel": "[parameters('containerGroupDNSLabel')]" }, "restartPolicy": "OnFailure", "volumes": [ { "name": "sftpvolume", "azureFile": { "readOnly": false, "shareName": "[parameters('fileShareName')]", "storageAccountName": "[variables('storageAccountName')]", "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value]" } } ] }, "dependsOn": [ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]" ] } ], "outputs": { "containerDNSLabel": { "type": "string", "value": "[format('{0}.{1}.azurecontainer.io', reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName'))).ipAddress.dnsNameLabel, reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName')), '2019-12-01', 'full').location)]" } } } Parameters.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": { "storageAccountType": { "value": "Standard_LRS" }, "storageAccountPrefix": { "value": "sftpstg" }, "fileShareName": { "value": "sftpfileshare" }, "sftpUser": { "value": "sftp" }, "sftpPassword": { "value": null }, "location": { "value": "[resourceGroup().location]" }, "containerGroupDNSLabel": { "value": "[uniqueString(resourceGroup().id, deployment().name)]" } } } ARM Template to Enable SFTP for an Existing Azure File Share in Azure Storage account Resources.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "metadata": { "_generator": { "name": "bicep", "version": "0.4.63.48766", "templateHash": "16190402726175806996" } }, "parameters": { "existingStorageAccountResourceGroupName": { "type": "string", "metadata": { "description": "Resource group for existing storage account" } }, "existingStorageAccountName": { "type": "string", "metadata": { "description": "Name of existing storage account" } }, "existingFileShareName": { "type": "string", "metadata": { "description": "Name of existing file share to be mounted" } }, "sftpUser": { "type": "string", "defaultValue": "sftp", "metadata": { "description": "Username to use for SFTP access" } }, "sftpPassword": { "type": "securestring", "metadata": { "description": "Password to use for SFTP access" } }, "location": { "type": "string", "defaultValue": "[resourceGroup().location]", "metadata": { "description": "Primary location for resources" } }, "containerGroupDNSLabel": { "type": "string", "defaultValue": "[uniqueString(resourceGroup().id, deployment().name)]", "metadata": { "description": "DNS label for container group" } } }, "functions": [], "variables": { "sftpContainerName": "sftp", "sftpContainerGroupName": "sftp-group", "sftpContainerImage": "atmoz/sftp:debian", "sftpEnvVariable": "[format('{0}:{1}:1001', parameters('sftpUser'), parameters('sftpPassword'))]" }, "resources": [ { "type": "Microsoft.ContainerInstance/containerGroups", "apiVersion": "2019-12-01", "name": "[variables('sftpContainerGroupName')]", "location": "[parameters('location')]", "properties": { "containers": [ { "name": 
"[variables('sftpContainerName')]", "properties": { "image": "[variables('sftpContainerImage')]", "environmentVariables": [ { "name": "SFTP_USERS", "secureValue": "[variables('sftpEnvVariable')]" } ], "resources": { "requests": { "cpu": 1, "memoryInGB": 1 } }, "ports": [ { "port": 22, "protocol": "TCP" } ], "volumeMounts": [ { "mountPath": "[format('/home/{0}/upload', parameters('sftpUser'))]", "name": "sftpvolume", "readOnly": false } ] } } ], "osType": "Linux", "ipAddress": { "type": "Public", "ports": [ { "port": 22, "protocol": "TCP" } ], "dnsNameLabel": "[parameters('containerGroupDNSLabel')]" }, "restartPolicy": "OnFailure", "volumes": [ { "name": "sftpvolume", "azureFile": { "readOnly": false, "shareName": "[parameters('existingFileShareName')]", "storageAccountName": "[parameters('existingStorageAccountName')]", "storageAccountKey": "[listKeys(extensionResourceId(format('/subscriptions/{0}/resourceGroups/{1}', subscription().subscriptionId, parameters('existingStorageAccountResourceGroupName')), 'Microsoft.Storage/storageAccounts', parameters('existingStorageAccountName')), '2019-06-01').keys[0].value]" } } ] } } ], "outputs": { "containerDNSLabel": { "type": "string", "value": "[format('{0}.{1}.azurecontainer.io', reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName'))).ipAddress.dnsNameLabel, reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName')), '2019-12-01', 'full').location)]" } } } Parameters.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": { "existingStorageAccountResourceGroupName": { "value": null }, "existingStorageAccountName": { "value": null }, "existingFileShareName": { "value": null }, "sftpUser": { "value": "sftp" }, "sftpPassword": { "value": null }, "location": { "value": "[resourceGroup().location]" }, "containerGroupDNSLabel": { "value": "[uniqueString(resourceGroup().id, deployment().name)]" } } } Deploy the ARM Templates using PowerShell or Azure CLI or Custom Template deployment using Azure Portal. Choose the subscription you want to create the sftp service in Create a new Resource Group It will automatically create a storage account Give a File Share Name Provide a SFTP user name Provide a SFTP password Wait till the deployment is done successfully Click on the container sftp-group Copy the FQDN from the container group Download WinScp from WinSCP :: Official Site :: Download Provide Hostname : FQDN for ACI; Port Number: 22; User Name and Password Click on Login 13. Drag and drop a file from the left side to the Right side. 14. Now, go to the Storage Account and Navigate to File share. The file appears on the file share.10KViews2likes5CommentsDeploy Smarter, Scale Faster – Secure, AI-Ready, Cost-Effective Kubernetes Apps at Your Fingertips!
In our previous blog post, we explored the exciting launch of Kubernetes Apps on Azure Marketplace. This follow-up blog will take you a step further by demonstrating how to programmatically deploy Kubernetes Apps using tools like Terraform, Azure CLI, and ARM templates. As organizations scale their Kubernetes environments, the demand for secure, intelligent, and cost-effective deployments has never been higher. By programmatically deploying Kubernetes Apps through Azure Marketplace, organizations can harness powerful security frameworks, cost-efficient deployment options, and AI solutions to elevate their Azure Kubernetes Service (AKS) and Azure Arc-enabled clusters. This automated approach significantly reduces operational overhead, accelerates time-to-market, and allows teams to dedicate more time to innovation. Whether you're aiming to strengthen security, streamline application lifecycle management, or optimize AI and machine learning workloads, Kubernetes Apps on Azure Marketplace provide a robust, flexible, and scalable solution designed to meet modern business needs. Let’s explore how you can leverage these tools to unlock the full potential of your Kubernetes deployments. Secure Deployment You Can Trust Certified and Secure from the Start – Every Kubernetes app on Azure Marketplace undergoes a rigorous certification process and vulnerability scans before becoming available. Solution providers must resolve any detected security issues, ensuring the app is safe from the outset. Continuous Threat Monitoring – After publication, apps are regularly scanned for vulnerabilities. This ongoing monitoring helps to maintain the integrity of your deployments by identifying and addressing potential threats over time. Enhanced Security with RBAC – Eliminates the need for direct cluster access, reducing attack surfaces by managing permissions and deployments through Azure Role-Based Access Control (RBAC). Lowering Cost of your Applications If your organization has Azure Consumption Commitment (MACC) agreements with Microsoft, you can unlock significant cost savings when deploying your applications. Kubernetes Apps available on the Azure Marketplace are MACC eligible and you can gain the following benefits: Significant Cost Savings and Predictable Expenses – Reduce overall cloud costs with discounts and credits for committed usage, while ensuring stable, predictable expenses to enhance financial planning. Flexible and Comprehensive Commitment Usage – Allocate your commitment across various Marketplace solutions that maximizes flexibility and value for evolving business needs. Simplified Procurement and Budgeting – Benefit from unified billing and streamlined procurement to driving efficiency and performance. AI-Optimized Apps High-Performance Compute and Scalability - Deploy AI-ready apps on Kubernetes clusters with dynamic scaling and GPU acceleration. Optimize performance and resource utilization for intensive AI/ML workloads. Accelerated Time-to-Value - Pre-configured solutions reduce setup time, accelerating progress from proof-of-concept to production, while one-click deployments and automated updates keep AI environments up-to-date effortlessly. Hybrid and Multi-Cloud Flexibility - Deploy AI workloads seamlessly on AKS or Azure Arc-enabled Kubernetes clusters, ensuring consistent performance across on-premises, multi-cloud, or edge environments, while maintaining portability and robust security. 
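To make the programmatic deployment options mentioned above a little more concrete, here is a rough, hypothetical sketch of how a Marketplace Kubernetes App is commonly modelled in an ARM template: a cluster extension resource scoped to the AKS cluster, carrying a Marketplace plan block. Every value below (extension name, extension type, plan name, product, publisher, API version) is an illustrative placeholder, not an official schema reference; take the real values from the specific app's Marketplace listing and its documentation.

{
  "type": "Microsoft.KubernetesConfiguration/extensions",
  "apiVersion": "2023-05-01",
  "name": "contoso-k8s-app",
  // Extension resources are scoped to the target AKS cluster
  "scope": "[format('Microsoft.ContainerService/managedClusters/{0}', parameters('clusterName'))]",
  // The Marketplace plan block ties the deployment to the offer you purchased
  "plan": {
    "name": "contoso-app-plan",
    "publisher": "contoso",
    "product": "contoso-k8s-app-offer"
  },
  "properties": {
    "extensionType": "contoso.contoso-k8s-app",
    "autoUpgradeMinorVersion": true,
    "releaseTrain": "stable",
    "configurationSettings": {}
  }
}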
Lifecycle Management of Kubernetes Apps

Automated Updates and Patching – The auto-upgrade feature keeps your Kubernetes applications up-to-date with the latest features and security patches, seamlessly applied during scheduled maintenance windows to ensure uninterrupted operations. Our system guarantees automated consistency and reliability by continuously reconciling the cluster state with the desired declarative configuration and maintaining stability by automatically rolling back unauthorized changes.

CI/CD Automation with ARM Integration – Leverage ARM-based APIs and templates to automate deployment and configuration, simplifying application management and boosting operational efficiency. This approach enables seamless integration with Azure policies, monitoring, and governance tools, ensuring streamlined and consistent operations.

Flexible Billing Options for Kubernetes Apps

We support a variety of billing models to suit your needs:

Private Offers for Upfront Billing - Lock in pricing with upfront payments to gain better control and predictability over your expenditures.

Multiple Billing Models - Choose from flexible billing options to suit your needs, including usage-based billing, where you pay per core, per node, or other usage metrics, allowing you to scale as required. Opt for flat-rate pricing for predictable monthly or annual costs, ensuring financial stability and peace of mind.

Programmatic Deployments of Apps

There are several ways of deploying a Kubernetes app:
- Programmatically deploy using Terraform: Utilize the power of Terraform to automate and manage your Kubernetes applications.
- Deploy programmatically with Azure CLI: Leverage the Azure CLI for straightforward, command-line based deployments.
- Use ARM templates for programmatic deployment: Define and deploy your Kubernetes applications efficiently with ARM templates.
- Deploy via AKS in the Azure portal: Take advantage of the user-friendly Azure portal for a seamless deployment experience.

We hope this guide has been helpful and has simplified the process of deploying Kubernetes. Stay tuned for more tips and tricks, and happy deploying!

Additional Links:
Get started with Kubernetes Apps: https://aka.ms/deployK8sApp
Find other Kubernetes Apps listed on Azure Marketplace: https://aka.ms/KubernetesAppsInMarketplace
For customer support, please visit: https://learn.microsoft.com/en-us/azure/aks/aks-support-help#create-an-azure-support-request
Partner with us: If you are an ISV or Azure partner interested in listing your Kubernetes App, please visit: http://aka.ms/K8sAppsGettingStarted
Learn more about Partner Benefits: https://learn.microsoft.com/en-us/partner-center/marketplace/overview#why-sell-with-microsoft
For Partner Support, please visit: https://partner.microsoft.com/support/?stage=1

Unable to View Audit Logs
Hi all! I am once again coming to you, asking for assistance. We had a security alert in Azure and I was able to go all the way through to see what the issue was, BUT when I try to go into the "View Suspicious Activity" page I get the below. Now multiple users in my team get the same as me, but one user can see everything in here. He's not even in the resource with any permissions yet he can see these logs. Am I missing something really obvious? Or is this another fun little bug? Thanks in advance

Azure Kubernetes Service Baseline - The Hard Way
Are you ready to tackle Kubernetes on Azure like a pro? Embark on the “AKS Baseline - The Hard Way” and prepare for a journey that’s likely to be a mix of command line, detective work and revelations. This is a serious endeavour that will equip you with deep insights and substantial knowledge. As you navigate through the intricacies of Azure, you’ll not only face challenges but also accumulate a wealth of learning that will sharpen your skills and broaden your understanding of cloud infrastructure. Get set for an enriching experience that’s all about mastering the ins and outs of Azure Kubernetes Service!

Azure Logic App workflow (Standard) Resubmit and Retry
Hello Experts,

A workflow is scheduled to run daily at a specific time and retrieves data from different systems using REST API calls (8-9). The data is then sent to another system through API calls using multiple child flows. We receive more than 1,500 input records, and an API call needs to be made for each one. During the API invocation process, there is a possibility of failure due to server errors (5xx) and client errors (4xx). To handle this, we have implemented a "Retry" mechanism with a fixed interval. However, there is still a chance of flow failure for various reasons. Although there is a "Resubmit" feature available at the action level, I cannot apply it in this case because we are using multiple child workflows and the response is sent back from one flow to another. Is it necessary to utilize the "Resubmit" functionality?

The retry functionality has been developed to handle any server API errors (5xx) that may occur with connectors (both custom and standard), including client API errors 408 and 429. In this specific scenario, it is reasonable to attempt retrying or resubmitting the API call from the Azure Logic Apps workflow. Nevertheless, there are other situations where implementing the retry and resubmit logic would result in the same error outcome. Is it acceptable to proceed with the retry functionality in this particular scenario? It would be highly appreciated if you could provide guidance on the appropriate methodology.

Thanks
-Sri
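For reference, and assuming the workflow calls the downstream system with an HTTP action, a fixed-interval retry policy of the kind described above is expressed directly in the action's inputs in the workflow definition. The action name, URI, body expression, count, and interval below are placeholders; only the retryPolicy shape is the point, and by default such a policy is applied to 408, 429, and 5xx responses, which matches the behaviour described in the question.

"Call_target_system": {
  "type": "Http",
  "inputs": {
    "method": "POST",
    "uri": "https://example.contoso.com/api/records",
    "body": "@items('For_each_record')",
    "retryPolicy": {
      "type": "fixed",
      "count": 4,
      "interval": "PT30S"
    }
  },
  "runAfter": {}
}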