Automation & Control
85 Topics

BYOPI - Design your own custom private AI Search indexer with no-code ADF (SQL Server on VM example)
**Executive Summary**

Building a fully private search indexing solution that uses Azure Data Factory (ADF) to sync SQL Server data from a private VM to Azure AI Search is achievable, but it comes with notable complexities and limitations. This blog shares my journey, discoveries, and an honest assessment of the BYOPI (Build Your Own Private Indexer) architecture.

Architectural flow: (diagram in the original post)

**Table of Contents**
1. Overall Setup
2. How ADF works in this approach with Azure AI Search
3. Challenges discovered
4. Pros and Cons: An Honest Assessment
5. Conclusion and Recommendations

**1. Overall Setup**

**Phase 1: Resource Group & Network Setup** - create a resource group and a virtual network (VNet) in any region of your choice.

**Phase 2: Deploy the SQL Server VM.**

**Phase 3: Create the Azure services** - ADF (Azure Data Factory), Azure AI Search and AKV (Azure Key Vault) - from the portal or your deployment method of choice.

**Phase 4: Create private endpoints** for all the services in their dedicated subnets.

**Phase 5: Configure SQL Server on the VM** - connect to the VM via Bastion and set up the database, tables and stored procedure. Sample metadata used as below:

```sql
CREATE DATABASE BYOPI_DB;
GO
USE BYOPI_DB;
GO

CREATE TABLE Products (
    ProductId INT IDENTITY(1,1) PRIMARY KEY,
    ProductName NVARCHAR(200) NOT NULL,
    Description NVARCHAR(MAX),
    Category NVARCHAR(100),
    Price DECIMAL(10,2),
    InStock BIT DEFAULT 1,
    Tags NVARCHAR(500),
    IsDeleted BIT DEFAULT 0,
    CreatedDate DATETIME DEFAULT GETDATE(),
    ModifiedDate DATETIME DEFAULT GETDATE()
);

CREATE TABLE WatermarkTable (
    TableName NVARCHAR(100) PRIMARY KEY,
    WatermarkValue DATETIME
);

INSERT INTO WatermarkTable VALUES ('Products', '2024-01-01');

CREATE PROCEDURE sp_update_watermark
    @TableName NVARCHAR(100),
    @NewWatermark DATETIME
AS
BEGIN
    UPDATE WatermarkTable
    SET WatermarkValue = @NewWatermark
    WHERE TableName = @TableName;
END;

INSERT INTO Products (ProductName, Description, Category, Price, Tags) VALUES
('Laptop Pro', 'High-end laptop', 'Electronics', 1299.99, 'laptop,computer'),
('Office Desk', 'Adjustable desk', 'Furniture', 599.99, 'desk,office'),
('Wireless Mouse', 'Bluetooth mouse', 'Electronics', 29.99, 'mouse,wireless');
```

**Phase 6: Install the Self-Hosted Integration Runtime (SHIR)**

Create the SHIR in ADF:
1. Go to the ADF resource in the Azure portal.
2. Click "Open Azure Data Factory Studio". Note: you need to access it from a VM in the same VNet (or via VPN) since ADF is private.
3. In ADF Studio, click Manage (toolbox icon).
4. Select Integration runtimes → "+ New".
5. Select "Azure, Self-Hosted" → "Self-Hosted".
6. Name: SHIR-BYOPI (or a name of your choice).
7. Click "Create" and copy Key1 (save it).

Install the SHIR on the VM (via Bastion):
1. Open a browser and go to https://www.microsoft.com/download/details.aspx?id=39717
2. Download and install the Integration Runtime.
3. During setup, launch the Configuration Manager, paste the Key1 saved above and click "Register".
4. Wait for the "Connected" status.

**Phase 7: Create the search index** with the PowerShell script below, saved as search_index.ps1:

```powershell
$searchService = "search-byopi"
$apiKey = "YOUR-ADMIN-KEY"

$headers = @{
    'api-key'      = $apiKey
    'Content-Type' = 'application/json'
}

$index = @{
    name   = "products-index"
    fields = @(
        @{name="id"; type="Edm.String"; key=$true}
        @{name="productName"; type="Edm.String"; searchable=$true}
        @{name="description"; type="Edm.String"; searchable=$true}
        @{name="category"; type="Edm.String"; filterable=$true; facetable=$true}
        @{name="price"; type="Edm.Double"; filterable=$true}
        @{name="inStock"; type="Edm.Boolean"; filterable=$true}
        @{name="tags"; type="Collection(Edm.String)"; searchable=$true}
    )
} | ConvertTo-Json -Depth 10

Invoke-RestMethod `
    -Uri "https://$searchService.search.windows.net/indexes/products-index?api-version=2020-06-30" `
    -Method PUT `
    -Headers $headers `
    -Body $index
```

**Phase 8: Configure AKV & ADF components** - link AKV and ADF for secrets.

Create Key Vault secrets:
1. Navigate to kv-byopi (the AKV resource created earlier) in the portal.
2. Go to "Access policies" and click "+ Create".
3. Select permissions: Get, List for secrets.
4. Select principal: adf-byopi-private, then create.
5. Go to "Secrets" → "+ Generate/Import" and add:
   - Name: sql-password, Value: <your SQL password>
   - Name: search-api-key, Value: your search admin key

Create linked services in ADF (access ADF Studio from the VM, since it's private):

Key Vault linked service:
- Manage → Linked services → "+ New" → search for "Azure Key Vault"
- Name: LS_KeyVault
- Azure Key Vault: kv-byopi
- Integration runtime: AutoResolveIntegrationRuntime
- Test connection → Create

SQL Server linked service:
- "+ New" → "SQL Server"
- Name: LS_SqlServer
- Connect via: SHIR-BYOPI
- Server name: localhost
- Database: BYOPI_DB
- Authentication: SQL Authentication
- User: sqladmin
- Password: select from Key Vault → LS_KeyVault → sql-password
- Test → Create

Azure Search linked service:
- "+ New" → "Azure Search"
- Name: LS_AzureSearch
- URL: https://search-byopi.search.windows.net
- Connect via: SHIR-BYOPI (important - use the SHIR)
- API key: from Key Vault → LS_KeyVault → search-api-key
- Test → Create

**Phase 9: Create ADF datasets and the pipeline**

Create the datasets:

SQL Products dataset:
- Author → Datasets → "+" → "New dataset"
- Select "SQL Server" → Continue, then "Table" → Continue
- Name: DS_SQL_Products, Linked service: LS_SqlServer, Table: Products → OK

Watermark dataset - repeat with:
- Name: DS_SQL_Watermark, Table: WatermarkTable

Search dataset:
- "+" → "Azure Search"
- Name: DS_Search_Index, Linked service: LS_AzureSearch, Index name: products-index

Create the pipeline:
1. Author → Pipelines → "+" → "Pipeline", name it PL_BYOPI_Private.
2. From Activities → General, drag a "Lookup" activity and configure it:
   - Name: LookupOldWatermark
   - Source dataset: DS_SQL_Watermark
   - **First row only**: ✓
   - Query:

```sql
SELECT WatermarkValue FROM WatermarkTable WHERE TableName = 'Products'
```

3. Add another Lookup:
   - Name: LookupNewWatermark
   - Query:

```sql
SELECT MAX(ModifiedDate) AS NewWatermark FROM Products
```

4. Add a Copy Data activity:
   - Name: CopyToSearchIndex
   - Source dataset: DS_SQL_Products, with this query:

```sql
SELECT
    CAST(ProductId AS NVARCHAR(50)) AS id,
    ProductName AS productName,
    Description AS description,
    Category AS category,
    Price AS price,
    InStock AS inStock,
    Tags AS tags,
    CASE WHEN IsDeleted = 1 THEN 'delete' ELSE 'upload' END AS [@search.action]
FROM Products
WHERE ModifiedDate > '@{activity('LookupOldWatermark').output.firstRow.WatermarkValue}'
  AND ModifiedDate <= '@{activity('LookupNewWatermark').output.firstRow.NewWatermark}'
```

   - Sink dataset: DS_Search_Index, Write behavior: Merge, Batch size: 1000

5. Add a Stored Procedure activity:
   - Name: UpdateWatermark
   - SQL account: LS_SqlServer
   - Stored procedure: sp_update_watermark
   - Parameters: TableName = Products, NewWatermark = @{activity('LookupNewWatermark').output.firstRow.NewWatermark}

6. Connect the activities with success conditions.

**Phase 10: Test and schedule**

Test the pipeline:
- Click "Debug" in the pipeline, monitor the Output panel and check for green checkmarks.

Create a trigger:
- In the pipeline, click "Add trigger" → "New/Edit" → "+ New"
- Name: TR_Hourly, Type: Schedule, Recurrence: every 1 hour
- OK → Publish All

Monitor:
- Go to the Monitor tab, view pipeline runs and check trigger runs.

Your pipeline should look like the screenshot in the original post.
**Phase 11: Validation & Testing**

Verify private connectivity - from the VM, run PowerShell:

```powershell
# Test DNS resolution (each should return a private IP, e.g. 10.0.2.x)
nslookup adf-byopi-private.datafactory.azure.net
nslookup search-byopi.search.windows.net
nslookup kv-byopi.vault.azure.net

# Test Search
$headers = @{ 'api-key' = 'YOUR-KEY' }
Invoke-RestMethod -Uri "https://search-byopi.search.windows.net/indexes/products-index/docs?`$count=true&api-version=2020-06-30" -Headers $headers
```

Test the data sync by adding a few records and verifying them in the search index:

```sql
-- Add a test record
INSERT INTO Products (ProductName, Description, Category, Price, Tags)
VALUES ('Test Product Private', 'Testing private pipeline', 'Test', 199.99, 'test,private');

-- Trigger the pipeline manually or wait for the schedule,
-- then verify the document in the search index.
```

**2. How ADF works in this approach with Azure AI Search**

Azure AI Search uses a REST API for indexing (also called uploading). When the ADF sink uploads data to AI Search, it is actually making HTTP POST requests, for example:

```
POST https://search-byopi.search.windows.net/indexes/products-index/docs/index?api-version=2020-06-30
Content-Type: application/json
api-key: YOUR-ADMIN-KEY

{
  "value": [
    { "@search.action": "upload", "id": "1", "productName": "Laptop", "price": 999.99 },
    { "@search.action": "delete", "id": "2" }
  ]
}
```

Note that the delete action here is driven by a soft-delete flag (IsDeleted) in the source table, not by hard deletes of the source rows.

The pipeline query:

```sql
SELECT
    CAST(ProductId AS NVARCHAR(50)) AS id,   -- renamed to match the index field
    ProductName AS productName,              -- renamed to match the index field
    Description AS description,
    Category AS category,
    Price AS price,
    InStock AS inStock,
    Tags AS tags,
    CASE WHEN IsDeleted = 1 THEN 'delete' ELSE 'upload' END AS [@search.action]  -- special field with @ prefix
FROM Products
WHERE ModifiedDate > '2024-01-01'
```

returns this resultset:

```
id | productName  | description     | category    | price | inStock | tags            | @search.action
---|--------------|-----------------|-------------|-------|---------|-----------------|---------------
1  | Laptop Pro   | High-end laptop | Electronics | 1299  | 1       | laptop,computer | upload
2  | Office Chair | Ergonomic chair | Furniture   | 399   | 1       | chair,office    | upload
3  | Deleted Item | Old product     | Archive     | 0     | 0       | old             | delete
```

**The @search.action field - the magic control**

This special field tells Azure AI Search what to do with each document:

| @search.action | What it does | When to use | What happens |
|---|---|---|---|
| upload | Insert OR update | Most common - upsert operation | Exists: updates it. Doesn't exist: creates it |
| merge | Update only | When you know the document exists | Exists: updates the specified fields. Doesn't exist: error |
| mergeOrUpload | Update OR insert | Safe update | Exists: updates the fields. Doesn't exist: creates it |
| delete | Remove from index | To remove documents | Exists: deletes it. Doesn't exist: ignored (no error) |

ADF automatically converts the SQL results to the JSON format required by Azure Search:

```json
{
  "value": [
    {
      "@search.action": "upload",
      "id": "1",
      "productName": "Laptop Pro",
      "description": "High-end laptop",
      "category": "Electronics",
      "price": 1299.00,
      "inStock": true,
      "tags": "laptop,computer"
    },
    {
      "@search.action": "upload",
      "id": "2",
      "productName": "Office Chair",
      "description": "Ergonomic chair",
      "category": "Furniture",
      "price": 399.00,
      "inStock": true,
      "tags": "chair,office"
    },
    {
      "@search.action": "delete",
      "id": "3"
    }
  ]
}
```

(For a delete action, only the key field is needed.)

ADF doesn't send all records at once: it batches them based on writeBatchSize (1,000 in this pipeline), and each batch is a separate HTTP POST to Azure Search.
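To make that batching behaviour concrete, here is a rough PowerShell sketch of what the sink is effectively doing - splitting the rowset into writeBatchSize-sized chunks and posting each chunk separately. This is an illustration, not ADF's actual implementation; the service name, key and sample rows are assumptions.

```powershell
# Illustration only - not ADF's internal code. Assumed service name and key.
$searchService  = "search-byopi"
$headers        = @{ 'api-key' = 'YOUR-ADMIN-KEY'; 'Content-Type' = 'application/json' }
$writeBatchSize = 1000

# $rows would normally be the documents produced by the source query.
$rows = @(
    @{ '@search.action' = 'upload'; id = '1'; productName = 'Laptop Pro'; price = 1299.00 },
    @{ '@search.action' = 'delete'; id = '3' }
)

for ($i = 0; $i -lt $rows.Count; $i += $writeBatchSize) {
    $end   = [Math]::Min($i + $writeBatchSize, $rows.Count) - 1
    $batch = @{ value = @($rows[$i..$end]) } | ConvertTo-Json -Depth 5

    # Each chunk is one POST to the index's docs/index endpoint.
    Invoke-RestMethod -Method Post `
        -Uri "https://$searchService.search.windows.net/indexes/products-index/docs/index?api-version=2020-06-30" `
        -Headers $headers -Body $batch
}
```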
**How ADF detects new changes and runs batches**

The watermark is updated after each successful ADF run so that only new changes are picked up. Handling different scenarios:

Scenario 1: No changes between runs
- Run at 10:00 AM
- Old watermark: 09:45:00, new watermark: 10:00:00
- Query: WHERE ModifiedDate > '09:45' AND <= '10:00'
- Result: 0 rows
- Action: still update the watermark to 10:00
- Why: prevents reprocessing if changes come in later

Scenario 2: A bulk insert happens
- Someone inserts 5000 records at 10:05 AM
- Run at 10:15 AM
- Old watermark: 10:00:00, new watermark: 10:15:00
- Query: WHERE ModifiedDate > '10:00' AND <= '10:15'
- Result: 5000 rows
- Action: process all 5000, update the watermark to 10:15

Scenario 3: The pipeline fails
- Run at 10:30 AM
- Old watermark: 10:15:00 (unchanged from the last success)
- The pipeline fails during the Copy activity, so the watermark is NOT updated (still 10:15:00)
- Next run at 10:45 AM: old watermark 10:15:00 (still the last successful value), new watermark 10:45:00
- Query: WHERE ModifiedDate > '10:15' AND <= '10:45'
- Result: gets ALL changes from 10:15 to 10:45 (30 minutes of data) - no data loss!

Note: there is still room for improvement by refining this logic to handle more advanced scenarios. However, I have not examined the logic in depth, as the goal here is to review how the overall setup functions, identify its limitations, and compare it with the indexing solutions available in AI Search.

**3. Challenges discovered**

When I set out to build a private search indexer for SQL Server data residing on an Azure VM with no public IP, the solution seemed straightforward: use Azure Data Factory to orchestrate the data movement to Azure AI Search. The materials made it sound simple. The reality? It's possible, but the devil is in the details.

What we needed:
- ✅ SQL Server on a private VM (no public IP)
- ✅ Azure AI Search with a private endpoint
- ✅ No data over the public internet
- ✅ Support for full CRUD operations
- ✅ Near real-time synchronization
- ✅ A no-code/low-code solution

Reality check:
- ⚠️ DELETE operations are not natively supported by the ADF sink
- ⚠️ Complex networking requirements
- ⚠️ Higher costs than expected
- ⚠️ Significant setup complexity
- ✅ But it IS possible with workarounds

Components required:
- Azure VM: ~$150/month (D4s_v3)
- Self-Hosted Integration Runtime: free (runs on the VM)
- Private endpoints: ~$30/month (approx. 3 endpoints)
- Azure Data Factory: ~$15-60/month (depends on frequency)
- Azure AI Search: ~$75/month (Basic tier)
- Total: ~$270-315/month

**The DELETE challenge**

Despite the Azure AI Search REST API fully supporting delete operations via @search.action, ADF's native Azure Search sink does NOT support delete operations. This isn't clearly documented and catches many architects off guard.

```sql
-- This SQL query with the delete action...
SELECT
    ProductId AS id,
    CASE WHEN IsDeleted = 1 THEN 'delete' ELSE 'upload' END AS [@search.action]
FROM Products
-- ...will NOT delete documents in Azure Search when used with the Copy activity.
-- The @search.action = 'delete' value is ignored by the ADF sink!
```

Nevertheless, there is a workaround: use a Web activity (or otherwise call the Search REST API from ADF) to perform the delete operation, for example:

```json
{
  "name": "DeleteViaREST",
  "type": "Web",
  "typeProperties": {
    "url": "https://search.windows.net/indexes/index/docs/index",
    "method": "POST",
    "body": {
      "value": [
        { "@search.action": "delete", "id": "123" }
      ]
    }
  }
}
```
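Outside of ADF, the same workaround can be scripted directly: query the soft-deleted rows and post delete actions to the index. A hedged sketch is below - it assumes the SqlServer PowerShell module is available on the VM and reuses the service, database and index names from this walkthrough; adapt the connection and credential handling to your environment.

```powershell
# Assumptions: SqlServer module installed, same index/service/database names as above.
# Add -Username/-Password to Invoke-Sqlcmd if you use SQL authentication.
$searchService = "search-byopi"
$headers = @{ 'api-key' = 'YOUR-ADMIN-KEY'; 'Content-Type' = 'application/json' }

# Find the soft-deleted rows that still need to be removed from the index.
$deleted = Invoke-Sqlcmd -ServerInstance "localhost" -Database "BYOPI_DB" `
    -Query "SELECT CAST(ProductId AS NVARCHAR(50)) AS id FROM Products WHERE IsDeleted = 1"

if ($deleted) {
    # Build one delete action per row and push the batch to the docs/index endpoint.
    $actions = @($deleted | ForEach-Object { @{ '@search.action' = 'delete'; id = $_.id } })
    $body    = @{ value = $actions } | ConvertTo-Json -Depth 5

    Invoke-RestMethod -Method Post `
        -Uri "https://$searchService.search.windows.net/indexes/products-index/docs/index?api-version=2020-06-30" `
        -Headers $headers -Body $body
}
```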
**Development challenges**

No direct portal access: with ADF private, you need a jump box in the same VNet, a VPN connection, or Bastion for access.

Testing complexity: you can't use Postman from your local machine, you need to test from within the VNet, and debugging requires multiple tools.

**4. Pros and Cons: An Honest Assessment**

Pros:
- Security: complete network isolation
- Compliance: meets strict requirements
- No-code: mostly configuration-based
- Scalability: can handle large datasets
- Monitoring: built-in ADF monitoring
- Managed service: Microsoft handles updates

Cons:
- DELETE complexity: not natively supported
- Cost: higher than expected
- Setup complexity: many moving parts
- Debugging: difficult with private endpoints

Hidden gotchas:
- SHIR requires a Windows VM (Linux is in preview)
- Private endpoint DNS propagation delays
- ADF Studio timeouts with private endpoints
- SHIR auto-update can break pipelines

**5. Conclusion and Recommendations**

When to use BYOPI:

✅ Good fit:
- Strict security requirements
- A need to index from an unsupported scenario, for example SQL Server residing on a private VM
- Budget > $500/month
- A team familiar with Azure networking
- Read-heavy workloads

❌ Poor fit:
- Simple search requirements
- Budget-conscious projects
- A need for real-time updates
- Heavy DELETE operations
- A small team without Azure expertise

BYOPI works, but it's more complex and expensive than initially expected. The lack of native DELETE support in the ADF sink is a significant limitation that requires workarounds.

Key takeaways:
- It works, but requires significant effort
- DELETE (hard) operations need workarounds
- Costs will be higher than expected
- Complexity is substantial for a "no-code" solution
- Alternative solutions might be better for many scenarios

Disclaimer: The sample scripts provided in this article are provided AS IS without warranty of any kind. The author is not responsible for any issues, damages, or problems that may arise from using these scripts. Users should thoroughly test any implementation in their environment before deploying to production. Azure services and APIs may change over time, which could affect the functionality of the provided scripts. Always refer to the latest Azure documentation for the most up-to-date information.

Thanks for reading this blog! I hope you've found this approach of creating your own private indexing solution for Azure AI Search (BYOPI) useful 😀
RHEL In-place upgrades and Azure Update Manager

Following the process in this article will cause a disconnection between the data plane and the control plane of the virtual machine (VM). Azure capabilities such as Auto guest patching, Auto OS image upgrades, Hotpatching, and Azure Update Manager won't be available. To utilize these features, it's recommended to create a new VM using your preferred operating system instead of performing an in-place upgrade.

According to https://learn.microsoft.com/en-us/azure/virtual-machines/workloads/redhat/redhat-in-place-upgrade, Azure Update Manager will break if any RHEL in-place upgrades are performed, due to the data/control plane disconnect. Coming from a Microsoft product, this dilemma seems to defeat the benefits of AUM if you're someone like me who uses Red Hat 'pet' VMs (as opposed to 'cattle' VMs) for work, and would frankly like to centralize all operations within the lifecycle of a Linux box inside the Azure tenant (patching, upgrading, rollback, any possible automation/application deployment, etc.). Unfortunately, this issue seems to be largely outside of the Azure customer's control.

So, to anyone with esoteric Azure knowledge: what gives? Why and how does the disconnect between the data and control planes happen? What does the process look like from a bird's eye view? Given that the issue exists in the first place, I imagine there is some kind of development contradiction; otherwise a feature like this probably would have been figured out a while ago (or, as I suspect, it is simply not high enough priority, even though a solution may already exist in development). Furthermore, for those who may have more intimate information on the matter, is there any discussion or planning of a solution for this issue?

With kindness,
MadDogOfShimano
Azure automation feature, improvements and bugs

This is by no means meant as criticism, as I love the Azure Automation account product and its current features, but these are things that I would love to see offered or fixed in the future.

Source control (I can only speak for GitHub, as that is what I use):

Bugs:
- Tags being overwritten/removed by source control, both on full syncs and on incremental syncs (already reported in case #2508010040002105).

Features:
- Runbooks deleted in source control are not deleted in the Automation account.
- Support for sync types other than PowerShell 5.1. (Personally, we will not consider upgrading to a newer version before source control is implemented for it.)
- Support for syncing the full repository instead of only a specific folder, i.e. recursive source control for easier organisation in repositories. I know we can set up multiple source control integrations in Azure Automation, but that seems a bit redundant and adds maintenance, since the source control integration expires after 1 year regardless of whether your PAT token is set to never expire.
- Support for syncing the synopsis/description, at least for PowerShell scripts, so it is grabbed directly from the given script and put into the description field - just the output of Get-Help .\ScriptName.ps1.

Logging:

Bugs:
- From time to time, logs are displayed twice in a row. Say you get the first 10 entries on the All logs page and scroll down further: the same 10 entries are repeated again and again, which can also be seen from the timestamps of the log entries. (No new network requests for logs are being made, so I believe this might be a bug in a JavaScript component, without being 100% certain.) We most often see this bug while a runbook is still running, so it might be the log output stream that messes this up. Just to provide a reference without exposing anything sensitive, the bug can be seen from the timestamps in the screenshot in the original post.
- PowerShell 7 and above log outputs seem to contain some non-escaped ASCII characters, which makes the logs harder to read and also causes a single log object to be split into multiple log entries in Azure Automation. (This seems to have been fixed since I last tested.)

Features:
- Searching for a specific job ID in the general job list. Currently there is a workaround: go into a specific runbook, go to Jobs, press "Find job", and then you can look up a job ID globally - but the UI is not updated correctly. I would love to see a button for this, or to be able to search for a job ID directly. (A scripted workaround using Az.Automation is sketched a little further down.)
- Formatting log outputs so you can do multi-line output in a single log entry, e.g. Write-Output "New`r`nLine", so the output entry contains multiple lines for easier human-readable logs.

Runbook page:

Bugs:
- Searching for runbook names seems a bit buggy. As far as I have seen, there are three different results for the end user (starting from the base view of all runbooks): one is that it is not able to find a runbook with that name (I have not been able to replicate this to get a picture of it); another is that it displays a list of runbooks, none of which match what you searched for; the third is that when you remove your search it does not return to the original view.

Features:
- Ability to go to a previous job and re-run/restart it with the same parameters - think a bit like the way you can restart a GitHub Actions run.
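As mentioned under Logging above, until the portal supports searching the general job list by job ID, the lookup can be scripted with the Az.Automation module. A hedged sketch with assumed resource group and account names:

```powershell
# Assumed names - replace with your own. Finds which runbook a given job ID belongs to,
# across the whole Automation account.
$rg    = "rg-automation"
$acct  = "aa-prod"
$jobId = "00000000-0000-0000-0000-000000000000"   # the job ID you are looking for

Get-AzAutomationJob -ResourceGroupName $rg -AutomationAccountName $acct |
    Where-Object { $_.JobId -eq $jobId } |
    Select-Object RunbookName, JobId, Status, StartTime, EndTime
```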
Scheduling:

Features:
- More of a feature request, but having the schedule for a runbook defined directly in the code would be awesome. (This is something we currently approximate ourselves: we add a parameter that contains the scheduling information, then a runbook goes over all our runbooks every hour looking for this parameter, constructs a schedule if it does not exist, links the runbook to the schedule, and finally adds a tag noting whether the schedule is enabled or not - which brings us back to the source control issue of tags being removed. A rough sketch of this pattern is shown below.)
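For anyone curious, a minimal sketch of that "schedule defined alongside the runbook" pattern with the Az.Automation module might look like the following. It is an assumption-laden illustration, not the exact runbook we run: it reads the interval from a runbook tag rather than a parameter, and the resource group and account names are placeholders.

```powershell
# Hypothetical names - replace with your own resource group and Automation account.
$rg   = "rg-automation"
$acct = "aa-prod"

foreach ($rb in Get-AzAutomationRunbook -ResourceGroupName $rg -AutomationAccountName $acct) {
    # Assumption: the desired hourly interval is stored in a runbook tag named "ScheduleHours".
    $hours = $rb.Tags["ScheduleHours"]
    if (-not $hours) { continue }

    $scheduleName = "SCH-$($rb.Name)"
    $existing = Get-AzAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $acct `
        -Name $scheduleName -ErrorAction SilentlyContinue

    if (-not $existing) {
        # Create the schedule and link the runbook to it.
        New-AzAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $acct `
            -Name $scheduleName -StartTime (Get-Date).AddMinutes(10) -HourInterval ([int]$hours) | Out-Null
        Register-AzAutomationScheduledRunbook -ResourceGroupName $rg -AutomationAccountName $acct `
            -RunbookName $rb.Name -ScheduleName $scheduleName | Out-Null
    }
}
```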
Hybrid workers:

Features:
- I would personally love the ability to pause a hybrid worker in a hybrid worker group. Why? We currently have 4 hybrid workers, all running Windows, with monthly patch windows. If a job hits a hybrid worker that is being patched, the job goes into a suspended state and is not picked up again. We could remove the hybrid worker from the group, but that would also remove the extension, which would be reinstalled when the worker is added back - and then we would hit this: https://learn.microsoft.com/en-us/azure/automation/troubleshoot/extension-based-hybrid-runbook-worker#scenario-runbooks-go-into-a-suspended-state-on-a-hybrid-runbook-worker-when-using-a-custom-account-on-a-server-with-user-account-control-uac-enabled. This is an issue we originally started experiencing when we migrated from agent-based hybrid workers to extension-based ones due to the discontinuation of agent-based workers. Another great reason is when you need to troubleshoot something on a specific hybrid worker, or update modules on a specific hybrid worker - this cannot be done while the hybrid worker is still running jobs, unless you use force, hit a time when it is not running, or manually stop the service and once again end up with suspended jobs that are not picked up again.

Additional features that I would personally love to see as an offering:
- A front end for Azure Automation for end users (think self-service portal), as some kind of add-on feature allowing a specific group of people to start a given runbook while providing a more user-friendly front end and additional restrictions per end-user group. I know there are already third-party solutions for this, and to be honest I almost created one myself on my last parental leave, but my company chose not to pursue it further since the position is that we have one self-service platform, ServiceNow. It can be viewed at https://github.com/Mynster9361/Self-Service-Frontend-Azure-Automation, just to give some inspiration if needed.
- RBAC permissions for individual runbooks (as far as I remember, this can already be done through the CLI).
- A general overview management blade for managing webhooks and the associated runbooks. Currently there is no way to know which runbooks have an active or inactive webhook assigned to them; the only way to see this is to go into a runbook, open the Webhooks blade, and look whether there is one or not. Personally, I would love to see a blade on the general overview called "Webhooks" that looks similar to this table:

| Runbook | Name | Expiration | Last triggered | Status |
|---|---|---|---|---|
| Runbook1 (clickable, linking directly to the runbook) | Custom_name_for_this_webhook | 02/01/2022 16:00 | | Enabled |
| Runbook2 | webhook2 | 11/11/2026 16:00 | Today | Disabled |
| Runbook3 | webhook3 | 11/11/2027 16:00 | Today | Enabled |

Instead of webhooks being a gentleman's agreement about when you can enable them, when you shouldn't, naming conventions and so on, you would have one general overview of all webhooks, which would add value in terms of security and easier management. (A rough do-it-yourself version of this overview, using Get-AzAutomationWebhook, is sketched at the end of this post.)

The things I see as most critical or highest on my wish list - two things I would like to see sooner rather than later:
- Source control definitely needs to be updated/revamped so that it both supports other languages/versions and does not remove tags. Another thing that would be nice to have is the option to force it to strictly follow source control, so that if I delete something that is in source control it is also deleted in Azure Automation.
- Hybrid workers in maintenance mode, so a worker completes its running jobs and you are able to work on it, whether for bugs or just regular updates.
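Until such a blade exists, a rough approximation of that webhook overview can be pulled with the Az.Automation module. A hedged sketch, with assumed resource group and account names:

```powershell
# Assumed names - replace with your own.
$rg   = "rg-automation"
$acct = "aa-prod"

# Lists every webhook in the account with its runbook, expiry and enabled state.
# (LastInvokedTime is selected on the assumption that the module exposes it; if not,
# that column will simply come back empty.)
Get-AzAutomationWebhook -ResourceGroupName $rg -AutomationAccountName $acct |
    Select-Object RunbookName, Name, ExpiryTime, LastInvokedTime, IsEnabled |
    Sort-Object ExpiryTime |
    Format-Table -AutoSize
```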
How to deploy n8n on Azure App Service and leverage the benefits provided by Azure

Lately, n8n has been gaining serious traction in the automation world - and it's easy to see why. With its open-source core, visual workflow builder, and endless integration capabilities, it has become a favorite for developers and tech teams looking to automate processes without being locked into a single vendor. Given all the buzz, I thought it would be the perfect time to share a practical way to run n8n on Microsoft Azure using App Service. Why? Because Azure offers a solid, scalable, and secure platform that makes deployment easy, while still giving you full control over your container and configurations.

Whether you're building a quick demo or setting up a production-ready instance, Azure App Service brings a lot of advantages to the table - like simplified scaling, integrated monitoring, built-in security features, and seamless CI/CD support. In this post, I'll walk you through how to get your own n8n instance up and running on Azure - from creating the resource group to setting up environment variables and deploying the container. If you're into low-code automation and cloud-native solutions, this is a great way to combine both worlds.

The first step is to create our resource group (RG); in my case, I will name it "n8n-rg".

Now we proceed to create the App Service. At this point, it's important to select the appropriate configuration depending on your needs - for example, whether or not you want to include a database. If you choose to include one, Azure will handle the connections for you, and you can select from various types. In my case, I will proceed without a database.

Proceed to configure the instance details. First, select the instance name, the 'Publish' option, and the 'Operating System'. In this case, it is important to choose 'Publish: Container', set the operating system to Linux, and - most importantly - select the region closest to you or your clients.

Service plan configuration: here, you should select the plan based on your specific needs. Keep in mind that we are using a PaaS offering, which means that underlying compute resources like CPU and RAM are still being utilized. Depending on the expected workload, you can choose the most appropriate plan. Secondly - and very importantly - consider the features offered by each tier, such as redundancy, backup, autoscaling, custom domains, etc. In my case, I will use the Basic B1 plan.

In the Database section, we do not select any option. Remember that this will depend on your specific requirements.

In the Container section, under 'Image Source', select 'Other container registries'. For production environments, I recommend using Azure Container Registry (ACR) and pulling the n8n image from there. Now we will configure the Docker Hub options. This step is related to the previous one, as the available options vary depending on the image source. In our case, we will use the public n8n image from Docker Hub, so we select 'Public' and proceed to fill in the required fields: the first being the server, and the second the image name. This step is very important - use the exact same values to avoid issues.

In the Networking section, we will select the values shown in the screenshots in the original post. This configuration will depend on your specific use case - particularly whether to enable Virtual Network (VNet) integration or not. VNet integration is typically used when the App Service needs to securely communicate with private resources (such as databases, APIs, or services) that reside within an Azure Virtual Network. Since this is a demo environment, we will leave the default settings without enabling VNet integration.

In the 'Monitoring and Security' section, it is essential to enable these features to ensure traceability, observability, and additional security layers. This is considered a minimum requirement in production environments. At the very least, make sure to enable Application Insights by selecting 'Yes'. Finally, click on 'Create' and wait for the deployment process to complete.

Now we will 'stop' our Web App, as we need to make some preliminary modifications. To do this, go to the main overview page of the Web App and click on 'Stop'.

On the same Web App overview page, navigate through the left-hand panel to the 'Settings' section. Once there, click on it and select 'Environment Variables'. Environment variables are key-value pairs used to configure the behavior of your application without changing the source code. In the case of n8n, they are essential for defining authentication, webhook behavior, port configuration, timezone settings, and more. Environment variables within Azure - specifically in Web Apps - work the same way as they do outside of Azure. In this case, we will add the variables required for n8n to operate properly. Note: the APP_SERVICE_STORAGE variable should only be modified by setting it to true.
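If you prefer to script this step rather than use the portal, the same settings can be applied with the Azure CLI. The sketch below is an assumption of a typical n8n configuration: the variable names come from the public n8n documentation and may differ from the exact list in the original screenshots, and the app name and timezone are placeholders - adjust them to your deployment.

```powershell
# Placeholder app name - replace with your own; the resource group matches the one created above.
$app = "n8n-demo-app"

az webapp config appsettings set `
  --resource-group n8n-rg `
  --name $app `
  --settings `
    WEBSITES_ENABLE_APP_SERVICE_STORAGE=true `
    WEBSITES_PORT=5678 `
    N8N_PORT=5678 `
    N8N_PROTOCOL=https `
    "N8N_HOST=$app.azurewebsites.net" `
    "WEBHOOK_URL=https://$app.azurewebsites.net/" `
    GENERIC_TIMEZONE=America/Chicago
```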
Once the environment variables have been added in the portal, proceed to save them by clicking 'Apply' and confirming the changes. A confirmation dialog will appear to finalize the operation.

Restart the Web App. This second startup may take longer than usual, typically around 5 to 7 minutes, as the environment initializes with the new configuration.

Now, as we can see, the application has loaded successfully, and we can start using our own n8n server hosted on Azure, reachable at the host configured in the App Service.

I hope you found this guide helpful and that it serves as a useful resource for deploying n8n on Azure App Service. If you have any questions or need further clarification, feel free to reach out - I'd be happy to help.
Creating and Using an Azure Automation Custom Runtime Environment

A custom runtime environment is a way of defining a specific job execution environment for Azure Automation runbooks, including Microsoft Graph PowerShell SDK runbooks. In this article, we create a new environment for PowerShell 7.4, load in some SDK modules, switch a runbook over from a system-generated environment, and run some code. https://office365itpros.com/2025/08/29/custom-runtime-environment/
Scaling Smart with Azure: Architecture That Works

Hi Tech Community! I'm Zainab, currently based in Abu Dhabi and serving as Vice President of Finance & HR at Hoddz Trends LLC, a global tech solutions company headquartered in Arkansas, USA. While I lead on strategy, people, and financials, I also roll up my sleeves when it comes to tech innovation.

In this discussion, I want to explore the real-world challenges of scaling systems with Microsoft Azure. From choosing the right architecture to optimizing performance and cost, I'll be sharing insights drawn from experience - and I'd love to hear yours too. Whether you're building from scratch, migrating legacy systems, or refining deployments, let's talk about what actually works.
Error Running Script in Runbook with System Assigned Managed Identity

Hello everyone, I could use some assistance, please. I'm encountering an error when trying to run a script within a runbook. I'm using PowerShell 5.1 with a system-assigned managed identity. The script works fine without the managed identity via PowerShell outside of Azure.

Error:

```
System.Management.Automation.ParameterBindingException: Cannot process command because of one or more missing mandatory parameters: Credential.
   at System.Management.Automation.CmdletParameterBinderController.PromptForMissingMandatoryParameters(Collection`1 fieldDescriptionList, Collection`1 missingMandatoryParameters)
   at System.Management.Automation.CmdletParameterBinderController.HandleUnboundMandatoryParameters
```

I am using this script:

```powershell
Connect-ExchangeOnline -ManagedIdentity -Organization domain removed for privacy reasons

# Specify the user's mailbox identity
$mailboxIdentity = "email address removed for privacy reasons"

# Get mailbox configuration and statistics for the specified mailbox
$mailboxConfig = Get-Mailbox -Identity $mailboxIdentity
$mailboxStats = Get-MailboxStatistics -Identity $mailboxIdentity

# Check if TotalItemSize and ProhibitSendQuota are not null and extract the sizes
if ($mailboxStats.TotalItemSize -and $mailboxConfig.ProhibitSendQuota) {
    $totalSizeBytes = $mailboxStats.TotalItemSize.Value.ToString().Split("(")[1].Split(" ")[0].Replace(",", "") -as [double]
    $prohibitQuotaBytes = $mailboxConfig.ProhibitSendQuota.ToString().Split("(")[1].Split(" ")[0].Replace(",", "") -as [double]

    # Convert sizes from bytes to gigabytes
    $totalMailboxSize = $totalSizeBytes / 1GB
    $mailboxWarningQuota = $prohibitQuotaBytes / 1GB

    # Check if the mailbox size exceeds 90% of the warning quota
    if ($totalMailboxSize -ge ($mailboxWarningQuota * 0.0)) {
        # Send an email notification
        $emailBody = "The mailbox $($mailboxIdentity) has reached $($totalMailboxSize) GB, which exceeds 90% of the warning quota."
        Send-MailMessage -To "email address removed for privacy reasons" -From "email address removed for privacy reasons" -Subject "Mailbox Size Warning" -Body $emailBody -SmtpServer "smtp.office365.com" -Port 587 -UseSsl -Credential (Get-Credential)
    }
} else {
    Write-Host "The required values (TotalItemSize or ProhibitSendQuota) are not available."
}
```
Azure DevOps REST API - tag DeploymentGroups' target

Hello everyone, I am trying to set up a function in PowerShell to set tags on specific targets of a deployment group, and for that I am using this documentation page: https://learn.microsoft.com/en-us/rest/api/azure/devops/distributedtask/targets/update?view=azure-devops-rest-7.0&tabs=HTTP#request-body

I created the request body as described on the page, like below:

```json
{
  "id": 541,
  "tags": [
    "tag1-backendWithDb",
    "tag1-backendWithDb-active-node",
    "tag2-backendWithDb-database",
    "tag2-backendWithDb",
    "tag2-backendWithDb-active-node",
    "tag3-blazor",
    "tag3-blazor-active-node",
    "tag4-yarp",
    "tag4-yarp-active-node"
  ]
}
```

Then I run the following command:

```powershell
Invoke-RestMethod -Method Patch -Uri "$baseurl/distributedtask/deploymentgroups/$($DGid)/targets?api-version=6.0-preview.1" -Credential $cred -Body ($body | ConvertTo-Json) -ContentType 'Application/json'
```

But then I get an error like this:

```
Invoke-RestMethod: {
  "$id": "1",
  "innerException": null,
  "message": "Value cannot be null.\r\nParameter name: machinesToUpdate",
  "typeName": "System.ArgumentNullException, mscorlib",
  "typeKey": "ArgumentNullException",
  "errorCode": 0,
  "eventId": 0
}
```

The problem is that the documentation does not mention any parameter named 'machinesToUpdate'. What is it that I am missing here?
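One thing worth checking - offered as an assumption, not a confirmed fix: the request body documented for Targets - Update is an array of targets, and Windows PowerShell's ConvertTo-Json unwraps a single-element array coming through the pipeline, so the service may be receiving a single object where it expects an array. A sketch of the same call with the body kept as an array:

```powershell
# Same payload as above, but wrapped in an array and serialized with -InputObject so the
# outer array survives even with a single element (on PowerShell 7+, ConvertTo-Json -AsArray
# achieves the same thing).
$body = @(
    @{
        id   = 541
        tags = @(
            "tag1-backendWithDb", "tag1-backendWithDb-active-node",
            "tag2-backendWithDb-database", "tag2-backendWithDb", "tag2-backendWithDb-active-node",
            "tag3-blazor", "tag3-blazor-active-node",
            "tag4-yarp", "tag4-yarp-active-node"
        )
    }
)

Invoke-RestMethod -Method Patch `
    -Uri "$baseurl/distributedtask/deploymentgroups/$($DGid)/targets?api-version=6.0-preview.1" `
    -Credential $cred `
    -Body (ConvertTo-Json -InputObject $body -Depth 10) `
    -ContentType 'application/json'
```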
Comparison of Azure Cloud Sync and Traditional Entra Connect Sync

Introduction

In the evolving landscape of identity management, organizations face a critical decision when integrating their on-premises Active Directory (AD) with Microsoft Entra ID (formerly Azure AD). Two primary tools are available for this synchronization:

1. Traditional Entra Connect Sync (formerly Azure AD Connect)
2. Azure Cloud Sync

While both serve the same fundamental purpose - bridging on-prem AD with cloud identity - they differ significantly in architecture, capabilities, and ideal use cases.

Architecture & Setup

Entra Connect Sync is a heavyweight solution. It installs a full synchronization engine on a Windows Server, often backed by SQL Server. This setup gives administrators deep control over sync rules, attribute flows, and filtering. Azure Cloud Sync, on the other hand, is lightweight. It uses a cloud-managed agent installed on-premises, removing the need for SQL Server or complex infrastructure. The agent communicates with Microsoft Entra ID, and most configuration is handled in the cloud portal.

For organizations with complex hybrid setups (e.g., Exchange hybrid, device management), is Cloud Sync too limited?
Hi all, I’m working on a PDF redaction process using Azure Form Recognizer and Azure Functions. The flow works well in most cases — I extract the text and bounding box coordinates and apply redaction based on that. However, I’m facing an issue with scanned PDFs or PDFs with slightly different page sizes. In these cases, the redaction boxes don’t align properly — they either miss the text or appear slightly off (above or below the intended area). It seems like the coordinate mapping doesn't match accurately when the document isn't a standard A4 size or has DPI inconsistencies. Has anyone else encountered this? Any suggestions on: Adjusting for page size or DPI dynamically? Mapping normalized coordinates correctly for scanned PDFs? Appreciate any help or suggestions!87Views0likes1Comment