Recent Discussions
Using Azure Firewall as a Gateway for All Outbound Traffic to the Internet
Hey everyone! I just uploaded a new guide on GitHub where I walk through setting up Azure Firewall in a classic Hub & Spoke scenario to manage all outbound internet traffic. In this guide, you'll find step-by-step instructions on:
- Setting up the Hub & Spoke network architecture
- Configuring Azure Firewall to control and monitor outbound traffic
This tutorial is part of the hub-and-spoke-playground project, which includes various scenarios and scripts to showcase the benefits of the hub-and-spoke network topology in Azure. You can explore more scenarios and resources in the project's GitHub repository: https://github.com/nicolgit/hub-and-spoke-playground . Would love to hear your thoughts and feedback!
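In this pattern, each spoke subnet typically gets a user-defined route that sends all outbound traffic to the firewall's private IP. A minimal Azure CLI sketch (the resource names and the firewall IP 10.0.1.4 are placeholders, not taken from the guide):

```shell
# Route all outbound traffic from a spoke subnet through the hub firewall
az network route-table create -g my-rg -n spoke-rt

az network route-table route create -g my-rg --route-table-name spoke-rt \
  -n default-via-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4   # private IP of the Azure Firewall

# Attach the route table to the spoke workload subnet
az network vnet subnet update -g my-rg --vnet-name spoke-vnet -n workload \
  --route-table spoke-rt
```

The guide itself covers the full setup; this is only the routing piece that forces spoke traffic through the hub.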
DNS proxy for splunk cloud net rule still denied
I have set up my Azure Firewall for DNS proxy and changed the VNet's DNS to the private IP of the firewall, with the firewall pointing to custom DNS servers. I created a firewall rule to allow port 9997 to our Splunk Cloud instance from any IP, but clients are still denied. On a VM I can verify it's using the firewall IP, and I see DNS queries in the logs. I'm not sure what I'm missing.
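One detail that often trips this up: a network rule that targets the Splunk Cloud endpoint by name must use destination FQDNs, which only resolve consistently once DNS proxy is enabled. A hedged Azure CLI sketch (the firewall name, resource group, and Splunk FQDN are placeholders):

```shell
# Network rule allowing TCP 9997 to a Splunk Cloud FQDN (relies on DNS proxy)
az network firewall network-rule create \
  -g my-rg --firewall-name my-fw \
  --collection-name allow-splunk --name splunk-forwarding \
  --priority 200 --action Allow \
  --protocols TCP \
  --source-addresses '*' \
  --destination-fqdns inputs.mystack.splunkcloud.com \
  --destination-ports 9997
```

If the existing rule uses IP addresses instead, it is worth checking whether Splunk Cloud's ingest IPs have changed, since they are not guaranteed to be static.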
AVD on Azure Local Cost Help
We're beginning to run some of our environment on AVD hosted on Azure Local on-prem. The solution is working very well for our needs. However, we're finding the costs prohibitively high for scaling to hundreds of users with personal sessions. After some research, the idea of hibernating VMs was suggested, but it appears that is not an option for Azure Local-based AVD deployments. What other ways do we have to shut down VMs when they're idle? We're hoping not to lose the users' session state (hibernate), but I'm running out of ideas to reach that goal. Our users are logged in 6-10 hours a day, 5 days a week, so paying for constant run time on the VMs is a lot of cost that isn't required. Thanks for any feedback!
Blog about Automating Vacation Requests with Azure Logic Apps and SharePoint
In today's fast-paced business environment, automation plays a vital role in improving operational efficiency. One of the most common yet time-consuming HR processes is managing employee vacation requests. Fortunately, Azure Logic Apps offers a low-code/no-code solution to automate this workflow efficiently and with minimal setup. In this blog post, I'll walk you through how to use Azure Logic Apps to build an automated vacation request workflow, integrating with services like Outlook, SharePoint, and Microsoft Teams. https://dellenny.com/automating-vacation-requests-with-azure-logic-apps-and-sharepoint/
AVD with FSLogix - profiles not loading
I set up AVD with FSLogix several months back and profiles had been loading fine. About a month ago, profiles stopped loading and the logs show "Account restrictions are preventing this user from signing in. For example: blank passwords aren't allowed, sign-in times are limited, or a policy restriction has been enforced." This occurs when connecting to the profile share path; if I manually browse to that path, I receive the same error. The accounts do have a password, so it shouldn't be anything to do with a blank password, and there are no sign-in time restrictions enforced on these accounts. What's left is a "policy restriction", which is kind of vague. Things I've tried:
- Updated to the latest FSLogix
- Verified permissions on profile storage
- Enabled these in local policy:
  - Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options > "Accounts: Limit local account use of blank passwords to console logon only" = disabled
  - Computer Configuration > Administrative Templates > System > Credentials Delegation > "Restrict delegation of credentials to remote servers" = disabled
I have had a ticket open with MS support for 3 weeks now, but thus far they've been completely useless.
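Since the same error appears outside FSLogix, it can help to separate network reachability from authentication. A quick check from a session host (the share host and account names below are placeholders, not from the post):

```powershell
# Is SMB reachable at all? Rules out port/network issues before looking at auth
Test-NetConnection -ComputerName fileserver.contoso.com -Port 445

# Does a mount with explicit user credentials succeed where the SSO path fails?
net use \\fileserver.contoso.com\profiles /user:CONTOSO\testuser
```

If the explicit-credential mount works while browsing the path normally fails, the restriction is more likely on the machine account or credential delegation side than on the user account itself.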
Script or Query for Management Group Compliance Statistics
I've been trying to reproduce the Azure Portal compliance statistics for a management group in a PowerShell script or Resource Graph query, without much luck. What I'd like to do is reproduce the numbers in the portal display (compliance percentage, number of compliant/non-compliant resources) and run a daily script or query to track them over time, without taking screenshots every day. Just to be clear, I've attached a screenshot of a compliance screen for management group TEST1. I want to automate calculation of the Overall Resource Compliance (46%, 317 out of 692) and the policy/initiative compliance states and resource compliance percentages at the bottom of the screen. I'm only interested in resource compliance percentages below a threshold like 90%, in order to help guide our remediation efforts. I've found several scripts and Resource Graph queries online, but none seem to address management group scope, and even the ones that produce numbers at subscription scope don't seem to match the portal numbers. Has anyone successfully reproduced the portal MG compliance numbers with a script or query? Or is it possible to obtain the logic behind the portal's MG-scope compliance code? It seems like we should be able to reproduce the numbers shown by the console. Thanks.
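One approach that gets close is querying policy states in Azure Resource Graph and collapsing them per resource, since the portal counts a resource as non-compliant if any applicable policy marks it so. A sketch of such a query (scope it to the management group when running it, e.g. az graph query --management-groups TEST1; exact numbers may still differ from the portal because of exemptions and evaluation timing):

```kusto
policyresources
| where type =~ 'microsoft.policyinsights/policystates'
| extend resourceId = tolower(tostring(properties.resourceId)),
         state = tostring(properties.complianceState)
// A resource is non-compliant if ANY policy state for it is NonCompliant
| summarize anyNonCompliant = countif(state == 'NonCompliant') > 0 by resourceId
| summarize total = count(), compliant = countif(not(anyNonCompliant))
| extend compliancePct = round(100.0 * compliant / total, 0)
```

Grouping by properties.policyAssignmentId instead of collapsing everything would give the per-assignment percentages shown at the bottom of the portal screen.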
bypass of MFA for Admin portals
Hello, I have a conditional access policy that bypasses MFA for custom enterprise apps when working from our trusted IPs. Since this policy is working as designed and expected, I thought it would be a simple matter to add the admin portal apps to it so sites like portal.azure.com are also bypassed. But for some reason it doesn't work, even though the sign-in logs reflect that the password-only policy is indeed being applied to the sign-in. Is there something additional I need to do? Are admin portals hardcoded for MFA? I have included some screenshots of the policy and logs for review. Thanks,
duplicate fslogix profile issue
There are two profiles for a few users on a pooled Azure Virtual Desktop: one starting with sid_username and another with username_sid. We want all user profiles to follow the username_sid naming convention. These are the settings configured:
- Swap directory name components is enabled
- FlipFlopProfileDirectoryName is set to 1
Any recommendation will be helpful.
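For reference, the flip-flop setting lives under the FSLogix Profiles registry key on each session host, and it only affects newly created profile folders; existing sid_username folders are not renamed automatically, so users can keep attaching to them until those folders are renamed or removed. A sketch to confirm the value is actually in effect on a given host:

```powershell
# Confirm the effective value on a session host; 1 = username_sid folder naming
Get-ItemProperty -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Name 'FlipFlopProfileDirectoryName'

# Set it explicitly if a host is out of sync with the GPO
Set-ItemProperty -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' `
  -Name 'FlipFlopProfileDirectoryName' -Value 1 -Type DWord
```

If a host shows the value as 1 but duplicates still appear, the sid_username folders likely predate the setting and need to be cleaned up manually.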
Azure app service getting restarted abruptly
I have an Azure App Service on a P1mv3 plan. We have deployed a .NET 8 web API project, which also has a background service. The background service does the following:
- Gets the journal data from one of our on-premises endpoints for 1700 journals.
- Generates embeddings for all the journal names in batches of 100, with a 5-second delay after each batch, using Azure OpenAI. We use these embeddings for vector search in Cosmos DB to improve search by journal title.
- Deletes all records from the existing Cosmos DB container in batches of 100, with a 5-second delay after each batch. We do this because we need to insert fresh data each week.
- Inserts all records with the embeddings generated in step 2 into the Cosmos DB container in batches of 100, with a 10-second delay after each batch.
The problem is that once we deploy this to the App Service, after verifying that everything works fine locally, the service generates only 800-1000 of the 1700 embeddings and then restarts. After our custom logs showing embedding progress (e.g. "Progress: 1000/1700 items embedding results generated"), we see startup logs such as "Hosting environment: Production" and "Content root path: c:\home\site\wwwroot".
Compute_Management Intent duplicates and causes workloads to lose connectivity
Hi. I have a situation where we are running a 2-node Azure Local 23H2 cluster with the latest April 2025 updates installed. We have two intents: compute_management and storage. On one of the nodes (and only on this node; the other node works fine), Network ATC sometimes duplicates the compute_management intent and creates a new VMSwitch. The old VMSwitch is still present and changes to Internal mode; the new VMSwitch is External but without the NIC attached. The issue always happens after a node reboot, and also if we live-migrate any workload to the affected node. In Azure, the network intent shows the error: "physical network adapter not found". Both servers run identical hardware (Lenovo SR650), and the NIC for compute_management is an Intel X722-T2 1Gb 2-port. We are working with Microsoft support to find a solution, but so far Microsoft has only collected logs and we are waiting for a response. I hope that someone here in the community has an idea for a solution.
Choosing the Right Scaling Strategy for Your Kubernetes Workloads: Karpenter, KEDA, and Azure Arc

Overview
Scalability is a cornerstone of cloud-native architecture. In the Kubernetes ecosystem, autoscaling strategies are evolving to meet the diverse needs of cloud workloads, from node provisioning to event-driven pod scaling and multi-cloud management. This post compares three complementary tools (Karpenter, KEDA, and Azure Arc) and how they integrate with Azure-native services to improve scalability, efficiency, and control.

1. Karpenter: Intelligent Node Autoscaling
Karpenter is an open-source node provisioning tool designed to improve scheduling efficiency and reduce infrastructure cost. It works by launching right-sized compute resources in response to unschedulable pods, optimizing for speed and cost over traditional Cluster Autoscaler methods.
Key Features
- Application-aware scheduling that honors taints, tolerations, and affinities.
- Fine-grained provisioning that scales nodes precisely to workload requirements.
- Improves resource utilization and cost-efficiency.
Best For
- Dynamic compute provisioning on Amazon EKS (AWS).
- Workloads with varied and unpredictable resource demands.
- Scenarios requiring faster scaling than Cluster Autoscaler.
Limitations
- Currently limited to AWS environments.
- Requires deployment within a managed node group.
Read: Karpenter Documentation

2. KEDA: Event-Driven Pod Scaling for Kubernetes
KEDA (Kubernetes Event-Driven Autoscaler) brings serverless-style autoscaling to Kubernetes by enabling scaling based on event sources. Instead of relying on CPU or memory metrics, KEDA can trigger pod autoscaling based on external systems like Azure Service Bus, Kafka, or custom metrics.
Key Features
- Scales pods based on business/event metrics (queue length, message rate).
- Supports 50+ scalers (Azure Service Bus, Kafka, Prometheus, etc.).
- Integrates seamlessly with the Horizontal Pod Autoscaler (HPA).
Best For
- Serverless, event-driven architectures.
- Use cases where demand is tied to queue length or stream activity.
- Real-time or bursty workloads like messaging and IoT.
Security
- Leverages native Kubernetes RBAC and secrets management.
Read: KEDA Documentation

3. Azure Arc: Hybrid Management for Kubernetes
Azure Arc isn't an autoscaler, but it plays a strategic role in unifying management across hybrid and multi-cloud environments, including Kubernetes clusters.
Key Features
- Extends the Azure control plane to any Kubernetes cluster.
- Enables consistent governance, security policies, and CI/CD pipelines.
- Integrates with tools like Azure Monitor, Defender for Cloud, and Azure Policy.
Best For
- Enterprises managing clusters across on-prem, edge, and other clouds.
- Teams needing to apply centralized Azure governance across distributed environments.
Complementary With
- KEDA (for event-driven scaling).
- Karpenter (for node-level scaling on AWS).
Read: Azure Arc-enabled Kubernetes Overview

Feature Comparison Matrix
Capability             | Karpenter                    | KEDA                             | Azure Arc
Primary Function       | Node autoscaling             | Pod autoscaling (event-based)    | Management and governance
Cloud Support          | AWS only                     | Multi-cloud                      | Azure, any infrastructure
Best For               | Cost-efficient compute usage | Serverless/event-driven apps     | Hybrid/multi-cloud governance
Integration with Azure | No                           | Yes (via Azure Event Hubs, etc.) | Full support
Security               | Kubernetes-native            | Kubernetes-native                | Azure-native + Kubernetes security

Deployment Resources
- KEDA on Azure Kubernetes Service (AKS): Deploy KEDA on AKS
- Azure Arc with AKS or Edge Kubernetes: Azure Arc-enabled Kubernetes Overview
- Self-Hosted GitHub Runners with Azure Container Apps (KEDA-based): Tutorial: Run GitHub Actions Runners with Azure Container Apps Jobs
- GitHub Actions Runner on AKS with Autoscale: Sample: GitHub Runner on AKS with KEDA

Conclusion
There is no one-size-fits-all solution for Kubernetes autoscaling. Karpenter, KEDA, and Azure Arc each serve distinct roles:
- Use Karpenter for dynamic and cost-efficient node autoscaling on AWS.
- Use KEDA for scaling based on real-world signals like queue length and event spikes, especially on Azure.
- Use Azure Arc for consistent governance, visibility, and policy enforcement across all your Kubernetes environments.
In many real-world scenarios, combining these tools unlocks the best outcomes.
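As a concrete illustration of the KEDA pattern described above, a ScaledObject that scales a deployment on Azure Service Bus queue length might look like this (the deployment, queue, and TriggerAuthentication names are hypothetical):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler
spec:
  scaleTargetRef:
    name: orders-processor          # Deployment to scale
  minReplicaCount: 0                # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders
        messageCount: "50"          # target messages per replica
      authenticationRef:
        name: servicebus-trigger-auth   # TriggerAuthentication holding connection details
```

KEDA translates this into an HPA behind the scenes, which is why it composes cleanly with the rest of the Kubernetes autoscaling stack.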
Unable to trigger function app while using managed identity for the storage account connection
I am trying to create an Azure Function of BlobTrigger type, which needs to be triggered on dropping files into a storage account, say filessa. Due to a policy restriction, the storage account cannot use shared access keys. I am unable to trigger the function app by dropping a file into a container, and I intermittently see this error in the function app logs:

    No valid combination of account information found.
    assembly: Azure.Storage.Blobs, Version=12.23.0.0, Culture=neutral, PublicKeyToken=9279e12e44c8
    method: Azure.Storage.StorageConnectionString+<>c.<Parse>b__67_0
    outerType: Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexingException
    outerMessage: Error indexing method 'Functions.SPAREventGridBlobTrigger'
    innermostMessage: No valid combination of account information found.

I am referring to Configuring Azure Blob Trigger Identity Based Connection and have created the environment settings and assigned the required roles to the storage accounts (the function app's storage account, say fnsa, and the storage account used to upload the file that triggers the function app, filessa) as mentioned in that article. This is my simple code:

    [Function(nameof(SPAREventGridBlobTrigger))]
    public async Task Run(
        [BlobTrigger("samples-workitems/{name}",
            Source = BlobTriggerSource.EventGrid,
            Connection = "filessa_STORAGE")] Stream stream,
        string name)
    {
        using var blobStreamReader = new StreamReader(stream);
        var content = await blobStreamReader.ReadToEndAsync();
        Console.WriteLine("Hello from Jey Jey Jey");
        _logger.LogInformation($"C# Blob Trigger (using Event Grid) processed blob\n Name: {name} \n Data: {content}");
    }

I have assigned the Storage Blob Data Owner and Storage Queue Data Contributor roles on the storage account filessa, and the Storage Blob Data Contributor role on the storage account fnsa, to the Azure Function's identity. (Actually, I ended up adding many other roles too, like Storage Account Contributor and Storage Blob Data Reader, to both storage accounts.) Please advise me on the items to be added in the environment settings:
1. The name and value of the connection for the storage account filessa.
2. The name and value of the connection for the storage account fnsa.
3. Any other items that must be added to make this work.
I have tried adding items like AzureWebJobsStorage, AzureWebJobsStorage__accountName, AzureWebJobsStorage__blobServiceUri, ..., AzureWebJobsfilessa_STORAGE, and filessa_STORAGE. I have also referred to the Microsoft documentation Tutorial: Trigger Azure Functions on blob containers using an event subscription, and tried adding the event subscription in the storage account filessa. Calling the webhook https://FA-SPAREG-FA.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.SPAREventGridBlobTrigger&code=_MPRFuo9sdEg== in Postman with POST returned an error. Please help me with all the required environment settings to be added in the function app in Azure, and any other suggestions or steps I may have missed to make this work.
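For identity-based connections, the Functions host looks for app settings named <ConnectionName>__<property> rather than a single connection string. A hedged sketch of settings matching Connection = "filessa_STORAGE" (the service URIs follow the standard public-cloud endpoint pattern; adjust for your environment):

```
# Runtime's own storage account (fnsa), identity-based
AzureWebJobsStorage__accountName = fnsa

# Trigger connection for filessa: blob for the data, queue because the
# blob trigger also uses queues for receipts/poison handling
filessa_STORAGE__blobServiceUri  = https://filessa.blob.core.windows.net
filessa_STORAGE__queueServiceUri = https://filessa.queue.core.windows.net

# When using a user-assigned identity, also set (assumption, only needed then):
# filessa_STORAGE__credential = managedidentity
# filessa_STORAGE__clientId   = <client id of the identity>
```

With these in place, plain settings like filessa_STORAGE and AzureWebJobsfilessa_STORAGE should be removed; a leftover connection-string-style setting is a common cause of the "No valid combination of account information found" indexing error, because the host tries to parse it as an account key string.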
Azure Entra External ID - Password policy
Hi All, I am investigating using Azure Entra External ID as an external identity provider for a web app. I want to be able to set the password policy for password reset etc., but I can't find anything in the documentation. I have posted in some other groups, and my conclusion is that you can't change the password complexity when using Azure Entra External ID. Could someone advise whether this is correct, and if so, whether there are plans to add it or whether it needs an additional licence? We are using this for various SaaS projects, and not being able to set your own complexity seems odd to me.
Not sure what the etiquette is for multiple issues, but I have another issue with Azure Entra External ID: when a user that is not registered tries to log in, the message shown is "You can't sign in here with a personal account. Use your work or school account instead." This is incorrect and very misleading; it should be something like "No account with this email could be found". Can I change the message, or have I just configured something wrong? Thanks in advance.
HPC precheck error
Hi All, We are going to install a Windows 2019 single head node, but the precheck shows "Check if the account running setup is a domain account", and then it can't connect to SQL Server with Windows authentication. All computers and accounts are domain accounts; I don't know what policy causes this warning.
Azure role for managing Visual Studio subscribers
I want to grant Help Desk users the ability to manage and provision Visual Studio licenses from the VS admin center (https://manage.visualstudio.com). I prefer not to assign the User Access Administrator role, so I am looking for the key RBAC configuration for the sole purpose of managing user licenses for Visual Studio. Our VS subscription is attached to an Azure subscription.
How to integrate Azure IoT Hub with Azure Synapse in real time
Hello, I'm researching how to connect Azure IoT Hub with Azure Synapse. I've already used IoT Hub a bit, but I don't have any knowledge of Synapse. It is also required that the data be in real time, so if someone has already done something similar or knows where I can find answers, I would appreciate it. Have a good day.
Use of RSA keys restriction in AzureDevOps
I have a question regarding the Azure DevOps limitation on using RSA keys with 256 or 512 for SSH connections. These keys are deprecated and subject to security risks. The limitation I am talking about is in this document: Use SSH key authentication - Azure Repos | Microsoft Learn. Is there any update on this limitation, and if not, do we have any other options?
Demystifying Gen AI Models - Transformers Architecture: 'Attention Is All You Need'
The Transformer architecture demonstrated that carefully designed attention mechanisms, without the need for sequential recurrence, could model language and sequences more effectively and efficiently.
1. Transformers Replace Recurrence
Traditional models such as RNNs and LSTMs processed data sequentially. Transformers use self-attention mechanisms to process all tokens simultaneously, enabling parallelisation, faster training, and better handling of long-range dependencies.
2. Self-Attention is Central
Each token considers (attends to) all other tokens to gather contextual information. Attention scores are calculated between every pair of input tokens, capturing relationships irrespective of their position.
3. Multi-Head Attention Enhances Learning
Rather than relying on a single attention mechanism, the model utilises multiple attention heads. Each head independently learns different aspects of relationships (such as syntax or meaning). The outputs from all heads are then combined to produce richer representations.
4. Positional Encoding Introduced
As there is no recurrence, positional information must be introduced manually. Positional encodings (using sine and cosine functions of varying frequencies) are added to input embeddings to maintain the order of the sequence.
5. Encoder-Decoder Structure
The model is composed of two main parts:
- Encoder: a stack of layers that processes the input sequence.
- Decoder: a stack of layers that generates the output, one token at a time (whilst attending to the encoder outputs).
6. Layer Composition
Each encoder and decoder layer includes:
- Multi-Head Self-Attention
- Feed-Forward Neural Network (applied to each token independently)
- Residual Connections and Layer Normalisation to stabilise training.
7. Scaled Dot-Product Attention
Attention scores are calculated using dot products between queries and keys, scaled by the square root of the dimension to prevent excessively large values, before being passed through a softmax.
8. Simpler, Yet More Powerful
Despite removing recurrence, the Transformer outperformed more complex architectures such as stacked LSTMs on translation tasks (for instance, English-German). Training is considerably quicker (thanks to parallelism), particularly on long sequences.
9. Key Achievement
Transformers became the state-of-the-art model for many natural language processing tasks, paving the way for later innovations such as BERT, GPT, T5, and others.
The latest breakthrough in generative AI models is owed to the development of the Transformer architecture. Transformers were introduced in the 'Attention Is All You Need' paper by Vaswani et al. in 2017.
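The scaled dot-product attention described in point 7 can be written compactly, following the paper's notation, where Q, K, and V are the query, key, and value matrices and d_k is the key dimension:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

Multi-head attention then runs several of these in parallel over separately learned projections of Q, K, and V, and concatenates the results before a final linear projection.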
Can disable Two-factor Authentication via Mobile
Every time I sign in to my Teams Essentials account, the Admin Center, or Azure, I need to verify a code sent to my mobile after entering my password. I tried adding an Email authentication method under Security Info, but it doesn't work. I also tried disabling the function in Entra ID, which doesn't work either. Is there any method to use email instead of mobile to verify? Regards, Diamond
Recent Blogs
- Establishing redundant network connectivity is vital to ensuring the availability, reliability, and performance of workloads operating in hybrid and cloud environments. Proper planning and implementa... May 09, 2025
- Nextflow is one of the most widely adopted open-source workflow orchestrators in the scientific research domain. In genomics, a pipeline refers to a series of computational steps designed to analyze ... May 09, 2025