Latest Discussions
Video Recording: Azure Architecture Best Practices
The video recording from the free online event where I presented together with Microsoft Cloud Solution Architect Dominik Zemp on Azure Architecture Best Practices is now available. In this session, you will learn about proven guidance designed to help you architect, create, and implement the business and technology strategies necessary for your organization to succeed in the cloud. It provides the best practices, documentation, and tools that cloud architects, IT professionals, and business decision-makers need to achieve their short- and long-term objectives. We focus on topics like the Cloud Adoption Framework and the new Enterprise-Scale landing zone architecture.

Azure Architecture Best Practices Virtual Event Agenda:
- Introduction
- Why Azure Architecture?
- Introduction to the Cloud Adoption Framework
- What is Enterprise-Scale?
- Build landing zones with Enterprise-Scale
- Critical design areas
- Deployment using AzOps
- Demo
- Build on top of Enterprise-Scale – Well-Architected Framework for workloads and apps
- Q&A

Leverage Azure Durable Functions to build lightweight ETL Data Jobs
This blog is co-authored by Dr. Magesh Kasthuri, Distinguished Member of Technical Staff (Wipro), and Sanjeev Radhakishin Assudani, Azure COE Principal Architect (Wipro). This blog post aims to provide insights into how Azure Durable Functions can be considered as an alternative design choice for building a lightweight, Azure-native solution for data ingestion and transformation. While the solution discussed in this blog pertains to a healthcare industry customer, the design approach presented here is generic and applicable across industries.

The scenario
A leading healthcare provider planned to modernize its Medicare Auto Enrollment Engine (AEE) and Premium Billing capabilities to enable a robust, scalable, and cost-effective solution across its Medicare business line. One of the key requirements was to build an integration layer from their healthcare administration platform into its database to process the benefit enrollment and maintenance of hundreds of JSON files. The proposed solution would ingest, transform, and load the data into their database platform on a daily incremental-file and monthly audit-file basis. The challenge was to identify the most cost-effective ETL data engine that could handle complex processing in the integration layer while remaining lightweight. The possible solutions identified were:
- Azure Databricks
- MuleSoft APIs
- Azure Logic Apps
- Azure Durable Functions

After careful evaluation, Azure Durable Functions was chosen to build the integration layer. The following objectives were identified:
- Azure Durable Functions offers a modernized and scalable solution for building and managing serverless workflows.
- Lightweight data jobs can be implemented using Durable Functions, avoiding compute-intensive services when they are not needed.
- Optimized performance completes the end-to-end enrichment process within hours.

Solution components
In today's data-driven world, the ability to efficiently handle ETL (Extract, Transform, Load) jobs is crucial for any organization looking to gain insights from its data. Azure provides a robust platform for developing native ETL solutions, using a combination of Azure Data Factory (ADF) pipelines, Azure Durable Functions, Azure SQL Database, and Azure Storage. This article walks through the process of developing an Azure-native solution for ETL jobs, encompassing data load, ingestion, transformation, and staging activities. This approach avoids Azure Data Lake Storage (ADLS Gen2) and Databricks to prevent cost growth and a heavyweight architecture, and it helps you define a lightweight reference architecture for high-load data processing jobs.

Architecture Overview
The architecture for an Azure-native ETL solution involves several components working together seamlessly:
- Azure Data Factory (ADF) Pipeline: Orchestrates data flow and automates ETL processes.
- Azure Durable Functions: Handles ingestion and transformation tasks using C# and .NET code.
- Azure SQL Database: Used for data enrichment and final storage.
- Azure Storage: Stores raw feed files, manages staging activities, and holds temporary data.
- Application Insights & Monitoring: Provides observability and activity tracking.
- Durable Functions Monitor: Provides a UI to debug, monitor, and manage orchestration instances.
- Azure Key Vault: Stores secrets such as keys and connection strings.

Architecture Diagram

Azure Data Factory (ADF) Pipeline
ADF serves as the backbone of the ETL process.
It orchestrates the entire data flow, ensuring that data moves efficiently from one stage to another. ADF pipelines can be scheduled to run at specific intervals or triggered by events, providing flexibility in managing ETL workflows.

Azure Blob Storage
Azure Blob Storage acts as the initial landing zone for raw feed data. It is highly scalable and cost-effective, making it ideal for storing large volumes of data. Data is loaded into Blob Storage from various sources, ready for further processing.

Azure Durable Functions
Durable Functions are a powerful feature of Azure Functions that allow for long-running, stateful operations. Using C# and .NET code, Durable Functions can perform complex data ingestion and transformation tasks. They provide reliability and scalability, ensuring that data processing is efficient and fault-tolerant. A minimal orchestration sketch is shown below.
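To make the orchestration pattern concrete, here is a minimal sketch of a fan-out/fan-in Durable Functions orchestrator in C#. The activity names (ListPendingFilesActivity, TransformFileActivity, LoadToSqlActivity) are hypothetical placeholders, not the actual implementation from the project described above:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class EtlOrchestration
{
    // Orchestrator: coordinates the ETL steps. It must be deterministic;
    // all I/O happens inside the activity functions it calls.
    [FunctionName("EtlOrchestrator")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Activity returning the list of JSON blobs to process (hypothetical name).
        var files = await context.CallActivityAsync<List<string>>("ListPendingFilesActivity", null);

        // Fan out: transform every file in parallel, one activity invocation per file.
        var transforms = new List<Task<string>>();
        foreach (var file in files)
        {
            transforms.Add(context.CallActivityAsync<string>("TransformFileActivity", file));
        }

        // Fan in: wait for all transformations, then load the results into Azure SQL.
        string[] stagedPaths = await Task.WhenAll(transforms);
        await context.CallActivityAsync("LoadToSqlActivity", stagedPaths);
    }

    // Example activity: performs the actual (non-deterministic) work.
    [FunctionName("TransformFileActivity")]
    public static string TransformFile([ActivityTrigger] string blobPath)
    {
        // Read the raw JSON from Blob Storage, apply transformations,
        // and write the staged output; returning the staged path here.
        return blobPath + ".staged";
    }
}
```

The fan-out/fan-in pattern lets Durable Functions process hundreds of files in parallel while the framework checkpoints progress, which is what keeps the job reliable without heavyweight compute.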
Azure SQL Database
Azure SQL Database is used for data enrichment and final storage. After the transformation process, data is loaded into the SQL database, where it can be enriched with additional metadata and made ready for analytics and reporting. It provides high performance, security, and availability.

Azure Storage for Staging Activities
During the ETL process, intermediate data needs to be stored temporarily. Azure Storage plays a crucial role in managing these staging activities. It ensures that data is available for subsequent processing steps, maintaining the integrity and flow of the ETL pipeline.

Observability and Monitoring

Application Insights
Application Insights is an essential tool for monitoring the health and performance of your ETL solution. It provides real-time insights into application performance, helping to identify and troubleshoot issues quickly. By tracking metrics and logs, you can ensure that your ETL processes run smoothly and efficiently.

Activity Tracking
Activity tracking is crucial for understanding the flow and status of data through the ETL pipeline. Logging and monitoring tools can provide detailed information about each step in the process, allowing for better visibility and control. This ensures that any anomalies or failures are detected and addressed promptly.

Durable Functions Monitor
This is an important tool for listing, monitoring, and debugging the orchestrations inside a Durable Functions app. It can be configured as a Visual Studio Code extension. It helps you view the different instances of orchestrators and activity functions, along with the time taken to execute them, which is important for tracking the performance of the different steps in the ETL process. You can also view the app in the form of a function graph.

Kudu Logs
These trace the execution of the different orchestrators, activity functions, and native functions. They help you see the exceptions raised, and whether replays are happening for the orchestrators and activity functions.

Best Practices for Implementing the Solution
Here are some best practices to ensure the successful implementation of your Azure-native ETL solution:
- Design for Scalability: Ensure that your solution can handle increasing data volumes and processing demands by leveraging Azure's scalable services.
- Optimize Data Storage: Use appropriate data storage solutions for different stages of the ETL process, balancing cost and performance.
- Implement Robust Monitoring: Use Application Insights, Durable Functions Monitor, and other monitoring tools to track performance and detect issues early.
- Ensure Data Security: Implement strong security measures to protect sensitive data at rest and in transit.
- Automate and Schedule Pipelines: Use ADF to automate and schedule ETL pipelines, reducing manual intervention and ensuring consistency.
- Use Durable Functions for Complex Tasks: Leverage Azure Durable Functions for long-running, stateful operations, ensuring reliability and efficiency.

By following these guidelines and leveraging Azure's powerful tools and services, you can develop a robust and efficient ETL solution that meets your data processing needs. Azure provides a flexible and scalable platform, enabling you to handle large data volumes and complex transformations with ease. Embrace the power of Azure to unlock the full potential of your data.

Best Practices for Designing a Hub-and-Spoke Architecture in Azure
The Hub-and-Spoke architecture is a common networking model in Microsoft Azure, designed to improve security, manageability, and scalability for enterprises and cloud workloads. By centralizing network resources in a hub and connecting multiple spoke virtual networks (VNets), organizations can enforce governance while enabling controlled communication across workloads. However, designing an optimal Hub-and-Spoke architecture requires careful planning to ensure security, performance, and cost efficiency. This post explores best practices to help you build a robust and scalable architecture in Azure.

1. Understanding the Hub-and-Spoke Model
In this architecture:
- The Hub serves as the central point for connectivity, hosting shared services like firewalls, VPN/ExpressRoute gateways, and identity services.
- The Spokes are individual VNets that connect to the hub, typically representing isolated workloads, applications, or business units.
- Peering is used to establish communication between the hub and spokes, with the option to enable or restrict direct spoke-to-spoke communication.

Key Benefit: Centralized management of network traffic, security, and hybrid connectivity.

2. Designing an Effective Hub
The hub is the backbone of your architecture, so it must be designed with scalability and security in mind:
✅ Use Azure Virtual WAN if you need a global-scale Hub-and-Spoke deployment with automated routing and traffic management.
✅ Leverage Azure Firewall for centralized security and to enforce traffic control between spokes.
✅ Implement Network Security Groups (NSGs) to restrict inbound/outbound traffic and define granular security policies.
✅ Optimize traffic flow with Route Tables (UDRs) to avoid asymmetric routing and performance bottlenecks; a sketch of how route selection works follows this section.
✅ Ensure high availability by deploying redundant VPN or ExpressRoute gateways in active-active mode.

Tip: Avoid placing unnecessary workloads in the hub to prevent performance degradation.
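When several UDRs could match a destination, Azure selects the route with the longest (most specific) prefix. The following self-contained C# sketch illustrates that selection logic for IPv4 routes; the route entries are hypothetical examples, not output from any Azure API:

```csharp
using System;
using System.Linq;
using System.Net;

class RouteSelection
{
    // Returns true when the address falls inside the CIDR prefix.
    static bool Matches(IPAddress address, string cidr)
    {
        var parts = cidr.Split('/');
        uint prefix = ToUInt32(IPAddress.Parse(parts[0]));
        int length = int.Parse(parts[1]);
        uint mask = length == 0 ? 0 : uint.MaxValue << (32 - length);
        return (ToUInt32(address) & mask) == (prefix & mask);
    }

    static uint ToUInt32(IPAddress ip) =>
        BitConverter.ToUInt32(ip.GetAddressBytes().Reverse().ToArray(), 0);

    static void Main()
    {
        // Hypothetical route table: default route to the firewall, plus a more
        // specific route for on-premises traffic via the gateway.
        var routes = new[]
        {
            (Prefix: "0.0.0.0/0", NextHop: "Azure Firewall (10.0.1.4)"),
            (Prefix: "10.10.0.0/16", NextHop: "Virtual network gateway"),
        };

        var destination = IPAddress.Parse("10.10.5.20");

        // Longest-prefix match: among matching routes, pick the most specific one.
        var chosen = routes
            .Where(r => Matches(destination, r.Prefix))
            .OrderByDescending(r => int.Parse(r.Prefix.Split('/')[1]))
            .First();

        Console.WriteLine($"{destination} -> {chosen.NextHop} via {chosen.Prefix}");
        // Output: 10.10.5.20 -> Virtual network gateway via 10.10.0.0/16
    }
}
```

The same principle explains why a 0.0.0.0/0 UDR pointing at the firewall coexists safely with more specific routes: the specific ones always win for their ranges.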
3. Managing Spoke Communication and Isolation
Each spoke VNet should be logically and securely isolated while allowing required communication paths.
✅ Limit direct spoke-to-spoke communication by routing traffic through the hub unless specific business requirements demand otherwise.
✅ Use Private Endpoints to securely access PaaS services without exposing them to the public internet.
✅ Enforce Zero Trust principles by using Azure Private Link and restricting access to critical workloads.

Tip: Avoid transitive peering unless absolutely necessary; use the hub to manage inter-spoke traffic.

4. Performance and Cost Optimization
An efficient Hub-and-Spoke design ensures minimal latency and optimized costs.
✅ Use Accelerated Networking on VMs to enhance throughput and reduce network latency.
✅ Implement Azure Route Server to dynamically manage routes between the hub and spokes.
✅ Monitor network traffic with Azure Monitor and Traffic Analytics to detect bottlenecks and optimize network flow.
✅ Optimize ExpressRoute or VPN usage by choosing the right SKU based on bandwidth and redundancy needs.

Tip: Reduce unnecessary traffic through NSG rules and route tables to avoid extra processing costs.

5. Governance and Automation
To maintain consistency and reduce human error, use automation and governance best practices:
✅ Deploy infrastructure as code (IaC) using ARM templates, Bicep, or Terraform for reproducible deployments.
✅ Enforce security policies with Azure Policy to ensure compliance with networking standards.
✅ Use Role-Based Access Control (RBAC) to define strict access levels for managing network resources.
✅ Monitor and log network activity using Azure Sentinel and Azure Monitor to detect anomalies.

Tip: Automate network provisioning using Azure DevOps or GitHub Actions for efficiency and consistency.

Final Thoughts
A well-designed Hub-and-Spoke architecture in Azure provides centralized security, simplified management, and scalable connectivity. However, to maximize its benefits, it's essential to carefully plan network security, routing, and cost optimization while leveraging Azure's built-in automation and monitoring tools.

🔹 What challenges have you faced when implementing a Hub-and-Spoke model in Azure?
🔹 What best practices have worked well for your organization?

Let's discuss in the comments! 🚀

MercedesCustodio24 · Feb 19, 2025

Azure Firewall: Deploy, Configure & Control network access
I have written a quick tutorial with videos about how to deploy and configure network access in Azure Firewall with a single-VNet model. In this article we will see how to manage or restrict network access for Azure VMs using Azure Firewall. We will also see how to restrict or limit access to websites, outbound IPs, ports, protocols, and so on. In the articles referenced below, I have made a step-by-step tutorial for creating a test environment for learning purposes; you can follow the same, deploy the test setup, and play with the rules to become familiar. Below is the high-level design of the test environment that we will deploy in Azure.

High Level Architecture

Deployment Approach
Here is the high-level approach for deploying a single-VNet test environment with Azure Firewall. For detailed, step-by-step instructions and videos, please refer to the articles linked at the end.

Create Resource Group
1. Sign in to the Azure portal at https://portal.azure.com/.
2. On the Azure portal menu, select Resource groups, or search for and select Resource groups from any page. Then select Add.
3. For Resource group name, enter <Jasparrow>.
4. For Subscription, select your subscription.
5. For Resource group location, select a location. All other resources that you create must be in the same location.
6. Select Create.

Create Virtual Network & Add Subnets
1. On the Azure portal menu or from the Home page, select Create a resource.
2. Select Networking > Virtual network.
3. For Subscription, select your subscription.
4. For Resource group, select <Jasparrow>.
5. For Name, type Test-FW-VN.
6. For Region, select the same location that you used previously.
7. Select Next: IP addresses.
8. For IPv4 Address space, type 10.0.0.0/16.
9. Under Subnet, select default.
10. For Subnet name, type AzureFirewallSubnet. The firewall will be in this subnet, and the subnet name must be AzureFirewallSubnet.
11. For Address range, type 10.0.1.0/26. Select Save.
12. Next, create a subnet for the workload server. Select Add subnet.
13. For Subnet name, type Workload-SN. For Subnet address range, type 10.0.2.0/24. Select Add.
14. Select Review + create, and then select Create.

Create Virtual Machine
Now create the workload virtual machine and place it in the Workload-SN subnet.
1. On the Azure portal menu or from the Home page, select Create a resource.
2. Select Compute and then select Virtual machine. Select Windows Server 2019 Datacenter in the Featured list, and enter the values for the virtual machine.
3. Under Inbound port rules > Public inbound ports, select None.
4. Accept the other defaults and select Next: Disks.
5. Accept the disk defaults and select Next: Networking.
6. Make sure that Test-FW-VN is selected for the virtual network and the subnet is Workload-SN.
7. For Public IP, select None.
8. Accept the other defaults and select Next: Management.
9. Select Off to disable boot diagnostics. Accept the other defaults and select Review + create.
10. Review the settings on the summary page, and then select Create.

Deploy Azure Firewall
1. On the Azure portal menu or from the Home page, select Create a resource.
2. Type firewall in the search box and press Enter. Select Firewall and then select Create.
3. On the Create a Firewall page, configure the firewall settings.
4. Select Review + create.
5. Review the summary, and then select Create to create the firewall. This will take a few minutes to deploy.
6. After deployment completes, go to the <Jasparrow> resource group, and select the Test-FW01 firewall.
7. Note the firewall's private and public IP addresses. You'll use these addresses later.
Creating a Default Route
For the Workload-SN subnet, configure the outbound default route to go through the firewall.
1. On the Azure portal menu, select All services, or search for and select All services from any page.
2. Under Networking, select Route tables. Select Add.
3. For Name, type Firewall-route.
4. For Subscription, select your subscription.
5. For Resource group, select <Jasparrow>.
6. For Location, select the same location that you used previously. Select Create.
7. Select Refresh, and then select the Firewall-route route table.
8. Select Subnets and then select Associate.
9. Select Virtual network > Test-FW-VN.
10. For Subnet, select Workload-SN. Make sure that you select only the Workload-SN subnet for this route; otherwise your firewall won't work correctly. Select OK.
11. Select Routes and then select Add.
12. For Route name, type fw-dg.
13. For Address prefix, type 0.0.0.0/0.
14. For Next hop type, select Virtual appliance. Azure Firewall is actually a managed service, but Virtual appliance works in this situation.
15. For Next hop address, type the private IP address of the firewall that you noted previously. Select OK.

Creating an Application Rule
This is the application rule that allows outbound access to www.google.com.
1. Open the <Jasparrow> resource group and select the Test-FW01 firewall.
2. On the Test-FW01 page, under Settings, select Rules.
3. Select the Application rule collection tab, and then select Add application rule collection.
4. For Name, type App-Coll01. For Priority, type 200. For Action, select Allow.
5. Under Rules > Target FQDNs, for Name, type Allow-Google.
6. For Source type, select IP address. For Source, type 10.0.2.0/24.
7. For Protocol:port, type http, https.
8. For Target FQDNs, type www.google.com. Select Add.

Creating a Network Rule
This is the network rule that allows outbound access to two IP addresses at port 53 (DNS).
1. Select the Network rule collection tab, and then select Add network rule collection.
2. For Name, type Net-Coll01. For Priority, type 200. For Action, select Allow.
3. Under Rules > IP addresses, for Name, type Allow-DNS.
4. For Protocol, select UDP.
5. For Source type, select IP address. For Source, type 10.0.2.0/24.
6. For Destination type, select IP address. For Destination address, type 209.244.0.3,209.244.0.4. These are public DNS servers operated by CenturyLink.
7. For Destination Ports, type 53. Select Add.

Creating a NAT Rule
This rule allows you to connect a remote desktop to the Srv-Work virtual machine through the firewall.
1. Select the NAT rule collection tab, and then select Add NAT rule collection.
2. For Name, type rdp. For Priority, type 200.
3. Under Rules, for Name, type rdp-nat.
4. For Protocol, select TCP.
5. For Source type, select IP address. For Source, type *.
6. For Destination address, type the firewall's public IP address. For Destination Ports, type 3389.
7. For Translated address, type the Srv-Work private IP address. For Translated port, type 3389. Select Add.

DNS Configuration & Testing
For testing purposes in this tutorial, configure the server's primary and secondary DNS addresses. This isn't a general Azure Firewall requirement.
1. On the Azure portal menu, select Resource groups, or search for and select Resource groups from any page. Select the <Jasparrow> resource group.
2. Select the network interface of the Srv-Work virtual machine.
3. Under Settings, select DNS servers.
4. Under DNS servers, select Custom.
5. Type 209.244.0.3 in the Add DNS server text box, and 209.244.0.4 in the next text box. Select Save.
6. Restart the Srv-Work virtual machine.

Test the firewall
Now, test the firewall to confirm that it works as expected. If you prefer to script the check, a small probe like the one below can run from the Srv-Work VM.
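Here is a minimal sketch of such a probe in C#, assuming it runs on Srv-Work behind the firewall. It simply confirms that the allowed FQDN responds while other destinations are blocked; the URL list mirrors the rules created above:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class FirewallProbe
{
    static async Task Main()
    {
        // www.google.com is allowed by App-Coll01; www.microsoft.com should be
        // blocked because no application rule permits it.
        string[] targets = { "https://www.google.com", "https://www.microsoft.com" };

        using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) };

        foreach (var url in targets)
        {
            try
            {
                var response = await client.GetAsync(url);
                // Allowed traffic returns the site's normal status code.
                Console.WriteLine($"{url} -> {(int)response.StatusCode} {response.StatusCode}");
            }
            catch (Exception ex)
            {
                // Blocked HTTPS traffic typically surfaces as a connection failure
                // or timeout, because the firewall denies the outbound connection.
                Console.WriteLine($"{url} -> blocked ({ex.GetType().Name})");
            }
        }
    }
}
```

To test manually from the portal instead: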
1. Connect a remote desktop to the firewall's public IP address and sign in to the Srv-Work virtual machine.
2. Open Internet Explorer and browse to https://www.google.com.
3. Select OK > Close on the Internet Explorer security alerts. You should see the Google home page.
4. Browse to https://www.microsoft.com. You should be blocked by the firewall.

So now you've verified that the firewall rules are working:
- You can browse to the one allowed FQDN, but not to any others.
- You can resolve DNS names using the configured external DNS server.

Reference
http://jasparrow.info/2020/08/azure-firewall-deploy-configure-control-network-access/
https://www.youtube.com/watch?v=BDsfXizDPF4&list=PLjcu2kXIXMaAL43ZpMJz86DsV3xNQZhD3

Regards,
Jason

Jason_Prabhu · Aug 08, 2020

Azure AD test tenant
Hello Community, I'm starting this discussion because I would like your input regarding the best way to build a test tenant in Azure. We have a prod tenant, and for some feature testing and some tenant-wide configuration changes, we want to have a test tenant. This test tenant needs to have some users (synced from on-prem by AD Connect) and the same configuration as our prod tenant. Do you have any experiences, recommendations, or processes for this type of configuration? Thanks for sharing your knowledge 🙂

(Solved) NicolasHon · Nov 01, 2021

Private AKS Deployment with Application Gateway: Leveraging Terraform and Azure DevOps
Introduction
This repository provides a comprehensive guide and toolkit for creating a private Azure Kubernetes Service (AKS) cluster using Terraform. It showcases a detailed process for deploying a private AKS cluster with robust integrations, including Azure Container Registry, Azure Storage Account, Azure Key Vault, and more, using Terraform as the infrastructure-as-code (IaC) tool.

Repository
For complete details and Terraform scripts, visit my GitHub repository at https://github.com/yazidmissaoui/PrivateAKSCluster-Terraform. This project mirrors the architecture suggested by Microsoft, providing a practical implementation of their recommended private AKS cluster setup. For further reference on the Microsoft architecture, visit their guide here: https://learn.microsoft.com/en-us/azure/architecture/example-scenario/aks-agic/aks-agic.

Description
This sample shows how to create a private AKS cluster (https://docs.microsoft.com/en-us/azure/aks/private-clusters) using:
- Terraform (https://www.terraform.io/intro/index.html) as the infrastructure-as-code (IaC) tool to build, change, and version the infrastructure on Azure in a safe, repeatable, and efficient way.
- Azure Pipelines (https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started/what-is-azure-pipelines?view=azure-devops) to automate the deployment and undeployment of the entire infrastructure across multiple environments on the Azure platform.

In a private AKS cluster, the API server endpoint is not exposed via a public IP address. Hence, to manage the API server, you will need to use a virtual machine that has access to the AKS cluster's Azure Virtual Network (VNet). This sample deploys a jumpbox virtual machine in the hub virtual network, peered with the virtual network that hosts the private AKS cluster. There are several options for establishing network connectivity to the private cluster:
- Create a virtual machine in the same Azure Virtual Network (VNet) as the AKS cluster.
- Use a virtual machine in a separate network and set up virtual network peering. See the section below for more information on this option.
- Use an ExpressRoute or VPN connection.

Creating a virtual machine in the same virtual network as the AKS cluster, or in a peered virtual network, is the easiest option. ExpressRoute and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges (a small overlap-check sketch follows). For more information, see https://docs.microsoft.com/en-us/azure/aks/private-clusters. For more information on Azure Private Link, see https://docs.microsoft.com/en-us/azure/private-link/private-link-overview.
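As a planning aid, here is a minimal, self-contained C# sketch (an assumed example, not part of the repository) that checks whether two IPv4 CIDR ranges overlap. Two blocks overlap exactly when they agree on the first min(len1, len2) bits of their network addresses:

```csharp
using System;
using System.Linq;
using System.Net;

class CidrOverlap
{
    static uint ToUInt32(IPAddress ip) =>
        BitConverter.ToUInt32(ip.GetAddressBytes().Reverse().ToArray(), 0);

    static (uint Network, int Length) Parse(string cidr)
    {
        var parts = cidr.Split('/');
        return (ToUInt32(IPAddress.Parse(parts[0])), int.Parse(parts[1]));
    }

    // Two IPv4 CIDR blocks overlap iff they share the first
    // min(lenA, lenB) bits of their network addresses.
    static bool Overlaps(string cidrA, string cidrB)
    {
        var a = Parse(cidrA);
        var b = Parse(cidrB);
        int shared = Math.Min(a.Length, b.Length);
        uint mask = shared == 0 ? 0 : uint.MaxValue << (32 - shared);
        return (a.Network & mask) == (b.Network & mask);
    }

    static void Main()
    {
        // Hypothetical hub and spoke address spaces for the peering plan.
        Console.WriteLine(Overlaps("10.0.0.0/16", "10.1.0.0/16")); // False: safe to peer
        Console.WriteLine(Overlaps("10.0.0.0/8",  "10.1.0.0/16")); // True: overlapping
    }
}
```

Running this over every pair of hub and spoke address spaces before peering avoids the most common cause of failed or misrouted peerings.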
In addition, the sample creates a private endpoint to access all the managed services deployed by the Terraform modules via a private IP address:
- Azure Container Registry
- Azure Storage Account
- Azure Key Vault

NOTE: If you want to deploy a private AKS cluster with a public DNS address (https://docs.microsoft.com/en-us/azure/aks/private-clusters#create-a-private-aks-cluster-with-a-public-dns-address) to simplify the DNS resolution of the API server to the private IP address of the private endpoint, you can use the project under my https://github.com/paolosalvatori/private-cluster-with-public-dns-zone account or on https://github.com/Azure/azure-quickstart-templates/tree/master/demos/private-aks-cluster-with-public-dns-zone.

Architecture
The pictures in the repository show the high-level architecture created by the Terraform modules included in this sample, along with a more detailed view of the infrastructure on Azure. The architecture is composed of the following elements:
- A hub virtual network with the following subnets:
  - AzureBastionSubnet, used by Azure Bastion
  - AzureFirewallSubnet, used by Azure Firewall
- A new virtual network with three subnets:
  - SystemSubnet, used by the AKS system node pool
  - UserSubnet, used by the AKS user node pool
  - VmSubnet, used by the jumpbox virtual machine and private endpoints
- A user-defined managed identity, used by the private AKS cluster to create additional resources like load balancers and managed disks in Azure.
- The private AKS cluster, composed of:
  - A system node pool hosting only critical system pods and services. The worker nodes have a node taint which prevents application pods from being scheduled on this node pool.
  - A user node pool hosting user workloads and artifacts.
- An Azure Firewall used to control the egress traffic from the private AKS cluster. For more information on how to lock down your private AKS cluster and filter outbound traffic, see:
  - https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic
  - https://docs.microsoft.com/en-us/azure/firewall/protect-azure-kubernetes-service
- An AKS cluster with a private endpoint to the API server hosted by an AKS-managed Azure subscription. The cluster can communicate with the API server, exposed via a Private Link Service, using a private endpoint.
- An Azure Bastion resource that provides secure and seamless SSH connectivity to the jumpbox virtual machine directly in the Azure portal over SSL.
- An Azure Container Registry (ACR) to build, store, and manage container images and artifacts in a private registry for all types of container deployments. When the ACR SKU is Premium, a private endpoint is created to allow the private AKS cluster to access ACR via a private IP address. For more information, see https://docs.microsoft.com/en-us/azure/container-registry/container-registry-private-link.
- A jumpbox virtual machine used to manage the Azure Kubernetes Service cluster.
- A Private DNS Zone for the name resolution of each private endpoint (a quick resolution check is sketched after this list).
- A Virtual Network Link between each Private DNS Zone and both the hub and spoke virtual networks.
- A Log Analytics workspace to collect the diagnostics logs and metrics of both the AKS cluster and the jumpbox virtual machine.
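Since every private endpoint here depends on its Private DNS Zone resolving correctly from inside the VNet, a quick check from the jumpbox can confirm that a service FQDN resolves to a private IP rather than a public one. This is a minimal sketch; the registry name is a hypothetical placeholder:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class PrivateEndpointCheck
{
    static async Task Main()
    {
        // Hypothetical ACR login server; replace with your registry's FQDN.
        const string fqdn = "myregistry.azurecr.io";

        IPAddress[] addresses = await Dns.GetHostAddressesAsync(fqdn);

        foreach (var ip in addresses)
        {
            Console.WriteLine($"{fqdn} -> {ip} (private: {IsPrivate(ip)})");
        }
    }

    // RFC 1918 private ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
    static bool IsPrivate(IPAddress ip)
    {
        if (ip.AddressFamily != AddressFamily.InterNetwork) return false;
        byte[] b = ip.GetAddressBytes();
        return b[0] == 10
            || (b[0] == 172 && b[1] >= 16 && b[1] <= 31)
            || (b[0] == 192 && b[1] == 168);
    }
}
```

Run from the jumpbox, a private 10.x address confirms the Virtual Network Link is working; a public address means the query bypassed the Private DNS Zone.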
YazidMissaoui · Jan 09, 2024

Logging best practices consideration

Best practices for an optimal Log Analytics workspace design:
- Use as few Log Analytics workspaces as possible; consolidate as much as you can into a "central" workspace.
- Avoid bandwidth costs by creating "regional" workspaces, so that the sending Azure resource is in the same Azure region as your workspace.
- Explore Log Analytics RBAC options like "resource-centric" and "table-level" RBAC before creating a workspace based on your RBAC requirements.
- Consider table-level retention when you need different retention settings for different types of data.
- Use ARM templates to deploy your virtual machines, including the deployment and configuration of the Log Analytics VM extension. Ensure alignment with Azure Policy assignments to avoid conflicts.
- Use Azure Policy to enforce compliance for installing and configuring the Log Analytics VM extension. Ensure alignment with your DevOps team if using ARM templates.
- Avoid multi-homing; it can have undesired outcomes. Strive to resolve this by applying proper RBAC.
- Be selective in installing Azure monitoring solutions to control ingestion costs.

Choosing the right technical design versus the right licensing model
In the on-premises world, a technical design would be predominantly CapEx-driven. In a pay-as-you-go model (the Azure model), it is primarily OpEx-driven. OpEx will more likely drive a Log Analytics workspace design based on the projection of costs related to data sent and ingested. This is a valid concern, but if wrongly addressed, it can have a negative OpEx outcome due to operational complexities when using the data in Azure Security Center or Azure Sentinel. The additional OpEx costs caused by operational complexities are often hidden and less visible than your monthly bill. This document aims to address those complexities to help you make the right design choice. Please refer to https://learn.microsoft.com/en-us/azure/azure-monitor/logs/workspace-design for additional recommended reading.

Use as few Log Analytics workspaces as possible
Recommendation: Use one or more central (regional) workspace(s).

Having a single workspace is technically the best choice; it provides the following benefits:
- All data resides in one place.
- Efficient, fast, and easy correlation of your data.
- Full support for creating analytics rules in Azure Sentinel.
- One RBAC and delegation model to design.
- Simplified dashboard authoring using Azure Workbooks, avoiding cross-workspace queries.
- Easier manageability and deployment of the Azure Monitor VM extension; you know which resource is sending data to which workspace.
- Prevents autonomous workspace sprawl.
- A clear licensing model, versus a mix of free and paid workspaces.
- Getting insight into costs and consumption is easier with one workspace (see the query sketch below).

Having one single workspace also has the following (current) disadvantages:
- Licensing model: this can be a disadvantage if you do not care about long-term storage of specific data types*
- All data shares the same retention settings*
- Configuring fine-grained RBAC requires more effort.
- If data is sent from virtual machines outside the Azure datacenter region where your workspace resides, it will incur costs.
- Chargeback is harder, versus every business unit having its own workspace.
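For that cost-and-consumption insight, a sketch like the following (using the Azure.Monitor.Query and Azure.Identity packages; the workspace ID is a placeholder) summarizes billable ingestion per table over the last 30 days:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;

class IngestionReport
{
    static async Task Main()
    {
        // Placeholder workspace ID; use your Log Analytics workspace GUID.
        const string workspaceId = "00000000-0000-0000-0000-000000000000";

        var client = new LogsQueryClient(new DefaultAzureCredential());

        // KQL against the Usage table: billable volume per data type, in GB.
        // Quantity is reported in MB, hence the division by 1024.
        const string query = @"
            Usage
            | where IsBillable == true
            | summarize IngestedGB = sum(Quantity) / 1024 by DataType
            | order by IngestedGB desc";

        var response = await client.QueryWorkspaceAsync(
            workspaceId, query, new QueryTimeRange(TimeSpan.FromDays(30)));

        foreach (var row in response.Value.Table.Rows)
        {
            Console.WriteLine($"{row["DataType"]}: {row["IngestedGB"]:F2} GB");
        }
    }
}
```

If one table dominates ingestion, table-level retention or a different pricing tier may matter more than splitting workspaces, and the same Usage data supports chargeback discussions.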
Avoid bandwidth costs by creating "regional" workspaces
Recommendation: Locate your workspace in the same region as your Azure resources.

If your Azure resources are deployed outside your workspace's Azure region, additional bandwidth costs will negatively affect your monthly Azure bill. All inbound (ingress) data transfers to Azure datacenters, for example from on-premises resources or other clouds, are free. However, Azure outbound (egress) data transfers from one Azure region to another incur charges. At the time of writing, there are 54 regions in 140 countries available. Please check this link for the latest updates. All outbound traffic between regions is charged; pricing information can be found here. All inbound traffic is free of charge. For example, sending data between West Europe and North Europe will be charged.

Chandrasekhar_Arya · Jul 03, 2023

AWESOME Azure Architecture
Do you want to know some great community and official Microsoft resources related to Azure architecture? Check out the Awesome Azure Architecture list; this list is an ongoing investment by myself and community members to help people find relevant content to empower their Azure journey. It is a curated list of AWESOME blogs, videos, tutorials, code, tools & scripts related to the design and implementation of solutions in Microsoft Azure. The list contains anything that can help with your Microsoft Azure architecture and quickly get you up and running when designing, planning, and implementing services that empower organisations around the planet to achieve more. It can be found at https://aka.ms/AwesomeAzureArchitecture.

If you want to be part of a community of like-minded learners, feel free to look at the Microsoft Learning Rooms. A Learning Room is Microsoft Teams based, allowing you to chat with Learning Experts and other learners to test your theories and share your learnings. I host a learning room called Azure Cloud Commanders! Join in and start some of the discussions!

lukemurraynz · Jun 14, 2023

Azure Architecture Well-Architected framework
Learn how to plan for an outage and anticipate failure in your workload: https://techcommunity.microsoft.com/t5/azure-architecture-blog/planning-for-an-outage-here-is-how-to-anticipate-failure-in-your/ba-p/3295770

What do you do to keep your services going 24x7? Do you use the Microsoft Well-Architected Framework as a baseline?

EricStarker · Apr 27, 2022

Logic Apps and VNET access without ISE?
Hello,

So the Azure Integration Service Environment (ISE) is an awesome thing, but not cheap. With the ultimate goal of using Logic Apps to fetch (and push) data from on-prem data sources via ExpressRoute, is there some way (a workaround, perhaps with Function Apps or an APIM instance?) that doesn't require an ISE to do this? I'd rather not fall back to using data gateways or a relay...

Regards,
J. Kahl

JackK1870 · Jul 25, 2020