Latest Discussions
SQL Server services set to Delayed Start - Why?
Reference: SQL Server services are set to Automatic (Delayed Start) start mode

In SQL Server 2022 (16.x), setting the Start Mode for a SQL Server service to Automatic in Configuration Manager will configure that service to start in Automatic (Delayed Start) mode instead, even though the Start Mode shows as Automatic. Why did Microsoft make this the default starting with SQL Server 2022? What are the pros of doing this? Should we manually set services to Delayed Start on lower versions of SQL Server that are set to Automatic start mode?

azure file share resource group name is in lowercase instead of uppercase prefix 'mc'
I have several AKS clusters in Azure. When creating the clusters with Terraform, I see a default resource group created in the format 'MC_<CLUSTER_NAME>-aks_<CLUSTER_NAME>_<CLUSTER_REGION>'. My setup is like this: I created an AKS cluster, and I have some Azure file shares which are loaded as PVs in Kubernetes. I want to back up those file shares via an Azure Recovery Services vault's backup policy and a 'file share' backup item, so I also created an Azure Recovery Services vault. While working with protected items (file shares) in the vault, I see that the source file shares' resource group is prefixed with lowercase 'mc_' instead of the uppercase prefix stated above. Instead of 'MC_<CLUSTER_NAME>-aks_<CLUSTER_NAME>_<CLUSTER_REGION>', it is shown as 'mc_<CLUSTER_NAME>-aks_<CLUSTER_NAME>_<CLUSTER_REGION>' for the file shares. Can you tell us why this is? The problem I am having is that when I have some Azure file shares as vault protected items and I try to import those resources into the Terraform state file, I get a plan in which Terraform wants to re-create the backup items in the vault because the resource group name does not match (due to the 'mc' letter case). Azure appears to treat this parameter as case-sensitive during API calls (our suspicion). That is why, even though the resource group is prefixed with 'MC_' (uppercase) everywhere else (other Azure UIs, my terraform import resource ID, the resource group UI, and even the Azure docs), the vault backup item for the file share shows it prefixed with 'mc_' (lowercase) in the UI. Can you please explain why this is? Here is an example where I am importing Azure's data into the Terraform state file; this should not cause any replacement. Please note the 'source_storage_account_id' entry, which is shown as causing replacement due to the case issue.
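As an aside, the mismatch is easy to reproduce in isolation: most ARM APIs treat the resource-group segment of a resource ID case-insensitively, while Terraform's plan diff compares the ID strings byte for byte. A minimal Python sketch (the subscription, resource group, and storage account names below are hypothetical placeholders):

```python
# Terraform compares resource IDs byte for byte, so 'mc_' != 'MC_',
# while ARM itself resolves resource group names case-insensitively.
imported_id = ("/subscriptions/SUB/resourceGroups/mc_myaks_rg"
               "/providers/Microsoft.Storage/storageAccounts/mysa")
declared_id = ("/subscriptions/SUB/resourceGroups/MC_myaks_rg"
               "/providers/Microsoft.Storage/storageAccounts/mysa")

# Byte-for-byte comparison (what the Terraform diff effectively does):
strict_equal = imported_id == declared_id

# Case-insensitive comparison (how ARM resolves the group name):
arm_equal = imported_id.lower() == declared_id.lower()

print(strict_equal, arm_equal)  # False True
```

This suggests the usual workaround on the Terraform side: write the resource ID in the configuration with the exact casing the API returns for that item, even when it differs from the casing shown elsewhere.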
Case issues occur here ([a] 'mc' vs 'MC'; [b] 'Microsoft.Storage' vs 'Microsoft.storage'):

[a] .../resourceGroups/mc_CLUSTER_NAME-aks_CLUSTER_NAME_CLUSTER_REGION/... --> /resourceGroups/mc_CLUSTER_NAME-aks_CLUSTER_NAME_CLUSTER_REGION/...
[b] ...providers/Microsoft.Storage/storageAccounts/STORAGE_ACCOUNT_NAME --> providers/Microsoft.storage/storageAccounts/STORAGE_ACCOUNT_NAME

From terraform plan:

```
# module.main.azurerm_backup_protected_file_share.my_fileshares["STORAGE_ACCOUNT_NAME_pvc-<FILE_SHARE_ID>"] must be replaced
# (imported from "/subscriptions/SUBSCRIPTION_ID/resourceGroups/CLUSTER_NAME-projectX/providers/Microsoft.RecoveryServices/vaults/CLUSTER_NAME-vault/backupFabrics/Azure/protectionContainers/StorageContainer;storage;mc_CLUSTER_NAME-aks_CLUSTER_NAME_CLUSTER_REGION;STORAGE_ACCOUNT_NAME/protectedItems/AzureFileShare;FILE_SHARE_FRIENDLY_NAME")
# Warning: this will destroy the imported resource
-/+ resource "azurerm_backup_protected_file_share" "my_fileshares" {
      backup_policy_id          = "/subscriptions/SUBSCRIPTION_ID/resourceGroups/CLUSTER_NAME-projectX/providers/Microsoft.RecoveryServices/vaults/CLUSTER_NAME-vault/backupPolicies/CLUSTER_NAME-daily-backup"
    ~ id                        = "/subscriptions/SUBSCRIPTION_ID/resourceGroups/CLUSTER_NAME-projectX/providers/Microsoft.RecoveryServices/vaults/CLUSTER_NAME-vault/backupFabrics/Azure/protectionContainers/StorageContainer;storage;mc_CLUSTER_NAME-aks_CLUSTER_NAME_CLUSTER_REGION;STORAGE_ACCOUNT_NAME/protectedItems/AzureFileShare;FILE_SHARE_FRIENDLY_NAME" -> (known after apply)
      recovery_vault_name       = "CLUSTER_NAME-vault"
      resource_group_name       = "CLUSTER_NAME-projectX"
      source_file_share_name    = "pvc-<FILE_SHARE_ID>"
    ~ source_storage_account_id = "/subscriptions/SUBSCRIPTION_ID/resourceGroups/mc_CLUSTER_NAME-aks_CLUSTER_NAME_CLUSTER_REGION/providers/Microsoft.Storage/storageAccounts/STORAGE_ACCOUNT_NAME" -> "/subscriptions/SUBSCRIPTION_ID/resourceGroups/mc_CLUSTER_NAME-aks_CLUSTER_NAME_CLUSTER_REGION/providers/Microsoft.storage/storageAccounts/STORAGE_ACCOUNT_NAME" # forces replacement
    }
```

You can verify this case issue yourself:

1. Take any resource group named in the format 'MC_<CLUSTER_NAME>-aks_<CLUSTER_NAME>_<CLUSTER_REGION>'.
2. Check the resources under it: some show the resource group with the lowercase 'mc_' prefix, some with the uppercase 'MC_' prefix. For example, lowercase 'mc_' prefix: the aks-agentpool NSG (network security group), the aks-agentpool route table, the Kubernetes load balancer, and the pvc-prefixed disks. Uppercase 'MC_' prefix: aks-defaultpool (virtual machine scale set), the random-ID storage account (e.g. STORAGE_ACCOUNT_NAME), and the managed identity.
3. Check a file share backup item in a Recovery Services vault. The share is shown with the lowercase 'mc_' prefix.

Can you tell us why the case is shown differently? And are API calls for resource IDs case-sensitive?

zayedmahmud · Dec 28, 2024 · Copper Contributor

Announcing the winners of the December 2024 Innovation Challenge
The Innovation Challenge hackathon brings together developers from groups who are underrepresented in technology to solve for AI use cases from Azure customers. We're proud to be supporting these organizations who helped to prepare the participants of our most recent hackathon: BITE-CON, Código Facilito, DIO, GenSpark, Microsoft Software and Systems Academy (MSSA), TechBridge, and Women in Cloud.

To qualify for the hackathon, participants had to earn a Microsoft Applied Skills credential or one of these Azure certifications: Azure AI Engineer Associate, Azure Developer Associate, or Azure Data Scientist Associate. Our goal for everyone who participates is to help them open doors to new career opportunities by demonstrating in-demand skills and the ability to work with a team to deliver a working proof of concept under a deadline.

The winning projects solved for a range of real-world AI challenges:

- Observability for AI systems: ensure that systems operate effectively, ethically, and reliably by identifying issues like model drift, bias, performance degradation, and data quality problems.
- VoiceRAG: how do you implement retrieval-augmented generation (RAG), the prevailing pattern for combining language models with your own data, in a system that uses audio for input and output?
- Accessibility for state and local government websites: how can AI be used to ensure that both web content and downloadable documents meet the Web Content Accessibility Guidelines (WCAG) international standard?
- Hallucination detection and context validation: how could output automatically be cross-referenced with a reliable knowledge base or API? How do you provide confidence scores and explanations for detected hallucinations?
- Role-based content filtering for AI outputs: create an AI output moderation system that filters or adjusts generated content based on user roles and access levels to prevent misuse or exposure of restricted data.
- AI search innovation: our industry has only just begun combining AI search with RAG. What can you build that demonstrates the possibilities for improving the ways we interact with information online?

There were many very strong projects and the judges had to make some hard decisions. We're sure that every team that submitted a project will be doing epic stuff in the near future! Here are the projects awarded by the judges.

First place ($10,000):
- Azure Insight Lens: Model Monitoring and Observability: a comprehensive AI model monitoring and observability solution, designed to enhance model performance and optimize efficiency.

Second place ($5,000):
- Edu Echo: a voice-first education platform designed to help 4th, 5th, and 6th grade students excel in math and language arts.
- AbleSphere: an AI-powered educational support application that empowers students with disabilities by providing real-time, personalized assistance.

Third place ($2,500):
- FAITH (Framework for AI Integrity and Testing Hallucinations): an Azure AI based web application used to find hallucinations and ensure integrity among various AI models and LLMs, along with confidence scores, complete reasoning, and detailed analytics and visualizations by comparing with external knowledge sources.
- Content-o: enables organizations, whether in the financial, health, or service sectors, to offer their employees, associates, and third parties a point of access to receive information aligned and adjusted to their roles.
- AI Search for Agricultural Planning and Control: an AI-powered assistant tailored to the Brazilian agricultural sector, adhering to local legislation.

We'll have our next hackathon in March 2025! Looking forward to getting inspired by what this community can do!

macalde · Dec 27, 2024 · Microsoft

How to connect ADF to AWS RDS
We are trying to connect an Azure Data Factory (ADF) to an AWS RDS (Relational Database Service) instance that does not have public access. We have a working VPN up and running; however, we cannot connect over the private network to the AWS IP address/RDS server name. Looking online, there is no good way to do this without creating an additional NAT instance VM. Is there a way to do this with a cloud-based ADF? Or does it make more sense to just create a self-hosted ADF service either in AWS or in Azure to move the data into our data warehouse? Is it possible that Fabric would have a solution for this problem?

nittinjain · Dec 27, 2024 · Copper Contributor

Custom permission to enable diagnostic setting in Entra ID
Custom permissions don't work when I try to enable diagnostic settings in the Microsoft Entra ID portal. Error: "does not have authorisation to perform action 'microsoft.aadiam/diagnosticSettings/write' over scope '/providers/microsoft.aadiam/diagnosticSettings/resourcename'". These are the selective permissions that I applied to the user account. My approach is to use a custom role with specific permissions. I would appreciate your help to know the right permission required. Regards, Rajkumar

RajkumarRR · Dec 27, 2024 · Copper Contributor

Application Gateway WAFv2 Custom Rules disappeared.
Hello All, We have an AGW with WAFv2 running. A while back we were working on adding new custom rules, but after saving the new rule, all of our existing WAF custom rules were deleted. Checking with Azure support, we came to know that the save operation works as a PUT operation, which can update and/or delete the existing details. But we couldn't get a clear picture of what caused our rules to be deleted instead of the new rule being added. We are still exploring options to understand what could have caused this anomaly. Have any of you faced such a scenario? Any insights or suggestions are welcome and much appreciated.

Anusha_617 · Dec 24, 2024 · Copper Contributor

How to Sync Area and Iteration Paths Between Jira and Azure DevOps
Azure DevOps area and iteration paths do not have a direct replica on the Jira side. So to sync information between both systems, the area and iteration path data has to be mapped to a custom field in the Jira issue. For this to work, you need a customizable AI-powered integration solution like Exalate. This solution will help you generate the script for mapping paths and maintaining the relationships between the work item and the issue.

What is an Area Path?

An area path establishes a hierarchy for work items related to a specific project. It helps you group work items based on team, product, or feature. Organizations working on a product or feature can use area paths to establish a hierarchy between teams at every level of involvement. You can assign the same area path to multiple teams.

What is an Iteration Path?

An iteration path assigns work items at the project level based on time-related intervals. Teams can share them to keep track of ongoing projects, specifically for sprints, releases, and subreleases. When new work items are added to the Sprint backlog, they become accessible via the existing iteration path. You can add up to 300 iteration paths per team.

Sync Area and Iteration Paths: Jira to Azure DevOps Use Case

You can create a custom field in your Jira instance to reflect the data from the iteration and area paths. How does this help your organization? Syncing this data gives more context about the teams involved on the Azure DevOps side. It provides context about the timelines and stages of progress for the mapped projects and entities.

Primary Requirements

- Obtaining the right information from the API on both sides.
- Writing or generating the correct sync rules for both the incoming and outgoing data.
- Creating triggers to update the custom fields on Jira automatically.
- Fetching the right string from the area or iteration path.
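For the first requirement, obtaining the paths from the Azure DevOps side amounts to querying the classification nodes REST endpoint and flattening the returned node tree into backslash-separated path strings. A rough Python sketch (the organization, project, and tree below are hypothetical; the URL shape follows the Azure DevOps REST API, so verify the api-version against your instance):

```python
def area_nodes_url(organization: str, project: str, depth: int = 10) -> str:
    # Azure DevOps classification nodes endpoint; swap "Areas" for
    # "Iterations" to list iteration paths instead.
    return (f"https://dev.azure.com/{organization}/{project}/_apis/"
            f"wit/classificationnodes/Areas?$depth={depth}&api-version=7.1")

def flatten_paths(node: dict, prefix: str = "") -> list:
    # Turn the nested node tree from the API into flat
    # 'Project\\Team\\Component' style path strings.
    path = f"{prefix}\\{node['name']}" if prefix else node["name"]
    paths = [path]
    for child in node.get("children", []):
        paths.extend(flatten_paths(child, path))
    return paths

# Abridged, hypothetical response shape for a project "AzureProject":
tree = {"name": "AzureProject",
        "children": [{"name": "ExalateDev",
                      "children": [{"name": "Sync"}]}]}
print(flatten_paths(tree))
# ['AzureProject', 'AzureProject\\ExalateDev', 'AzureProject\\ExalateDev\\Sync']
```

The flattened strings are what ends up in the replica fields during the sync described below; Exalate handles this retrieval for you, so the sketch is only meant to show what the path data looks like.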
How Exalate Handles Jira to Azure DevOps Syncs

Exalate supports one-way and two-way integration between Jira and Azure DevOps, as well as with Zendesk, ServiceNow, Salesforce, GitHub, etc. Exalate also supports AI-powered Groovy scripting with the help of a chatbot. Users can also create trigger-based integrations for real-time syncs and bulk operations.

To use Exalate, first install it on both Jira and Azure DevOps. Since this use case requires scripting, you need to set up a connection in the Script Mode. To configure the sync, open Exalate in your Azure DevOps dashboard, go to the connection you want to edit, and click the "Edit connection" icon. You have two options:

- Outgoing sync (on the Azure DevOps side) refers to the data being sent over to Jira.
- Incoming sync (on the Jira side) refers to the data to be received from the work item on Azure DevOps.

Outgoing Sync (Azure DevOps): Send Area and Iteration Path Details from Azure DevOps to Jira

To send out the area and iteration paths from the Azure DevOps work item, use the code below:

```groovy
replica.areaPath = workItem.areaPath
replica.iterationPath = workItem.iterationPath
```

The replica retrieves the values of the area and iteration paths from the work item and saves them as strings. On the remote side, you can store the area/iteration path in a custom field of type string or select list.

Incoming Sync (Jira): Set Area Path from Azure DevOps as a Custom Field in Jira

Let's start with the area path. The area path starts with the name of the project. For example, an Azure DevOps project called AzureProject handled by Exalate's dev team could have an area path: AzureProject\\ExalateDev.

To set the area path based on the value received from the remote side text field, use the code below:

```groovy
issue.customFields."Area Path".value = replica.areaPath
```

The issue.customFields."Area Path".value expression stores the data retrieved from the work item in the designated custom field on Jira.
Incoming Sync (Jira): Set Iteration Path from Azure DevOps as a Custom Field in Jira

The iteration path shows the name of the project as well as the specific sprint. For example, an Azure DevOps project called AzureProject in the first sprint could have an iteration path: AzureProject\\Sprint1.

If you don't set the value for the Area field in the Sync Rules, Exalate uses the default area that has the same name as the project.

To set the iteration path based on the value received from the remote side text field, use the code below:

```groovy
issue.customFields."iPath".value = replica.iterationPath
```

The issue.customFields."iPath".value expression stores the data retrieved from the work item in the designated custom field on Jira.

Congratulations! You have successfully mapped the area and iteration path to a Jira custom field. If you still have questions or want to see how Exalate is tailored to your specific use case, drop a comment below or reach out to our engineers.

tejabhutada · Dec 23, 2024 · Copper Contributor

Peering Virtual Network Access to OpenAI resources?
In tenant A, I have an existing OpenAI resource that is set to be accessible only from a virtual network, in which a VM resides. The VM can connect to the OpenAI resource successfully. Now, to access other OpenAI resources from another tenant B, I have added a peering connection to connect the virtual network of the OpenAI resources in tenant B with the virtual network in tenant A (the typical 10.0.0.0-10.1.0.0 example). But when trying to call the API endpoint from the VM using REST or the JavaScript SDK, a 403 error occurs: "Access denied due to Virtual Network/Firewall rules". The peering is confirmed to be working through the network interface of the VM. Is linking to Azure OpenAI resources like this possible, or do we have to use a private endpoint?

henry_coding101 · Dec 23, 2024 · Copper Contributor

Diagrams of existing landing zone setup
Hello, is there anything that can help me draw out what I currently have under Azure?

EFFFFF · Dec 23, 2024 · Copper Contributor