Latest Discussions
Azure Boards: How to get the parent work item ID of a work item
I am using the work item details API (_apis/wit/workItems/{workItemId}?api-version=7.1). My requirement is to get the parent ID of the work item. As per my investigation, we have to check the relations [] array, but the response contains no such field. I have verified through the UI that the parent link is set correctly and shows up properly.

Posted by surendra-p, Dec 06, 2024 (Copper Contributor)
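For context on the question above: the work items API returns the relations[] array only when the request includes $expand=relations (or $expand=all), and the parent link is the relation whose rel value is System.LinkTypes.Hierarchy-Reverse. Below is a minimal C# sketch of that call; the organization, project, work item ID, and personal access token are placeholders.

// Minimal sketch: fetch a work item with its relations and find the parent ID.
// Organization, project, workItemId, and the personal access token are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

var org = "your-organization";            // placeholder
var project = "your-project";             // placeholder
var workItemId = 123;                     // placeholder
var pat = "your-personal-access-token";   // placeholder

using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}")));

// $expand=relations is what makes the relations[] array appear in the response.
var url = $"https://dev.azure.com/{org}/{project}/_apis/wit/workitems/{workItemId}?$expand=relations&api-version=7.1";
var json = await client.GetStringAsync(url);

using var doc = JsonDocument.Parse(json);
if (doc.RootElement.TryGetProperty("relations", out var relations))
{
    foreach (var rel in relations.EnumerateArray())
    {
        // The parent link uses the reverse hierarchy link type.
        if (rel.GetProperty("rel").GetString() == "System.LinkTypes.Hierarchy-Reverse")
        {
            var parentUrl = rel.GetProperty("url").GetString();
            var parentId = parentUrl.Substring(parentUrl.LastIndexOf('/') + 1);
            Console.WriteLine($"Parent work item id: {parentId}");
        }
    }
}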
How to sync Text, Date, and Dropdown Custom fields between Jira and Azure DevOps

Jira-Azure DevOps integration is the need of the day. Both platforms offer robust features, but when it comes to integrating data like custom fields, things can get tricky. This is where third-party integration tools can help, ensuring your custom data flows effortlessly between Jira and Azure DevOps. In this blog, we'll dive into a common use case: syncing custom fields between Jira and Azure DevOps. We'll explore the key custom field types (Text, Dropdown, and Date) and provide code snippets to illustrate how third-party apps can make this process straightforward.

Why sync custom fields?
Custom fields allow teams to capture unique data that goes beyond standard fields like summary or status. By syncing custom fields, you ensure that all teams, regardless of the platform they use, have access to the same critical information. This leads to better collaboration, reduced errors, and a unified workflow.

Setting up the environment for syncing custom fields between Jira and Azure DevOps
We will use a tool called Exalate for this article. Start by installing Exalate on Jira and Azure DevOps from their respective marketplaces and connect them using the Script mode. The Script mode has incoming and outgoing sync rules written as low-code Groovy scripts. You can edit them at both integrating ends to meet your sync requirements.

To send custom field data from a Jira issue field to its corresponding Azure DevOps work item field, you'd need to modify the outgoing sync rules of your Jira instance. You would also need to receive the data coming from Jira in some work item field. For this, you'd need to modify the incoming sync rules of your Azure DevOps instance. To do this the other way around, simply reverse the code in the incoming and outgoing sync scripts. That's all! Set an automatic trigger as the next step and see your data synced smoothly between the systems.

Custom fields sync: a deeper look

1. Jira to Azure DevOps text field sync
Text fields are commonly used to capture detailed information. Whether it's a technical description or customer feedback, syncing text fields ensures that all teams have a consistent narrative.

//Jira outgoing sync
replica.customFields."CF Name" = issue.customFields."CF Name"

//Azure DevOps incoming sync
workItem.customFields."CF Name".value = replica.customFields."CF Name".value

Here, we simply map the text field from Jira to Azure DevOps and vice versa. Exalate takes care of the data transformation and transmission.

2. Jira to Azure DevOps dropdown (select list) field sync
Dropdown or select list fields are perfect for categorizing information, like selecting a priority level or feature type. Syncing these ensures that the categorizations are consistent across platforms.

//Jira outgoing sync
replica.customFields."CF Name" = issue.customFields."CF Name"

//Azure DevOps incoming sync
workItem.customFields."CF Name".value = replica.customFields."CF Name".value.value

The code snippet captures the dropdown object with its values in one platform and assigns it to the corresponding field in the other.

3. Jira to Azure DevOps date field sync
Date fields are essential for tracking deadlines, milestones, and other time-sensitive data. Keeping dates in sync prevents miscommunications and scheduling conflicts.

//Jira outgoing sync
replica.customFields."CF Name" = issue.customFields."CF Name"

//Azure DevOps incoming sync
import java.text.SimpleDateFormat
....
def sdf = new SimpleDateFormat("yyyy-MM-dd hh:mm:ss")
def targetDate = sdf.format(new Date()) // note: this formats the current date; typically you would format the incoming replica date value instead
//throw new Exception("Target date = ${targetDate}") // uncomment to debug the formatted value
workItem."Microsoft.VSTS.Scheduling.StartDate" = targetDate

This snippet ensures that the date values are synchronized, keeping project timelines aligned.

Best practices for syncing custom fields between Jira and Azure DevOps
Consistent field types: Ensure that the custom fields in Jira and Azure DevOps have compatible types.
Field mapping: Carefully map the fields to avoid data loss or misinterpretation.
Testing: Always test the sync setup in a sandbox environment before deploying it in production.

Conclusion
Synchronizing custom fields between Jira and Azure DevOps can enhance your project's efficiency and transparency. It ensures all teams have access to the same data, regardless of the platforms they use. Ready to sync more data between Jira and Azure DevOps? Drop a comment and we can discuss.

Posted by teja1310, Nov 27, 2024 (Copper Contributor)
Issue with Speech-to-Text Integration in Azure Communication Services Using C#

Context: We are building a bot using Azure Communication Services (ACS) and Azure Speech Services to handle phone calls. The bot asks questions (via TTS) and captures user responses using speech-to-text (STT).

What We've Done:
Created an ACS instance and acquired an active phone number.
Set up an event subscription to handle callbacks for incoming calls.
Integrated Azure Speech Services for STT in C#.

Achievements:
Successfully connected calls using ACS.
Played TTS prompts generated from an Excel file.

Challenges:
User responses are not being captured.
Despite setting InitialSilenceTimeout to 10 seconds, the bot skips to the next question after 1–2 seconds without recognizing speech.
The bot does not reprompt the user even when no response is detected.

Help Needed:
How can we ensure accurate real-time speech-to-text capture during ACS telephony calls?
Are there better configurations or alternate approaches for speech recognition in ACS?

Additional Context:
Following the official ACS C# sample.
Using Azure Speech Services and ACS SDKs.

Code Snippet (C#):

// Recognize user speech
async Task<string> RecognizeSpeechAsync(CallMedia callConnectionMedia, string callerId, ILogger logger)
{
    // Configure recognition options
    var recognizeOptions = new CallMediaRecognizeSpeechOptions(
        targetParticipant: CommunicationIdentifier.FromRawId(callerId))
    {
        InitialSilenceTimeout = TimeSpan.FromSeconds(10), // Wait up to 10 seconds for the user to start speaking
        EndSilenceTimeout = TimeSpan.FromSeconds(5),      // Wait up to 5 seconds of silence before considering the response complete
        OperationContext = "SpeechRecognition"
    };

    try
    {
        // Start speech recognition
        var result = await callConnectionMedia.StartRecognizingAsync(recognizeOptions);

        // Handle recognition success
        if (result is Response<StartRecognizingCallMediaResult>)
        {
            logger.LogInformation($"Result: {result}");
            logger.LogInformation("Recognition started successfully.");

            // Simulate capturing response (replace with actual recognition logic)
            return "User response captured"; // Replace with actual response text from recognition
        }

        logger.LogWarning("Recognition failed or timed out.");
        return string.Empty; // Return empty if recognition fails
    }
    catch (Exception ex)
    {
        logger.LogError($"Error during speech recognition: {ex.Message}");
        return string.Empty;
    }
}
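A note that may explain the behavior described above: StartRecognizingAsync only starts the recognize operation, so the recognized text is not available in its return value; it arrives later as a RecognizeCompleted (or RecognizeFailed) event on the callback endpoint registered for the call. Below is a minimal illustrative sketch of handling those events with the Azure.Communication.CallAutomation SDK; the callback route, hosting scaffolding, and reprompt logic are assumptions for illustration, and exact type or property names may vary slightly between SDK versions.

// Hedged sketch: the recognized speech arrives via the callback webhook, not the StartRecognizingAsync return value.
// The route name ("/api/callbacks") and the reprompt logic are assumptions for illustration.
using Azure.Communication.CallAutomation;
using Azure.Messaging;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapPost("/api/callbacks", (CloudEvent[] cloudEvents, ILogger<Program> logger) =>
{
    foreach (var cloudEvent in cloudEvents)
    {
        CallAutomationEventBase parsedEvent = CallAutomationEventParser.Parse(cloudEvent);

        if (parsedEvent is RecognizeCompleted recognizeCompleted)
        {
            // For speech recognition the result should be a SpeechResult carrying the transcribed text.
            if (recognizeCompleted.RecognizeResult is SpeechResult speechResult)
            {
                logger.LogInformation("User said: {Text}", speechResult.Speech);
                // Continue the dialog here, e.g. play the next TTS prompt.
            }
        }
        else if (parsedEvent is RecognizeFailed recognizeFailed)
        {
            // Typically raised when InitialSilenceTimeout elapses with no speech.
            logger.LogWarning("Recognition failed, context: {Context}", recognizeFailed.OperationContext);
            // Reprompt the caller here instead of silently moving to the next question.
        }
    }
    return Results.Ok();
});

app.Run();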
Azure VMware (AVS) Cost Optimization Using the Azure Migrate Tool

What is AVS?
Azure VMware Solution provides private clouds that contain VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. Azure VMware Solution is available in Azure Commercial and Azure Government. The minimum initial deployment is three hosts, with the option to add more hosts, up to a maximum of 16 hosts per cluster. All provisioned private clouds have VMware vCenter Server, VMware vSAN, VMware vSphere, and VMware NSX. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds.
Learn more: https://learn.microsoft.com/en-us/azure/azure-vmware/introduction

What is the Azure Migrate tool?
Azure Migrate is a comprehensive service designed to help you plan and execute your migration to Azure. It provides a unified platform to discover, assess, and migrate your on-premises resources, including servers, databases, web apps, and virtual desktops, to Azure. The tool offers features like dependency analysis, cost estimation, and readiness assessments to ensure a smooth and efficient migration process.
Learn more: https://learn.microsoft.com/en-us/azure/migrate/migrate-services-overview

How can Azure Migrate be used to discover and assess AVS?
Azure Migrate enables the discovery and assessment of Azure VMware Solution (AVS) environments by collecting inventory and performance data from on-premises VMware environments, either through direct integration with vCenter (via the appliance) or by importing data from tools like RVTools. Using Azure Migrate, organizations can analyze the compatibility of their VMware workloads for migration to AVS, assess costs, and evaluate performance requirements. The process involves creating an Azure Migrate project, discovering VMware VMs, and generating assessments that provide insights into resource utilization, right-sizing recommendations, and estimated costs in AVS. This streamlined approach helps plan and execute migrations effectively while ensuring workloads are optimized for the target AVS environment.
Note: We will be walking through the RVTools import method in this article.

What is RVTools?
RVTools is a lightweight, free utility designed for VMware administrators to collect, analyze, and export detailed inventory and performance data from VMware vSphere environments. Developed by Rob de Veij, RVTools connects to vCenter or ESXi hosts using VMware's vSphere Management SDK to retrieve comprehensive information about the virtual infrastructure.

Key features of RVTools:
Inventory management: Provides detailed information about virtual machines (VMs), hosts, clusters, datastores, networks, and snapshots. Includes details like VM names, operating systems, IP addresses, resource allocations (CPU, memory, storage), and more.
Performance insights: Offers visibility into resource utilization, including CPU and memory usage, disk space, and VM states (e.g., powered on/off).
Snapshot analysis: Identifies unused or orphaned snapshots, helping to optimize storage and reduce overhead.
Export to Excel: Allows users to export all collected data into an Excel spreadsheet (.xlsx) for analysis, reporting, and integration with tools like Azure Migrate.
Health checks: Identifies configuration issues, such as disconnected hosts, orphaned VMs, or outdated VMware Tools versions.
User-friendly interface: Displays information in tabular form across multiple tabs, making it easy to navigate and analyze specific components of the VMware environment.
Hands-on Lab

Disclaimer: The data used for this lab has no relationship with real-world scenarios. This sample data is self-created by the author and is purely for understanding the concept.

To discover and assess your Azure VMware Solution (AVS) environment using an RVTools extract report in the Azure Migrate tool, follow these steps.

Prerequisites
RVTools setup: Download and install RVTools from the official website. Ensure connectivity to your vCenter server. Extract the data by running RVTools and saving the output as an Excel (.xlsx) file.
Permissions: You need at least the Contributor role on the Azure Migrate project. Ensure that you have appropriate permissions in your vCenter environment to collect inventory and performance data.
File requirements: The RVTools file must be saved in .xlsx format without renaming or modifying the tabs or column headers.
Note: Sample sheet: Please check the attachment included with this article. Note that this is not the complete format; some tabs and columns have been removed for simplicity. During the actual discovery and assessment process, please do not modify the tabs or columns.

Procedure

Step 1: Export data from RVTools
Follow the steps provided on the official website to get the RVTools extract.

Step 2: Discover
Log in to the Azure portal.
Navigate to Azure Migrate and select your project, or create a new project.
Under Migration goals, select Servers, databases and web apps.
On the Azure Migrate | Servers, databases and web apps page, under Assessment tools, select Discover and then select Using import.
On the Discover page, in File type, select VMware inventory (RVTools XLSX).
In the Step 1: Import the file section, select the RVTools XLSX file and then select Import. Wait for the import to finish.
Once the import completes, check for error messages, rectify any issues, and re-upload; otherwise wait 10-15 minutes for the imported VMs to appear in the discovery.
Post discovery
Reference link: https://learn.microsoft.com/en-us/azure/migrate/vmware/tutorial-import-vmware-using-rvtools-xlsx?context=%2Fazure%2Fmigrate%2Fcontext%2Fvmware-context

Step 3: Assess
After the upload is complete, navigate to the Servers tab.
Click Assess --> Azure VMware Solution to assess the discovered machines.
Edit the assessment settings based on your requirements and save:
Target region: Select the Azure region for the migration.
Node type: Specify the Azure VMware Solution series (e.g., AV36, AV36P).
Pricing model: Select pay-as-you-go or reserved instance pricing.
Discount: Specify any available discounts.
Note: We explain all the parameters in the Optimize step. For now, just review them and leave the parameters as they are.
In Assess Servers, select Next.
In Select servers to assess > Assessment name, specify a name for the assessment.
In Select or create a group, select Create New and specify a group name.
Select the appliance and select the servers you want to add to the group. Then select Next.
In Review + create assessment, review the assessment details, and select Create Assessment to create the group and run the assessment.

Step 4: Review the assessment
View an assessment:
In Windows, Linux and SQL Server > Azure Migrate: Discovery and assessment, select the number next to Azure VMware Solution.
In Assessments, select an assessment to open it.
As an example (the estimates and costs shown are for illustration only):
Review the assessment summary. You can select Sizing assumptions to understand the assumptions that went into node sizing and resource utilization calculations. You can also edit the assessment properties or recalculate the assessment.

Step 5: Optimize
We have produced a report without any optimization in the previous steps. Now we can follow the steps below to optimize the cost and node count even further.

High-level steps:
Find the limiting factor.
Find which component in the settings is mapped for optimization, depending on the limiting factor.
Try to adjust the mapped component according to your scenario and comfort level.

Find the limiting factor:
First, understand which component (CPU, memory, or storage) is deciding your ESXi node count. This will be highlighted in the report. The limiting factor shown in assessments could be CPU, memory, or storage resources, based on the utilization on nodes. It is the resource that is limiting, or determining, the number of hosts/nodes required to accommodate the workloads. For example, if an assessment found that after migrating 8 VMware VMs to Azure VMware Solution, 50% of CPU resources, 14% of memory, and 18% of storage would be utilized on the 3 AV36 nodes, then CPU is the limiting factor.

Find which option in the settings can be used to optimize:
This depends on the limiting factor. For example, if the limiting factor is CPU, you have a high CPU requirement, and CPU oversubscription can be used to optimize the ESXi node count. Likewise, if storage is the limiting factor, editing the FTT or RAID settings, or introducing external storage like ANF, will help you reduce the node count. Even reducing the node count by one will create a huge impact in dollar value.

Let's understand how overcommitment or oversubscription works with a simple example. Suppose I have two VMs with the specifications below:

Name    CPU       Memory   Storage
VM1     9 vCPU    200 GB   500 GB
VM2     4 vCPU    200 GB   500 GB
Total   13 vCPU   400 GB   1000 GB

We have an ESXi node with the following capacity:

vCPU      10
Memory    500 GB
Storage   1024 GB

Without optimization, I need two ESXi nodes to accommodate the total requirement of 13 vCPU. But suppose VM1 and VM2 don't consume their entire capacity all the time, and the total capacity in use at any one time never goes beyond 10 vCPU. Then I can accommodate both VMs on the same ESXi node, and hence reduce my node count and cost. In other words, it is possible to share resources between both VMs.

(Diagrams in the original post: node layout without optimization vs. with optimization.)

Parameters affecting sizing and pricing

CPU oversubscription: Specifies the ratio of the number of virtual cores tied to one physical core in the Azure VMware Solution node. The default value in the calculations is 4 vCPU : 1 physical core in Azure VMware Solution. API users can set this value as an integer. Note that vCPU oversubscription > 4:1 may impact workloads depending on their CPU usage.

Memory overcommit factor: Specifies the ratio of memory overcommit on the cluster. A value of 1 represents 100% memory use, 0.5 for example is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10, up to one decimal place.

Deduplication and compression factor: Specifies the anticipated deduplication and compression factor for your workloads. The actual value can be obtained from on-premises vSAN or storage configurations. These vary by workload. A value of 3 would mean 3x, so for a 300 GB disk only 100 GB of storage would be used. A value of 1 would mean no deduplication or compression.
You can only add values from 1 to 10, up to one decimal place.

FTT: How many device failures can be tolerated for a VM.
RAID: RAID stands for Redundant Array of Independent Disks and describes how data should be stored for redundancy.
Mirroring: Data is duplicated as-is to another disk. E.g., to protect a 100 GB VM object by using RAID-1 (Mirroring) with an FTT of 1, you consume 200 GB.
Erasure coding: Erasure coding divides data into chunks and calculates parity information (redundant data) across multiple storage devices. This allows data reconstruction even if some chunks are lost, similar to RAID, but is typically more space-efficient. E.g., to protect a 100 GB VM object by using RAID-5 (Erasure Coding) with an FTT of 1, you consume 133.33 GB.

Comfort factor: Azure Migrate considers a buffer (comfort factor) during assessment. This buffer is applied on top of server utilization data for VMs (CPU, memory, and disk). The comfort factor accounts for issues such as seasonal usage, a short performance history, and likely increases in future usage. For example, a 10-core VM with 20% utilization normally results in a 2-core VM. However, with a comfort factor of 2.0x, the result is a 4-core VM instead.

AVS SKU sizes

Optimization result
In this example we found that CPU was the limiting factor, so I adjusted the CPU oversubscription value from 4:1 to 8:1:
Reduced the node count from 6 (3 AV36P + 3 AV64) to 5 AV36P.
Reduced cost by 31%.

Note: Over-provisioning or over-committing can put your VMs at risk. However, in Azure Cloud, you can create alarms to warn you of unexpected demand increases and add new ESXi nodes on demand. This is the beauty of the cloud: if your resources are under-provisioned, you can scale up or down at any time. Running your resources in an optimized environment not only saves your budget but also allows you to allocate funds for more innovative ideas.

Posted by Aaida_Aboobakkar, Nov 22, 2024 (Microsoft)
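To make the arithmetic behind these settings concrete, here is a small illustrative C# sketch that reproduces the worked numbers above: the two-VM example fitting on one 10-vCPU host once oversubscription is allowed, the 100 GB object consuming 200 GB under RAID-1/FTT=1 versus roughly 133 GB under RAID-5/FTT=1, and the 10-core VM at 20% utilization sized to 4 cores with a 2.0x comfort factor. The formulas are simplified assumptions for illustration only, not the exact sizing logic Azure Migrate uses.

using System;

// Illustrative, simplified sizing math; not Azure Migrate's exact algorithm.

// CPU: effective host capacity grows with the oversubscription ratio.
double physicalCoresPerNode = 10;      // example host from the article
double oversubscription = 1.0;         // 1:1 -> 13 vCPU needs 2 nodes
double requiredVCpu = 9 + 4;           // VM1 + VM2
double nodesForCpu = Math.Ceiling(requiredVCpu / (physicalCoresPerNode * oversubscription));
Console.WriteLine($"Nodes needed at {oversubscription}:1 oversubscription: {nodesForCpu}");   // 2

oversubscription = 4.0;                // default of 4 vCPU per physical core
nodesForCpu = Math.Ceiling(requiredVCpu / (physicalCoresPerNode * oversubscription));
Console.WriteLine($"Nodes needed at {oversubscription}:1 oversubscription: {nodesForCpu}");   // 1

// Storage: raw capacity consumed for a 100 GB object under different FTT/RAID policies.
double objectGb = 100;
Console.WriteLine($"RAID-1, FTT=1: {objectGb * 2.0} GB");          // mirroring doubles the footprint -> 200 GB
Console.WriteLine($"RAID-5, FTT=1: {objectGb * 4.0 / 3.0:F2} GB"); // erasure coding adds ~33% parity -> 133.33 GB

// Comfort factor: buffer applied on top of observed utilization.
double vmCores = 10, utilization = 0.20, comfortFactor = 2.0;
double sizedCores = Math.Ceiling(vmCores * utilization * comfortFactor);
Console.WriteLine($"Sized cores with {comfortFactor}x comfort factor: {sizedCores}");         // 4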
Azure MVP Extended Benefit

I have redeemed my Azure MVP extended benefit, and the subscription was processed using my credit card. I received the confirmation email, but the credits have not been reflected in my account yet. I also double-checked the Azure Extended Benefit canvas, which shows a message indicating that the benefits have already been redeemed.

Posted by Salamat_Shah, Nov 17, 2024 (Brass Contributor)
Announcing the winners of the October Innovation Challenge Hackathon

Congratulations and felicidades to everyone who worked on a project in our recent Innovation Challenge hackathon! And especially to the teams that were awarded by the judges!

The Innovation Challenge program is designed to grow the number of developers from groups who are underrepresented in technology and to provide an opportunity to showcase their abilities building AI solutions on Azure. In order to qualify for an invitation to the hackathon, developers got support and training from the organizations sponsored by the program (BITE-CON, Blacks in Technology, Código Facilito, GenSpark, TechBridge, Women in Cloud) and they had to get a Microsoft Applied Skills credential or one of these Azure certifications: Azure AI Engineer Associate, Azure Developer Associate, Azure Data Scientist Associate.

After an intense period of skilling and preparation, developers came together in teams to solve AI use cases put forward by Azure customers. The winning projects worked on these real-world enterprise challenges:

Data sensitivity: How can you design a system to automatically identify sensitive information, including PII (e.g., names, social security numbers) and PHI (e.g., medical records), within the data uploaded by the user or provided in prompts?

Token use and cost monitoring: By enabling scenarios such as creating SQL queries from natural language prompts, companies are now able to empower entire business units in ways that would have required dedicated specialists in the past. How can businesses easily provide a way for teams to control their costs and not have to deal with unexpectedly large expenses?

Hacking the analog knowledge base: Computer vision and document intelligence solutions have dramatically increased our ability to access and work with data captured in printed documents. But there is still room for innovation in this space. How can we accelerate and improve on these trends?

Overall we were impressed by the number of really strong projects and the quality of the presentations. We're sure that every team that submitted a project will be doing epic stuff in the near future! But here are the projects awarded by the judges.

First place
SafeDocs AI: designed to clean sensitive personal data, it ensures that sensitive information is protected while maintaining document usability for tasks like analysis, AI training, or reporting.

Second place
DataVerse Interactive & Efficient Chat: empowers HR teams without deep technical expertise to gain direct access to valuable insights, simplifying data-driven decision-making through an engaging and efficient chat interface.
AegisScan: scans and detects PII and PHI in real time to provide the best recommendations across the private and federal sectors.

Third place
SafeHealthAI: an all-encompassing tool that offers risk prediction, key metric visualization, and sensitive information protection, empowering healthcare professionals with a secure and reliable solution.
TUSK (Trusted Utility for Statutory Knowledge): ensures that legal documentation strictly adheres to all applicable laws, rules, and regulations.
EcoHealth: combines real-time air quality data with your health profile to provide personalized recommendations and protect your health, especially if you have specific conditions.

We'll be kicking off the next Innovation Challenge hackathon in December! Looking forward to getting inspired by what this community can do!

Posted by macalde, Nov 14, 2024 (Microsoft)
PL-300 exam

I recently cleared my PL-300 certification within one week of preparation, on my first attempt, with a score of 930. I hope my experience can help you in your preparation journey. I took an online course and watched several YouTube videos related to this certification. I also reviewed various FAQs to gain a deeper understanding. The most significant help in my preparation was practicing exam questions from ITExamsPro; I highly recommend them. These tests contain verified answers that help you understand the concepts in depth, and they are very similar to the actual exam. I found that around 80% of the questions appeared in my real exam. These PL-300 questions come with detailed explanations for each question, which was invaluable for my preparation. They also offer exam notes that highlight important topics, which is also quite helpful.

Posted by jimmytheSer, Nov 09, 2024 (Copper Contributor)
Blog about Azure Language Studio: A Gateway to Powerful Language AI

Azure Language Studio is Microsoft's cutting-edge platform that allows developers, businesses, and enthusiasts to harness the power of advanced language models. Designed to make natural language processing (NLP) more accessible, Azure Language Studio provides robust tools and integrations that cater to a range of tasks, from sentiment analysis and translation to more nuanced language understanding and summarization.

Full blog: https://dellenny.com/azure-language-studio-a-gateway-to-powerful-language-ai/
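For readers who want to try one of the capabilities mentioned above from code rather than the Studio UI, here is a short illustrative C# sketch of a sentiment analysis call using the Azure.AI.TextAnalytics client library; the endpoint, key, and sample text are placeholders, and this is a minimal sketch rather than anything taken from the linked blog.

using System;
using Azure;
using Azure.AI.TextAnalytics;

// Placeholders: use your own Language resource endpoint and key.
var endpoint = new Uri("https://<your-language-resource>.cognitiveservices.azure.com/");
var credential = new AzureKeyCredential("<your-key>");

var client = new TextAnalyticsClient(endpoint, credential);

// Analyze the sentiment of a single document.
DocumentSentiment result = client.AnalyzeSentiment("Azure Language Studio made our review pipeline much easier to build.");

Console.WriteLine($"Overall sentiment: {result.Sentiment}");
Console.WriteLine($"Positive: {result.ConfidenceScores.Positive:F2}, " +
                  $"Neutral: {result.ConfidenceScores.Neutral:F2}, " +
                  $"Negative: {result.ConfidenceScores.Negative:F2}");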
Former Employer Abuse

My former employer, Albert Williams, president of American Security Force Inc., keeps adding my Outlook accounts, computers, and mobile devices to the company's Azure cloud even though I left the company more than a year ago. What can I do to remove myself from his grip? Does Microsoft have a solution against abusive employers?

Posted by varle-vs-asf, Nov 06, 2024 (Occasional Reader)
Ensuring Safe VM Deletion in VMSS: Process Completion Verification Before Scaling Down

Hi everyone, good morning! I'm working on setting up a logic flow in Power Automate that will delete VMs from my Virtual Machine Scale Set (VMSS) when we have a low number of items to process. I've already managed to create logic that checks the number of items in my database and, based on that, I can successfully increase or decrease the capacity of my VMSS; this part is working well.

The issue is that when reducing resources, I need to check whether the VM I intend to delete is still running any flow (to avoid the risk of deleting it while it's mid-process). I think I can determine this by looking at some table in Dataverse or something similar. Now, my goal is to delete certain VMs. Is it possible to ensure that the VM completes whatever it's currently executing before deletion? This way, I could be sure that the VM has finished its task before proceeding with deletion.

I know other automation platforms offer options like "Immediate STOP," which stops the VM immediately, and "Request STOP," which essentially means "finish what you're doing, then stop." This would apply when I decrease the number of instances, right? Do you think I could achieve something like this via Power Automate or Azure?

Posted by experi18, Oct 28, 2024 (Copper Contributor)
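One Azure-side building block that may help here is VMSS instance scale-in protection: an instance flagged with protectFromScaleIn is skipped when the scale set scales in, so a flow could protect the VM while it is mid-process and clear the flag once it finishes (the same setting is also exposed through the portal, CLI, and SDKs). Below is a hedged C# sketch that sets the flag through the ARM REST API; the api-version, resource names, HTTP verb, and token acquisition are illustrative assumptions to verify against the current REST reference, not a tested recipe.

// Hedged sketch: mark a single VMSS instance as protected from scale-in while it is busy.
// Subscription, resource group, scale set name, instance id, and api-version are placeholders/assumptions.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using Azure.Core;
using Azure.Identity;

var subscriptionId = "<subscription-id>";
var resourceGroup = "<resource-group>";
var scaleSetName = "<vmss-name>";
var instanceId = "<instance-id>";

var credential = new DefaultAzureCredential();
AccessToken token = await credential.GetTokenAsync(
    new TokenRequestContext(new[] { "https://management.azure.com/.default" }), default);

using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);

var url = $"https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}" +
          $"/providers/Microsoft.Compute/virtualMachineScaleSets/{scaleSetName}/virtualMachines/{instanceId}" +
          "?api-version=2024-03-01"; // assumed api-version; check the current one for your environment

// Only the protection policy is sent here; shown as a PATCH with a partial body, but depending on the
// api-version the service may require a PUT of the full instance model (assumption to verify).
// Set protectFromScaleIn back to false once the VM is idle so it can be removed on the next scale-in.
var body = new StringContent(
    "{\"properties\":{\"protectionPolicy\":{\"protectFromScaleIn\":true}}}",
    Encoding.UTF8, "application/json");

var response = await client.PatchAsync(url, body);
Console.WriteLine($"Protection update status: {(int)response.StatusCode}");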
Tags
- azure (2,201 Topics)
- Azure DevOps (1,383 Topics)
- Data & Storage (379 Topics)
- Networking (223 Topics)
- Azure Friday (219 Topics)
- App Services (193 Topics)
- blockchain (168 Topics)
- devops (145 Topics)
- Security & Compliance (137 Topics)
- analytics (128 Topics)