Latest Discussions
Getting Started with OpenAI Whisper on Azure
In March 2024, OpenAI Whisper for Azure became generally available; you can read the announcement here. From the documentation: "The Whisper model is a speech to text model from OpenAI that you can use to transcribe (and translate) audio files. The model is trained on a large dataset of English audio and text. The model is optimized for transcribing audio files that contain speech in English. The model can also be used to transcribe audio files that contain speech in other languages. The output of the model is English text." At this time, translation from 57 languages is supported. I want to take this time to cover a few topics to help you make sense of the two flavors available to you: the Azure AI Speech service and the Azure OpenAI Service.

To get started, the documentation referenced above includes a table of use cases for Whisper for Azure vs. the Azure AI Speech model, and a supportability matrix at the following link gives you an idea of when to choose each service. I will call out from the documentation that there are limitations to the Azure OpenAI Whisper model.

The Whisper model via Azure OpenAI Service might be best for:
- Quickly transcribing audio files one at a time
- Translating audio from other languages into English
- Providing a prompt to the model to guide the output
- Supported file formats: mp3, mp4, mpeg, mpga, m4a, wav, and webm

The Whisper model via Azure AI Speech might be best for:
- Transcribing files larger than 25 MB (up to 1 GB); the file size limit for the Azure OpenAI Whisper model is 25 MB
- Transcribing large batches of audio files
- Diarization to distinguish between the different speakers participating in the conversation. The Speech service provides information about which speaker was speaking a particular part of the transcribed speech; the Whisper model via Azure OpenAI doesn't support diarization.
- Word-level timestamps
- Supported file formats: mp3, wav, and ogg
- Customization of the Whisper base model to improve accuracy for your scenario (coming soon)

Getting started with a Python sample for OpenAI Whisper on Azure

We do have a sample doc for this; however, as many data scientists know, packages move fast and change often. As of this writing, the following code sample works with the OpenAI package > 1, and the api_version is correct as of this writing; I will keep this blog updated with any necessary changes for future versions. I will collect some samples and publish them to a GitHub repository and link them here in the near future.

In the meantime, read the prerequisites here and get started. You will need an Azure subscription, access to the Azure OpenAI Service in your subscription, and a Whisper model deployment. I will not comment on region availability, as it is constantly expanding, but you can keep an eye on this page to keep up with it. Once you have your deployment created, copy the endpoint URL and one of the two Azure OpenAI keys from the Resource Management section of your Azure OpenAI resource.

This code sample will read in an audio file from local disk; a variety of audio samples can be found in the Azure Speech SDK GitHub repo.
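A minimal sketch of such a sample with the v1+ OpenAI package follows; the api_version, the deployment name ("whisper"), the file name, and the environment variable names are placeholders to adapt to your own resource:

```python
import os
from openai import AzureOpenAI  # requires the openai package >= 1.0

# The endpoint URL and key come from the Resource Management section of
# your Azure OpenAI resource; both are read from environment variables here.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # placeholder: use the version current for your resource
)

# Translate a local audio file to English text with your Whisper deployment.
with open("wikipediaOcelot.wav", "rb") as audio_file:  # placeholder file name
    result = client.audio.translations.create(
        model="whisper",  # placeholder: your *deployment* name, not the model family
        file=audio_file,
    )

print(result)  # prints a Translation object, e.g. Translation(text='...')
```

Swap `client.audio.translations.create` for `client.audio.transcriptions.create` if you want same-language transcription rather than translation to English.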
The result you get from one of the samples will look something like this:

Translation(text='As soon as the test is completed, which is displayed successfully by the status change, you will receive a VIN number for both models in your test. Click on the test name to show the page with the test details. This detail page lists all statements in your data set and shows the recognition results of the two models next to the transcription from the transmitted data set. For a simpler examination of the opposite, you can activate or deactivate various error types such as additions, deletions and substitutions.')

Using Azure AI Speech

The alternative option is to use the Azure OpenAI Whisper model in the Azure AI Speech service. The Azure AI Speech service offers a lot of capability: captioning, audio content creation, and transcription, as well as real-time speech to text and text to speech. If you have been using the Azure AI Speech service, you likely have much of the code written to take advantage of the Azure OpenAI Whisper model; there is a migration guide to move you from REST API v3.1 to v3.2, which supports the Whisper model.

You can provide multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions and transcribes the files concurrently, which reduces the turnaround time. If you are using this as part of a batch transcription process, you can find the documentation here. The most important note about making sure you are using the Whisper model is to use version 3.2 of the batch transcription REST API and select a Whisper base model, keeping in mind the region availability linked here.
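As a rough illustration of what the v3.2 batch transcription call looks like, here is a minimal sketch using the requests package; the region, key, storage URL, and model ID are all placeholders, and the exact property names should be checked against the v3.2 reference before use:

```python
import requests

region = "eastus"  # placeholder: your Speech resource region
key = "<your-speech-resource-key>"  # placeholder

base = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.2"

body = {
    "displayName": "Whisper batch transcription",
    "locale": "en-US",
    # One or more audio file URLs; alternatively, set "contentContainerUrl"
    # to point at a whole Azure Blob Storage container.
    "contentUrls": ["https://<storage-account>.blob.core.windows.net/audio/sample.wav"],
    # Select a Whisper base model; available model IDs can be listed with
    # a GET request against {base}/models/base.
    "model": {"self": f"{base}/models/base/<whisper-model-id>"},
}

response = requests.post(
    f"{base}/transcriptions",
    json=body,
    headers={"Ocp-Apim-Subscription-Key": key},
)
response.raise_for_status()
transcription = response.json()

# Poll the returned "self" URL until status is "Succeeded"; the result
# files are then listed under its "files" endpoint.
print(transcription["self"], transcription["status"])
```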
I hope this article has helped you determine which service is right for you. Keep in mind that all Azure AI services are fast moving, so keep an eye on the docs linked in this post as well as the constantly expanding Microsoft Learn site.

shepsheppard · Dec 16, 2025 · Former Employee · 9.3K Views · 3 likes · 0 Comments

Azure monthly newsletters are now on the Partner News blog!

Go to our Partner News blog and click the tag "Azure News" to catch up on all our past monthly Azure newsletters. You can click the follow button in the top right corner to receive notifications when the next newsletter is released, ensuring you never miss an update again! Come on in and join the conversation! -Jill

JillArmourMicrosoft · Dec 16, 2025 · Community Manager · 32 Views · 0 likes · 0 Comments

This Azure Cosmos DB discussion board will be migrating into the Azure partners board on December 12, 2025.
Hello, Partners! Please note this discussion board will be merged into our Azure Partners discussion board on Friday, December 12, 2025. Please follow the new board and subscribe to the Azure Cosmos DB tag to get notified of new posts on this topic! 😃

JillArmourMicrosoft · Dec 12, 2025 · Community Manager · 38 Views · 0 likes · 0 Comments

Migrate and Modernize Summit! Sept 23-24, 2025!
Partners, mark your calendars! On September 23-24, Microsoft is hosting the Migrate and Modernize Summit, a digital event to help you and your customers accelerate cloud migrations to Azure and simplify modernization. This event is appropriate for both partners and customers. As a partner, you will hear important product announcements that will enable you to:
- Transform your migration and modernization practice: leverage AI to simplify and accelerate migrations and modernization to Azure, unlocking new projects and revenue.
- Drive new leads: inspire customers to engage in an assessment using your partner organization and services.
- Unlock Azure Accelerate funding: activate Microsoft investments and partner benefits that drive your partner services (specialized partners).

Your customers will learn how to make their cloud migration and modernization a reality with:
- Keynote presentations from Microsoft leaders such as Scott Guthrie, Jeremy Winter, Amanda Silver, and Cyril Belikoff
- Breakout tracks tailored for decision-makers and practitioners to see migration and modernization agents in action, as well as local tracks to go deeper on specific topics
- Customer stories including Sanofi, Thomson Reuters, and OneDigital
- Live demos that bring to life the latest innovations around AI-assisted tooling and products

Read the full blog here

JillArmourMicrosoft · Sep 12, 2025 · Community Manager · 49 Views · 0 likes · 0 Comments

ICYMI: Upcoming Azure Accelerate Partner Webinar on September 25!
We're excited to take the next step in the launch of Azure Accelerate by showing our ecosystem how partners are already seeing success with it! Azure Accelerate is Microsoft's unified offering that brings together Azure Migrate and Modernize with Azure Innovate, built on the foundations of Azure Essentials. Azure Accelerate gives partners access to trusted experts, significant investments, and comprehensive coverage for every stage of the cloud and AI journey. Learn more!

JillArmourMicrosoft · Sep 12, 2025 · Community Manager · 40 Views · 0 likes · 0 Comments

Building Resilient Data Systems with Microsoft Fabric
Introduction

Ensuring continuous availability and data integrity is paramount for organizations. This article focuses exclusively on resiliency within Microsoft Fabric, covering high availability (HA), disaster recovery (DR), and data protection strategies. We will explore Microsoft Fabric's resiliency features, including Recovery Point Objective (RPO) and Recovery Time Objective (RTO) considerations, and outline mechanisms for recovering from failures in both pipeline and streaming scenarios. As of April 25, 2025, this information reflects the current capabilities of Microsoft Fabric. Because features evolve rapidly, consult the Microsoft Fabric roadmap for the latest updates.

Service Resiliency in Microsoft Fabric

Microsoft Fabric leverages Azure's infrastructure to ensure continuous service availability during hardware or software failures.

Availability Zones

Fabric uses Azure Availability Zones, physically separate datacenters within an Azure region, to automatically replicate resources across zones. This enables seamless failover during a zone outage, without manual intervention. As of Q1 2025, Fabric provides partial support for zone redundancy in selected regions and services. Customers should refer to service-specific documentation for detailed HA guarantees.

Cross-Region Disaster Recovery

For protection against regional failures, Microsoft Fabric offers partial support for cross-region disaster recovery. The level of support varies by service:
- OneLake data: OneLake supports cross-region data replication in selected regions. Organizations can enable or disable this feature based on their business needs. For more information, see Disaster recovery and data protection for OneLake.
- Power BI: Power BI includes built-in DR capabilities, with automatic data replication across regions to ensure high availability. For frequently asked questions, review the Power BI high availability, failover, and disaster recovery FAQ.

Data Resiliency: RPO and RTO Considerations

Fabric offers configurable storage redundancy options, namely Locally Redundant Storage (LRS), Zone-Redundant Storage (ZRS), and Geo-Redundant Storage (GRS), each with different RPO/RTO targets. Detailed definitions and SLAs are available in the Azure Storage redundancy documentation.

Recovering from Failed Processes

Failures can occur in both pipeline and streaming workloads. Microsoft Fabric provides tools and strategies for minimizing disruption.

Data Pipelines

In Data Factory within Fabric, pipelines are made up of activities that may fail due to source issues or transient network errors. Zone failures are typically handled like standard pipeline errors, while regional failures require manual intervention; see the Microsoft Fabric disaster recovery experience for a brief discussion of specific guidance. Pipeline resiliency can be improved by implementing retry policies, configuring error-handling blocks, and monitoring execution status using Fabric's built-in logging features.

Streaming Scenarios

Spark Structured Streaming: Fabric leverages Apache Spark for real-time processing. Spark Structured Streaming includes built-in checkpointing (sketched below), but seamless failover depends on cluster configuration, and manual intervention can be required to resume tasks after node or regional failures.

Eventstream: Eventstream simplifies streaming data ingestion, but users should currently assume manual steps may be needed for fault recovery.
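To make restarts deterministic after a failure, Spark Structured Streaming persists its progress to a checkpoint location. Here is a minimal sketch of the pattern as it might look in a Fabric notebook; the checkpoint path and table name are placeholders, and the rate source merely stands in for a real event source:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("checkpointed-stream").getOrCreate()

# Demo source: a synthetic rate stream; in practice this would be your
# Eventstream, Kafka, or Event Hubs source.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

query = (
    events.writeStream.format("delta")
    .outputMode("append")
    # The checkpoint stores offsets and state. Restarting the same query with
    # the same checkpointLocation resumes from the last committed batch
    # instead of reprocessing (or losing) data.
    .option("checkpointLocation", "Files/checkpoints/rate_demo")  # placeholder path
    .toTable("rate_demo")  # placeholder lakehouse table name
)

query.awaitTermination()
```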
Monitoring and Alerting

Microsoft Fabric integrates with tools such as Azure Monitor and Microsoft Defender for Cloud, allowing administrators to track availability metrics and configure alerts. Regular monitoring helps detect anomalies early and ensures that resiliency strategies remain effective.

Data Loss Prevention (DLP)

As of March 2025, Microsoft Purview extends DLP policy enforcement to Fabric and Power BI workspaces. Organizations can define policies to automatically identify, monitor, and protect sensitive data across the Microsoft ecosystem. For more information, review Purview Data Loss Prevention.

Cost Considerations

Enhancing resiliency can increase costs. Key considerations include:
- Geo-redundancy: While cross-region replication improves resiliency, it also increases storage and transfer costs. Assess which workloads require GRS based on criticality.
- Egress charges: Transferring data across regions can generate egress fees. Co-locating compute and storage within the same region helps minimize these charges.
- Pipeline CU consumption: Data movement and orchestration in Fabric consume Capacity Units (CUs). Cross-region data movement may take longer and therefore result in higher CU usage and additional cost.

Understanding these costs helps optimize both performance and budget.

Enabling Disaster Recovery for Fabric Capacities

Disaster recovery must be enabled per Fabric capacity and can be configured through the Admin Portal. Make sure to enable DR for each capacity that requires protection. For setup details, learn how to manage your Fabric capacity for DR.

Conclusion

Microsoft Fabric offers a robust set of features for building resilient data systems. By leveraging its high availability, disaster recovery, and monitoring capabilities, and aligning them with cost-aware planning, organizations can ensure operational continuity and safeguard critical data. For ongoing updates, monitor the Microsoft Fabric documentation and consider subscribing to the Fabric blog for the latest announcements.

1.2K Views · 1 like · 0 Comments

Register now for the Migrate to Innovate Summit
Join the summit on March 11, presented in partnership with Intel. Stay agile, innovate for the future, and maintain a competitive edge by accelerating your cloud migration and modernization journey. Microsoft thought leaders will discuss the latest news and trends, showcase real-world case studies, and share how Azure can help you fully embrace AI. Join us to:
- Maximize business value and build the foundation for successful innovation by leveraging the latest Azure and Intel capabilities for your workloads.
- Dive into case studies and real-world examples showcasing how organizations have successfully transformed their business, and how you can be next by migrating and modernizing on Azure.
- Make sure your cloud migration and modernization journey uses the best practices and strategies featured in product demonstrations.

Register now > Migrate to Innovate Summit, Tuesday, March 11, 2025, 9:00 AM–11:30 AM Pacific Time (UTC-7)

MSdellis · Feb 13, 2025 · Microsoft · 127 Views · 2 likes · 0 Comments

Azure Virtual Machine: Centralized insights for smarter management
Introduction

Managing Azure Virtual Machines (VMs) can be challenging without the right tools. There are several monitoring options, some of which extend beyond the platform's native capabilities; these may include installing an agent or using third-party products, though they often require additional setup and may involve extra costs. This workbook is designed to use native platform capabilities to give you a clear and detailed view of your VMs, helping you make informed decisions confidently, without any additional cost. To get started, check out the GitHub repository.

Why do you need this workbook?

When managing multiple VMs, understanding usage trends, comparing key metrics, and identifying areas for improvement can be time-consuming. The Azure Virtual Machine Insights Workbook simplifies this process by centralizing essential data from multiple subscriptions and resource groups into one place. It covers inventory, to give you a clear overview of all your VM resources, and platform metrics, to help you monitor, analyze, compare, and optimize performance effectively.

Scenarios to use this workbook

Here are a few examples of how this workbook can bring value:

Management
- Centralized inventory management: Easily view all your VMs in one place, ensuring a clear overview of your resources.

Performance and monitoring
- Performance monitoring: Analyze metrics like CPU, memory, network, and disk usage to identify performance bottlenecks and maintain optimal application performance.
- Performance trends: Examine long-term performance trends to understand how your VMs behave over time and identify areas for improvement.
- Comparing different VM types for the same workload: Compare the performance of various VM types running the same workload to determine the best configuration for your needs.
- Virtual machines behind a load balancer: Monitor and compare the performance of load-balanced VMs to ensure even distribution and optimal resource utilization.
- Virtual machine farms: Assess and compare the performance of VMs within a server farm to identify outliers and maintain operational efficiency.

Cost
- Cost optimization: Detect and compare underutilized VMs or overprovisioned resources to reduce waste and save on costs.
- Analyze usage trends over time to determine whether an hourly spend commitment through Azure savings plans is feasible.
- Understand the timeframes for automating the deallocation of non-production VMs, unless Azure Reservations cover them.

Independent software vendors (ISVs)
- ISVs managing VMs per customer: Compare performance across all customer VMs to identify trends and ensure consistent service delivery for each customer.

Trends and planning
- Resource planning: Track usage trends over time to better predict future resource needs and ensure your VMs are prepared for business growth.
- Scalability planning: Use insights from trends and metrics to prepare for scaling your VMs during peak demand or business growth.

Conclusion

The Azure Virtual Machine Insights Workbook helps you manage your VMs by bringing key metrics and insights together in one place, using native Azure features at no extra cost. It lets you analyze performance, cut costs, and plan for future growth. Whether you are investigating performance issues, analyzing underused resources, or predicting future needs, this workbook helps you make smart decisions and manage your infrastructure more efficiently.
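The workbook is built on the same platform metrics you can query yourself. For readers who want to pull those numbers programmatically, here is a minimal sketch using the azure-monitor-query package; the resource ID is a placeholder, and DefaultAzureCredential assumes you are already signed in (for example, via the Azure CLI):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# Placeholder resource ID: point this at one of your VMs.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Pull a week of hourly CPU figures, the same platform metric the workbook charts.
response = client.query_resource(
    resource_id,
    metric_names=["Percentage CPU"],
    timespan=timedelta(days=7),
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.AVERAGE, MetricAggregationType.MAXIMUM],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.average, point.maximum)
```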
For any queries or to contribute, feel free to connect via the GitHub repo or submit feedback!

595 Views · 0 likes · 0 Comments

AKS Edge Essentials: A Lightweight "Easy Button" for Linux Containers on Windows Hosts
[Note: This post was revised on November 26, 2024. The change was in the EFLOW section due to product direction changes.]

Hello, Mike Bazarewsky writing again, now on our shiny new ISV blog! My topic today is a product that hasn't gotten a huge amount of press but actually brings some really nice capabilities to the table, especially with respect to IoT scenarios as we look to the future with Azure IoT Operations. That product is AKS Edge Essentials, or AKS-EE for short.

What did Microsoft have before AKS-EE?

AKS-EE is intended to be the "easy button" for running Linux-based and/or Windows-based containers on a Windows host, including a Windows IoT Enterprise host. It has been possible to run Docker-hosted containers on Windows for a long time, and it has even been possible to run orchestrators, including Kubernetes, on Windows for some time now; there is formal documentation on how to do so in Microsoft Learn.

Meanwhile, in parallel and specific to IoT use cases, Microsoft offers Azure IoT Edge for Linux on Windows, or EFLOW for short. EFLOW offers the Azure IoT Edge container orchestrator on a Windows host by leveraging a Linux virtual machine. That virtual machine runs a customized deployment of CBL-Mariner, Microsoft's first-party Linux distribution designed for secure, cloud-focused use cases. As an end-to-end Microsoft offering on a Microsoft platform, EFLOW is updated through Microsoft Update and, as such, "plays nice" with the rest of the Windows ecosystem, bringing the benefits of that ecosystem while allowing targeted Linux containers to run with a limited amount of "ceremony".

What does AKS-EE bring to the table?

Taking this information all into account, it's reasonable to ask: "What are the gaps? Why would it make sense to bring another product into the space?" The answer is two-fold:
- For some ISVs, particularly those coming from traditional development models (e.g., IoT developers, web service developers), the move to "cloud native" technologies such as containers is a substantial shift on its own, before worrying about deployment and management of an orchestrator. However, an orchestrator is still something those ISVs need in order to achieve scalability and observability as they work through their journey of "modernization" around containers.
- EFLOW works very, very well for its intended target, which is Azure IoT Edge. However, that is a specialized use case that does not generalize well to general application workloads.

There is a hidden point here as well. Windows containers are a popular option in many organizations, but Linux containers are more common. At the same time, many enterprises (and thus, ISV customers) prefer the management, hardware support, and long-term OS support paths that Windows offers. Although technologies such as Windows container hosting, Windows Subsystem for Linux, and Hyper-V allow Linux containers to run on a Windows host, they come with different levels of complexity and management overhead, and in some situations they are not practical. The end result of all of this is that there is a need in the marketplace for a low-impact, easily deployed, easily updated container hosting solution for Linux containers on Windows hosts that supports orchestration.
This is especially true as we look at a solution like Azure IoT Operations, which is the next-generation, Kubernetes-centric Azure IoT platform, but it is also true for customers looking to move from the simplistic orchestration offered by EFLOW to the more sophisticated orchestration offered by Kubernetes.

Besides bringing that to the table, AKS-EE builds on top of the standard k3s or k8s implementations, which means that popular Kubernetes management tools such as k9s can be used. It can be Azure Arc enabled, allowing centralized management of the solution in the Azure Portal, Azure PowerShell, or Azure CLI. Azure Arc supports this through an outgoing connection from the cluster to the Azure infrastructure, which means it's possible to remotely manage the environment, including deploying workloads, collecting telemetry and metrics, and so on, without needing incoming access to the host or the cluster. And because it's possible to manage Windows IoT Enterprise using Azure Arc, even the host can be connected to remotely, with centrally managed telemetry and updates (including AKS-EE updates through Microsoft Update). This means it's possible to have an end-to-end, centrally managed solution across a fleet of deployment locations, and it means an ISV can offer "management as a service". An IoT ISV can even offer packaged hardware offerings with Windows IoT Enterprise, AKS-EE, and their workload, all centrally managed through Azure Arc, which is an extremely compelling and powerful concept!

What if I am an IoT Edge user using EFLOW today?

As you might be able to tell from the way I've presented AKS-EE, one possible way to think about it is as a direct replacement for EFLOW in IoT Edge scenarios. If you're looking at moving from EFLOW to a Kubernetes-based solution, AKS-EE is a great option to explore!

Conclusion

Hopefully, this short post gives you a better understanding of the "why" of AKS-EE as an offering and how it relates to some other offerings in the Microsoft space. If you're looking to evaluate AKS-EE, the next step would be to review the Quickstart guide to get started! Looking forward, if you are interested in production AKS-EE architecture, FastTrack ISV and FastTrack for Azure (Mainstream) have worked with multiple AKS-EE customers at this point, from single-host deployments to multi-host scale-out deployments, including leveraging both the Linux and Windows node capabilities of AKS-EE and the preview GPU support in the product. Take a look at those sites to learn more about how we can help you derisk your AKS-EE deployment, or help you decide whether AKS-EE is in fact the right tool for you!

2.1K Views · 3 likes · 0 Comments

How to deploy a production-ready AKS cluster with Terraform verified module
Do you want to use Terraform to deploy an Azure Kubernetes Service (AKS) cluster that meets production standards? We have a solution for you! We recently created a Terraform verified module for AKS that allows customers to deploy a production-standard AKS cluster along with a virtual network and an Azure Container Registry. It provisions an environment sufficient for most production AKS deployments. The module is available on the Terraform Registry and can be found here. You don't have to deal with the complexity of setting up an AKS cluster from the ground up: the module offers opinionated choices and reasonable default settings to deploy an AKS cluster ready for production.

What are Azure Verified Modules?

Azure Verified Modules enable and accelerate consistent solution development and delivery of cloud-native or migrated applications and their supporting infrastructure by codifying Microsoft guidance (WAF) with best-practice configurations. For more information, please visit Azure Verified Modules.

What does the module do?

The module provisions the following resources:
- Azure Kubernetes Service (AKS) cluster for production workloads
- Virtual network
- Azure Container Registry

To view the full list of resources and their configurations, please visit the module page.

How to use the module

To use the module, you need Terraform installed on your machine. If you don't have Terraform installed, you can download it from the Terraform website. Once you have Terraform installed, create a new Terraform configuration file and add the following code:

```hcl
module "avm-ptn-aks-production" {
  source  = "Azure/avm-ptn-aks-production/azurerm"
  version = "0.1.0"

  location            = <region>
  name                = <cluster-name>
  resource_group_name = <rg-name>

  rbac_aad_admin_group_object_ids = ["11111111-2222-3333-4444-555555555555"]
}
```

To understand more about the variables and options available, have a look at the GitHub README. Running the module will provision the resources in your Azure subscription; you can then view the resources in the Azure portal.

How we built the module

This module is very opinionated and guides the user into a design that is ready for production. From the experience of supporting users deploying AKS with Terraform via the "Azure/aks/azurerm" module, we proposed a much simpler module to help customers deploy scalable and reliable clusters. Here are some of the important opinionated choices we made.

Create zonal user node pools in all availability zones

When implementing availability zones with the cluster autoscaler, we recommend using a single node pool for each zone. The "balance_similar_node_groups" parameter enables a balanced distribution of nodes across zones for your workloads during scale-up operations. When this approach isn't implemented, scale-down operations can disrupt the balance of nodes across zones.

Leverage AKS automatic upgrades to keep the cluster secure and supported

AKS has a fast release calendar, so it is important to keep the cluster on a supported version and to get security patches quickly. We enforce the "patch" automatic channel upgrade and the "node_os_channel_upgrade" node image channel to keep the cluster up to date. It remains the user's responsibility to plan Kubernetes minor version upgrades.

Use Azure CNI Overlay for optimal and simple IP address space management

There are many options when it comes to AKS networking. In most customer scenarios, Azure CNI Overlay is the ideal solution: it makes IP address usage easy to plan and provides plenty of options to grow the cluster.
Use a private Kubernetes API endpoint and Microsoft Entra authentication for enhanced security

We use a layered security approach to protect your Kubernetes API from being compromised. We keep the Kubernetes API safe by putting it in a private network and by allowing Microsoft Entra identities to authenticate (optionally, local accounts can also be turned off).

Bring your own network and force a user-assigned identity

Customer scenarios often involve more than one AKS cluster, and the Azure VNet where these clusters live should be part of a resource group controlled by the customer. Reusing the same user-assigned identity across a fleet of clusters simplifies role assignment operations. We wrote this module considering integration into a real-world customer subscription, rather than treating the AKS cluster as a single isolated entity.

Don't use any preview features

To prevent breaking changes in production, we avoided the use of any preview features.

Development of the module from a Terraform perspective

The Azure Verified Module team worked to create effective pipelines for module development. For initial development, you fork an already-prepared template and use it to develop your module; the template is available on GitHub here. This ensures that all module developers follow the same standards and best practices, and it makes it easier to review and approve modules for publication and to roll out updates to the templates. The pipeline has built-in checks to ensure that the module follows best practices and standards, and it provides a Docker container with all the necessary tools to run the checks locally as well as on GitHub Actions. The pipeline runs the following checks:
- Lints against the standards and best practices set by the AVM community.
- Validates that the Terraform code is valid using "terraform validate".
- Updates the README if any changes are detected, so that you don't have to maintain it manually.

For the e2e tests, you only need to provide examples of the module's functionality and set up a test environment using GitHub to run them; you can see the steps on how to do this here. For an end-to-end review of the contribution flow and how to set up your module for development using AVM scripts, have a look at the Terraform Contribution Guide.

Lessons learned

The AVM team provides the initial module template and a GitHub Actions pipeline for developing a new module. Using those resources and attending the team's office hours enabled us to move faster. When building a new Terraform module for Azure, following the procedure to implement an AVM module saves you a lot of time, ensuring quality and avoiding common mistakes. It adds a lot of value to join the AVM community calls, or to watch for changes mentioned in the AVM GitHub repo, to get updates on the latest changes and to ask any questions you may have. When writing the design document, before starting development, make sure you address all edge cases. For example, not all Azure regions have availability zones, and the module must work in all Azure regions; dealing with such details before starting the implementation helps you find good solutions without having to make bigger changes during the implementation phase.

How you can contribute back
- Have a look at the AVM team blog for updates from the AVM team.
- Help build a module.
- Share your learnings with the AVM team.
- Join the regular external community calls.
Conclusion

If you face any challenges, please raise an issue on the repo: https://github.com/Azure/terraform-azurerm-avm-ptn-aks-production. We would also like to thank Zijie He and Jingwei Wang for their huge contributions and collaboration whilst building this module.

NellyKiboi · Nov 06, 2024 · Former Employee · 7.2K Views · 6 likes · 0 Comments
Resources
Tags
- azure cosmos db (19 topics)
- azure for isv and startups (17 topics)
- azure (15 topics)
- Partner question (13 topics)
- isv (8 topics)
- linux (7 topics)
- cloud native (5 topics)
- machine learning (3 topics)
- python (3 topics)
- kubernetes (3 topics)