Azure AI Language
Exciting Update: Abstractive Summarization in Azure AI Language Now Powered by Phi-3.5-mini! 🎉
We’re thrilled to announce that the summarization capability in the Azure AI Language service has started transitioning to industry-accepted Small Language Models (SLMs) and Large Language Models (LLMs), starting with the move of document and text abstractive summarization to Phi-3.5-mini!

Why It Matters

This upgrade marks a significant advancement in fully embracing rapidly evolving generative AI technologies. It aligns with our commitment to letting customers concentrate on their core business needs while delegating the complex tasks of model maintenance, fine-tuning, and engineering to our services, and interacting with our service through strongly typed APIs. We remain committed to updating the base models, including GPT-4o and OpenAI’s o3 models, to ensure our customers always receive the best performance for summarization tasks. We are confident that this transition will offer significant advantages, and we eagerly anticipate your feedback as we continue to enhance our services.

What Does This Mean for You

- Enhanced Performance: A fine-tuned Phi-3.5-mini model boosted production performance by 9%. Key highlights include enhanced understanding capability with improved common-sense reasoning, as well as smoother and more reliable summary generation.
- Resource Optimization: By leveraging the compact yet powerful Phi-3.5-mini, we ensure better utilization of resources while maintaining quality, with availability in the container as well.
- More Enterprise and Compliance Features: lower hallucination rates, stronger Responsible AI, enhanced scaling support, and more. Notably, the new production model reduced hallucinations by 78%.

Stay tuned for more updates as we continue this transition across all summarization tasks and bring additional enhancements to our services!

How To Utilize This Advancement

If you use the service in the cloud, you don’t need to do anything special to benefit from this advancement: your document and text abstractive summarization requests will be served by the fine-tuned Phi-3.5-mini model. If you use the container, please download the latest version from the mcr.microsoft.com container registry syndicate. The fully qualified container image name is mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization.

Thank you for your continued trust in our products, and we welcome your feedback as we strive to continuously improve our services. For more details and resources, please explore the following links:
- Learn more about our summarization solution in Documentation
- Get started with the summarization container by visiting Documentation
- Try it out with AI Foundry for a code-free experience
- Explore Azure AI Language and its various capabilities in Documentation
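For reference, here is a minimal sketch of calling abstractive summarization against the cloud endpoint with the Azure AI Text Analytics Python SDK. The endpoint, key, and sample text are placeholders, and the call shape assumes azure-ai-textanalytics 5.3.0 or later; no client-side change is needed to pick up the Phi-3.5-mini-backed model.

```python
# Minimal sketch (endpoint, key, and text are placeholders; assumes azure-ai-textanalytics >= 5.3.0).
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

documents = [
    "Azure AI Language provides summarization, PII detection, and other natural "
    "language capabilities as managed cloud services and containers."
]

# Abstractive summarization runs as a long-running operation.
poller = client.begin_abstract_summary(documents)
for result in poller.result():
    if result.is_error:
        print(f"Error: {result.error}")
    else:
        for summary in result.summaries:
            print(summary.text)
```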
Nested virtualization on Azure: a step-by-step guide

This article serves as a practical guide for developers and engineers to enable and configure nested virtualization on Azure. Nested virtualization allows running Hyper-V inside a virtual machine, providing enhanced flexibility and scalability for various development and data science applications. The guide walks through selecting the right Azure VM, setting up the environment, and installing Docker Desktop for efficient container management. It also addresses common troubleshooting tips to ensure a smooth setup. Whether you're working with complex machine learning models or developing applications, this guide will help you maximize the potential of nested virtualization on Azure.
Enter new era of enterprise communication with Microsoft Translator Pro & document image translation

Microsoft Translator Pro: standalone, native mobile experience

We are thrilled to unveil the gated public preview of Microsoft Translator Pro, our robust solution designed for enterprises seeking to dismantle language barriers in the workplace. Available on iOS, Microsoft Translator Pro offers a standalone, native experience, enabling speech-to-speech translated conversations among coworkers, users, or clients within your enterprise ecosystem.

Watch how Microsoft Translator Pro transforms a hotel check-in experience by breaking down language barriers. In this video, a hotel receptionist speaks in English, and the app translates and plays the message aloud in Chinese for the traveler. The traveler responds in Chinese, and the app translates and plays the message aloud in English for the receptionist.

Key features of the public preview

Our enterprise version of the app is packed with features tailored to meet the stringent demands of enterprises:
- Core feature - speech-to-speech translation:
  - Break language barriers: Real-time speech-to-speech translation allows you to have seamless communication with individuals speaking different languages.
  - Unified experience: View or hear both transcription and translation simultaneously on a single device, ensuring smooth and efficient conversations.
  - On-device translation: Harness the app's speech-to-speech translation capability without an internet connection in a limited set of languages, ensuring your productivity remains unhampered.
- Full administrator control: Enterprise IT administrators wield extensive control over the app's deployment and usage within your organization. They can fine-tune settings to manage conversation history, audit, and diagnostic logs, with the ability to disable history or configure automatic export of the history to cloud storage.
- Uncompromised privacy and security: Microsoft Translator Pro provides enterprises with a high level of translation quality and robust security. We know that privacy and security are top priorities for you. Once granted access by your organization's admin, you can sign in to the app with your organizational credentials. Your conversational data remains strictly yours, safeguarded within your Azure tenant. Neither Microsoft nor any external entities have access to your data.

Join the Preview

To embark on this journey with us, please complete the gating form. Upon meeting the criteria, we will grant your organization access to the paid version of the Microsoft Translator Pro app, which is now available in the US. Learn more and get started: Microsoft Translator Pro documentation.

Document translation translates text embedded in images

Our commitment to advancing cross-language communication takes a major step forward with a new enhancement in Azure AI Translator's Document Translation (DT) feature. Previously, Document Translation supported fully digital documents and scanned PDFs. Starting January 2025, with this latest update, the service can also process mixed-content documents, translating both digital text and text embedded within images.

Sample document translated from English to Spanish (frames in order: source document, translated output document without image translation, translated output document with image translation).

How It Works

To enable this feature, the Document Translation service now leverages the Microsoft Azure AI Vision API to detect, extract, and translate text from images within documents.
This capability is especially useful for scenarios where documents contain a mix of digital text and image-based text, ensuring complete translations without manual intervention.

Getting Started

To take advantage of this feature, customers can use a new optional parameter when setting up a translation request.

Request: A new Boolean parameter under "options" called "translateTextWithinImage" has been introduced, accepting "true" or "false". The default value is "false", so you'll need to set it to "true" to activate the image text translation capability.

Response: When this feature is enabled, the response includes additional details for transparency on image processing:
- totalImageScansSucceeded: The count of successfully translated image scans.
- totalImageScansFailed: The count of image scans that encountered processing issues.

Usage and cost

For this feature, customers will need to use an Azure AI Services resource, as the feature leverages Azure AI Vision services along with Azure AI Translator. The OCR service incurs additional charges based on usage; see Pricing details. Learn more and get started (starting January 2025): Translator Documentation.

These new advancements reflect our dedication to pushing boundaries in Document Translation, empowering enterprises to connect and collaborate more effectively, regardless of language. Stay tuned for more innovations as we continue to expand the reach and capabilities of Microsoft Azure AI Translator.
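Based on the request and response fields described above, a batch request that enables image-text translation might look roughly like the sketch below. The endpoint path, API version, storage URLs, and the exact placement of the "options" object are assumptions here; treat this as an illustration rather than a verified request shape, and confirm the details against the Translator documentation.

```python
# A minimal sketch (not a verified request shape): starting an asynchronous
# Document Translation batch job with image-text translation enabled.
# Endpoint, key, container URLs, and api-version are placeholders.
import requests

ENDPOINT = "https://<your-ai-services-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"

body = {
    "inputs": [
        {
            "source": {"sourceUrl": "https://<storage>/source-container?<sas-token>"},
            "targets": [
                {
                    "targetUrl": "https://<storage>/target-container?<sas-token>",
                    "language": "es",
                }
            ],
        }
    ],
    # New in the January 2025 update: translate text embedded in images.
    # The exact location of this options object should be verified in the docs.
    "options": {"translateTextWithinImage": True},
}

response = requests.post(
    f"{ENDPOINT}/translator/document/batches",
    params={"api-version": "2024-05-01"},  # placeholder API version
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
# The job status URL is returned in the Operation-Location header; polling it
# eventually surfaces the totalImageScansSucceeded / totalImageScansFailed counts.
print(response.headers.get("Operation-Location"))
```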
Announcing an accelerator for GenAI-powered assistants using Azure AI Language and Azure OpenAI

We’re thrilled to introduce a new accelerator solution in the GitHub Azure-Samples library designed specifically for creating and enhancing GenAI-based conversational assistants with robust, human-controllable workflows. This accelerator uses key services from Azure AI Language in addition to Azure OpenAI, including PII detection to protect sensitive information, Conversational Language Understanding (CLU) to predict users' top intents, and Custom Question Answering (CQA) to respond to top questions with deterministic answers. Together with Azure OpenAI and Large Language Models (LLMs), the solution is designed to orchestrate and deliver a smooth, human-guided, controllable, and deterministic conversational experience. The integration with LLMs is coming soon. It's perfect for developers and organizations looking to build assistants that can handle complex queries, route tasks, and provide reliable answers, all with a controlled, scalable architecture.

Why This Accelerator

While many customers appreciate LLMs for building conversational assistants with natural, engaging, and context-aware interactions, there are challenges, such as the significant effort required for prompt engineering and document chunking, and for reducing hallucinations to improve the quality of Retrieval-Augmented Generation (RAG) solutions. If an AI quality issue is discovered in production, customers need an effective way to address it promptly. This solution helps customers utilize offerings in the Azure AI portfolio to address key challenges when building Generative AI (GenAI) assistants.

Designed for flexibility and reliability, this accelerator enables human-controllable workflows that meet real-world customer needs. It minimizes the need for extensive prompt engineering by using a structured workflow that prioritizes top questions with exact answers and custom intents critical to your business, while using LLMs to handle lower-priority topics in a conversation. This architecture not only enhances answer quality and control but also ensures that complex queries are handled efficiently. If you want to quickly fix an incorrect answer for a chatbot built with RAG, you can also attach this accelerator solution to your existing RAG solution and add a QA pair with the correct response in CQA to fix the issue for your users.

What This Accelerator Delivers

This accelerator provides and demonstrates an end-to-end orchestration using several capabilities in Azure AI Language and Azure OpenAI for conversational assistants. It can be applied in scenarios where control over assistant behavior and response quality is essential, such as call centers, help desks, and other customer support applications. Below is a reference architecture of the solution. Key components of this solution include (components in dashed boxes coming soon):

Client-Side User Interface for Demonstration (coming soon)
A web-based client-side interface is included in the accelerator solution to showcase it in an interactive, user-friendly format. This web UI allows you to quickly explore and test the solution, including its orchestration routing behavior and functionality.

Workflow Orchestration for Human-Controllable Conversations
By combining services like CLU, CQA, and LLMs, the accelerator allows for a dynamic, adaptable workflow. CLU can recognize and route customer-defined intents, while CQA provides exact answers from predefined QA pairs.
If a question falls outside the predefined scope, the workflow can seamlessly fall back to LLMs, enhanced with RAG for contextually relevant, accurate responses. This workflow ensures human-like adaptability while maintaining control over assistant responses.

Conversational Language Understanding (CLU) for Intent Routing
The CLU service allows you to define the top intents you want the assistant to handle, whether they are critical to your business or the ones users ask about most. This component plays a central role in directing conversations by interpreting user intents and routing them to the right action or AI agent. Whether completing a task or addressing specific customer needs, CLU provides the mechanism to ensure the assistant accurately understands and handles custom-defined intents.

Custom Question Answering (CQA) for Exact Answers with No Hallucinations
CQA allows you to create and manage predefined QA pairs to deliver precise responses, reducing ambiguity and ensuring that the assistant aligns closely with defined answers. This controlled response approach maintains consistency in interactions and improves reliability, particularly for high-stakes or regulatory-sensitive conversations. You can also attach CQA to your existing RAG solution to quickly fix incorrect answers.

PII Detection and Redaction for Privacy Protection (coming soon)
Protecting user privacy is a top priority, especially in conversational AI. This accelerator showcases an optional integration of Azure AI Language's Personally Identifiable Information (PII) detection to automatically identify and redact sensitive information when compliance with privacy standards and regulations is required.

LLM with RAG to Handle Everything Else (coming soon)
In this accelerator, a RAG solution handles missed intents and user queries on lower-priority topics. This RAG solution can be replaced with your existing one. The predefined intents and question-answer pairs can be appended and updated over time based on evolving business needs and DSATs (dissatisfaction cases) discovered in the RAG responses. This approach ensures controlled and deterministic experiences for high-value or high-priority topics while maintaining flexibility and extensibility for lower-priority interactions.

Components Configuration for "Plug-and-Play"
One of the standout features of this accelerator is its flexibility through a "plug-and-play" component configuration. The architecture is designed to let you easily swap, add, or remove components to tailor the solution to your specific needs. Whether you want to add custom intents, adjust fallback mechanisms, or incorporate additional data sources, the modular nature of the accelerator makes it simple to configure.

Get Started Building Your GenAI-Powered Assistant Today
Our new accelerator is available on GitHub, ready for developers to deploy, customize, and use as a foundation for your own needs. A rough sketch of the routing idea at the heart of this orchestration follows below. Join us as we move towards a future where GenAI empowers organizations to meet business needs with intelligent, adaptable, and human-controllable assistants.
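To make the routing pattern concrete, here is a minimal sketch of the "CLU first, CQA for curated questions, LLM/RAG fallback for everything else" flow using the azure-ai-language-conversations and azure-ai-language-questionanswering Python SDKs. The project names, deployment names, example intent, confidence threshold, and the fallback helper are placeholder assumptions, not the accelerator's actual code.

```python
# Hypothetical sketch of the intent-routing pattern: CLU first, CQA for known
# questions, and an LLM/RAG fallback for everything else. Names and thresholds
# are placeholders; the accelerator's real orchestration differs in detail.
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient
from azure.ai.language.questionanswering import QuestionAnsweringClient

ENDPOINT = "https://<language-resource>.cognitiveservices.azure.com"
KEY = AzureKeyCredential("<your-key>")

clu_client = ConversationAnalysisClient(ENDPOINT, KEY)
cqa_client = QuestionAnsweringClient(ENDPOINT, KEY)


def answer(user_query: str) -> str:
    # 1) Ask CLU for the top intent of the incoming utterance.
    clu_result = clu_client.analyze_conversation(
        task={
            "kind": "Conversation",
            "analysisInput": {
                "conversationItem": {"id": "1", "participantId": "user", "text": user_query}
            },
            "parameters": {
                "projectName": "<clu-project>",        # assumption
                "deploymentName": "<clu-deployment>",  # assumption
            },
        }
    )
    top_intent = clu_result["result"]["prediction"]["topIntent"]

    # 2) Business-critical intents are handled by dedicated, deterministic logic.
    if top_intent == "CancelOrder":  # example custom intent
        return "Routing you to the order-cancellation flow."

    # 3) Try CQA for an exact, curated answer to a common question.
    cqa_output = cqa_client.get_answers(
        question=user_query,
        project_name="<cqa-project>",        # assumption
        deployment_name="<cqa-deployment>",  # assumption
    )
    if cqa_output.answers and cqa_output.answers[0].confidence > 0.7:
        return cqa_output.answers[0].answer

    # 4) Fall back to an LLM + RAG pipeline for lower-priority topics.
    return call_rag_fallback(user_query)  # hypothetical helper for your RAG solution


def call_rag_fallback(user_query: str) -> str:
    # Placeholder for your existing Azure OpenAI / RAG implementation.
    return "Let me look into that for you."
```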
What’s more: Other New Azure AI Language Releases This Ignite

Beyond these, Azure AI Language provides additional capabilities to support GenAI customers in more scenarios, ensuring quality, privacy, and flexible deployment in any type of environment, whether cloud or on-premises. We are also excited to announce the following new features launching at Ignite.

- Azure AI Language in Azure AI Studio: Azure AI Language is moving to AI Studio. Extract PII from text, Extract PII from conversation, Summarize text, Summarize conversation, Summarize for call center, and Text Analytics for health are now available in the AI Studio playground, with more skills to follow.
- Conversational Language Understanding (CLU): Today, customers use CLU to build custom natural language understanding models hosted by Azure to predict the overall intention of an incoming utterance and extract important information from it. However, some customers have specific needs that require an on-premises connection, and we are excited to announce runtime containers for CLU for these use cases.
- PII Detection and Redaction: Azure AI Language offers Text PII and Conversational PII services to extract personally identifiable information from input text and conversations to enhance privacy and security, often before sending data to the cloud or an LLM. The preview API (version 2024-11-15-preview) now supports the option to mask detected sensitive entities with a label; for example, "John Doe received a call from 424-878-9192" can now be masked as "[PERSON_1] received a call from [PHONENUMBER_1]". More on how to specify the redaction policy style for your outputs can be found in our documentation, and a minimal request sketch follows this list.
- Native document support: The gating control is removed with the latest API version, 2024-11-15-preview, allowing customers to access native document support for PII redaction and summarization. Key updates in this version include increased maximum file size limits (from 1 MB to 10 MB) and enhanced PII redaction customization: customers can now specify whether they want only the redacted document or both the redacted document and a JSON file containing the detected entities.
- Language detection: Language detection is a preconfigured feature that detects the language a document is written in and returns a language code for a wide range of languages, variants, dialects, and some regional/cultural languages. We are happy to announce the general availability of script detection and support for 16 more languages, bringing the total to 139 supported languages.
- Named entity recognition (NER): The Named Entity Recognition (NER) service supports customer scenarios for identifying and analyzing entities such as addresses, names, and phone numbers in input text. NER's Generally Available API (version 2024-11-01) now supports several optional input parameters (inclusionList, exclusionList, inferenceOptions, and overlapPolicy) as well as an updated output structure (with new fields tags, type, and metadata) for enhanced user customization and deeper analysis. More on how to use these parameters can be found in our documentation.
- Text analytics for health: Text analytics for health (TA4H) is a preconfigured feature that extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records. Today, we released support for Fast Healthcare Interoperability Resources (FHIR) structuring and temporal assertion detection in the Generally Available API.
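As a rough illustration of the new entity-label masking option mentioned above, the sketch below calls the Text PII preview API over REST. The endpoint, key, and the exact spelling and placement of the redaction-policy parameter are assumptions taken from the announcement; please check them against the PII detection documentation for api-version 2024-11-15-preview.

```python
# Hedged sketch: Text PII redaction with the assumed entity-label masking policy.
# Parameter names below are assumptions; verify against the preview API reference.
import requests

ENDPOINT = "https://<language-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"

body = {
    "kind": "PiiEntityRecognition",
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en", "text": "John Doe received a call from 424-878-9192."}
        ]
    },
    "parameters": {
        # Assumed shape: mask entities with labels such as [PERSON_1] instead of characters.
        "redactionPolicy": {"policyKind": "entityMask"}
    },
}

response = requests.post(
    f"{ENDPOINT}/language/:analyze-text",
    params={"api-version": "2024-11-15-preview"},
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
for doc in response.json()["results"]["documents"]:
    print(doc["redactedText"])  # e.g. "[PERSON_1] received a call from [PHONENUMBER_1]."
```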
Announcing Azure AI Content Understanding: Transforming Multimodal Data into Insights

Solve Common GenAI Challenges with Content Understanding

As enterprises leverage foundation models to extract insights from multimodal data and develop agentic workflows for automation, it's common to encounter issues like inconsistent output quality, ineffective pre-processing, and difficulties in scaling out the solution. Organizations often find that handling multiple types of data fragments the effort by modality, increasing the complexity of getting started. Azure AI Content Understanding is designed to eliminate these barriers and accelerate success in generative AI workflows.

- Handling Diverse Data Formats: By providing a unified service for ingesting and transforming data of different modalities, businesses can extract insights from documents, images, videos, and audio seamlessly and simultaneously, streamlining workflows for enterprises.
- Improving Output Data Accuracy: Deriving high-quality output requires practitioners to ensure the underlying AI is customized to their needs. Using advanced AI techniques like intent clarification and a strongly typed schema, Content Understanding can effectively parse large files and extract values accurately.
- Reducing Costs and Accelerating Time-to-Value: Using confidence scores to trigger human review only when needed minimizes the total cost of processing the content. Integrating the different modalities into a unified workflow and grounding the content when applicable allows for faster reviews.

Core Features and Advantages

Azure AI Content Understanding offers a range of innovative capabilities that improve efficiency, accuracy, and scalability, enabling businesses to unlock deeper value from their content and deliver a superior experience to their end users.

- Multimodal Data Ingestion and Content Extraction: The service ingests a variety of data types such as documents, images, audio, and video, transforming them into a structured format that can be easily processed and analyzed. It instantly extracts core content from your data, including transcriptions, text, faces, and more.
- Data Enrichment: Content Understanding offers additional features that enhance content extraction results, such as layout elements, barcodes, and figures in documents, speaker recognition and diarization in audio, and more.
- Schema Inferencing: The service offers a set of prebuilt schemas and allows you to build and customize your own to extract exactly what you need from your data. Schemas allow you to extract a variety of results, generating task-specific representations like captions, transcripts, summaries, thumbnails, and highlights. This output can be consumed by downstream applications for advanced reasoning and automation (an illustrative schema sketch follows this list).
- Post Processing: Enhances service capabilities with generative AI tools that ensure the accuracy and usability of extracted information, including confidence scores for minimal human intervention and continuous improvement through user feedback.
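To give a flavor of what a strongly typed schema can look like in practice, here is a purely illustrative sketch of defining a custom analyzer with a small field schema and submitting content to it over REST. The URL paths, API version, base analyzer name, field names, and payload shapes below are assumptions based on the preview announcement, not a verified contract; rely on the Content Understanding documentation for the actual API.

```python
# Illustrative only: the analyzer path, api-version, and payload fields below are
# assumptions, not a verified API contract. Consult the Content Understanding docs.
import requests

ENDPOINT = "https://<your-ai-services-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"
HEADERS = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
API_VERSION = {"api-version": "2024-12-01-preview"}  # placeholder

# 1) Define a custom analyzer whose schema names exactly the fields we want back.
analyzer = {
    "description": "Extract key fields from call recordings",
    "baseAnalyzerId": "prebuilt-callCenter",  # assumed prebuilt starting point
    "fieldSchema": {
        "fields": {
            "Summary": {"type": "string", "description": "One-paragraph summary of the call"},
            "Sentiment": {"type": "string", "description": "Overall caller sentiment"},
            "CompaniesMentioned": {"type": "string", "description": "Companies mentioned on the call"},
        }
    },
}
requests.put(
    f"{ENDPOINT}/contentunderstanding/analyzers/my-call-analyzer",
    params=API_VERSION, headers=HEADERS, json=analyzer,
).raise_for_status()

# 2) Submit content to the analyzer; results arrive asynchronously with
#    per-field values and confidence scores for selective human review.
job = requests.post(
    f"{ENDPOINT}/contentunderstanding/analyzers/my-call-analyzer:analyze",
    params=API_VERSION, headers=HEADERS,
    json={"url": "https://<storage>/recordings/call-0001.mp3?<sas-token>"},
)
job.raise_for_status()
print(job.headers.get("Operation-Location"))  # poll this URL for the structured result
```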
Transformative Applications Across Industries

Azure AI Content Understanding is ideal for a wide range of use cases and industries, as it is fully customizable and allows for the input of data from multiple modalities. Here are just a few examples of scenarios Content Understanding is powering today:

- Post-call analytics: Customers utilize Azure AI Content Understanding to extract analytics from call center or recorded meeting data, allowing you to aggregate data on the sentiment, speakers, and content discussed, including specific names, companies, user data, and more.
- Media asset management and content creation assistance: Extract key features from images and videos to better manage media assets and enable search on your data for entities like brands, settings, key products, people, and more.
- Insurance claims: Analyze and process insurance claims and other low-latency batch processing scenarios to automate previously time-intensive processes.
- Highlight video reel generation: With Content Understanding, you can automatically identify key moments in a video to extract highlights and summarize the full content. For example, automatically generate a first draft of highlight reels from conferences, seminars, or corporate events by identifying key moments and significant announcements.
- Retrieval-Augmented Generation (RAG): Ingest and enrich content of any modality to effectively find answers to common questions in scenarios like customer service agents, or power content search scenarios across all types of data.

Customer Success with Content Understanding

Customers all over the world are already finding unique and powerful ways to accelerate their inferencing and unlock insights on their data by leveraging the multimodal capabilities of Content Understanding. Here are a few examples of how customers are unlocking greater value from their data:

Philips: Philips Speech Processing Solutions (SPS) is a global leader in dictation and speech-to-text solutions, offering innovative hardware and software products that enhance productivity and efficiency for professionals worldwide. Content Understanding enables Philips to power their speech-to-result solution, allowing customers to use voice to generate accurate, ready-to-use documentation. "With Azure AI Content Understanding, we're taking Philips SpeechLive, our speech-to-result solution, to a whole new level. Imagine speaking, and getting fully generated, accurate documents—ready to use right away, thanks to powerful AI speech analytics that work seamlessly with all the relevant data sources." – Thomas Wagner, CTO, Philips Dictation Services

WPP: WPP, one of the world's largest advertising and marketing services providers, is revolutionizing website experiences using Azure AI Content Understanding. SJR, a content tech firm within WPP, is leveraging this technology for SJR Generative Experience Manager (GXM), which extracts data from all types of media on a company's website—including text, audio, video, PDFs, and images—to deliver intelligent, interactive, and personalized web experiences, with the support of WPP's AI technology company, Satalia. This enables them to convert static websites into dynamic, conversational interfaces, unlocking information buried deep within websites and presenting it as if spoken by the company's most knowledgeable salesperson. Through this innovation, WPP's SJR is enhancing customer engagement and driving conversion for their clients.

ASC: ASC Technologies is a global leader in providing software and cloud solutions for omni-channel recording, quality management, and analytics, catering to industries such as contact centers, financial services, and public safety organizations.
ASC utilizes Content Understanding to enhance their compliance analytics solution, streamlining processes and improving efficiency. "ASC expects to significantly reduce the time-to-market for its compliance analytics solutions. By integrating all the required capture modalities into one request, instead of customizing and maintaining various APIs and formats, we can cover a wide range of use cases in a much shorter time." – Tobias Fengler, Chief Engineering Officer

Numonix: Numonix AI specializes in capturing, analyzing, and managing customer interactions across various communication channels, helping organizations enhance customer experiences and ensure regulatory compliance. They are leveraging Content Understanding to capture insights from recorded call data, both audio and video, to transcribe, analyze, and summarize the contents of calls and meetings, allowing them to ensure compliance across all conversations. "Leveraging Azure AI Content Understanding across multiple modalities has allowed us to supercharge the value of the recorded data Numonix captures on behalf of our customers. Enabling smarter communication compliance and security in the financial industry to fully automating quality management in the world's largest call centers." – Evan Kahan, CTO & CPO, Numonix

IPV Curator: A leader in media asset management solutions, IPV is leveraging Content Understanding to improve their metadata extraction capabilities, producing stronger industry-specific metadata, advanced action and event analysis, and video segmentation aligned to specific shots in videos. IPV's clients are now able to accelerate their video production, reduce editing time, and access their content more quickly and easily. To learn more about how Content Understanding empowers video scenarios, and how customers such as IPV are using the service to power their unique media applications, check out Transforming Video Content into Business Value.

Robust Security and Compliance

Built using Azure's industry-leading enterprise security, data privacy, and Responsible AI guidelines, Azure AI Content Understanding ensures that your data is handled with the utmost care and compliance, and generates responses that align with Microsoft's principles for the responsible use of AI. We are excited to see how Azure AI Content Understanding will empower organizations to unlock their data's full potential, driving efficiency and innovation across various industries. Stay tuned as we continue to develop and enhance this groundbreaking service.

Getting Started

If you are at Microsoft Ignite 2024 or are watching online, check out the breakout session on Content Understanding. Learn more about the new Azure AI Content Understanding service here. Build your own Content Understanding solution in the Azure AI Foundry. For all documentation on Content Understanding, please refer to this page.
Share Your Experience with Azure AI and Support a Charity

AI is transforming how leaders tackle problem-solving and creativity across different industries. From creating realistic images to generating human-like text, the potential of large and small language model-powered applications is vast. Our goal at Microsoft is to continuously enhance our offerings and provide the best safe, secure, and private AI services and machine learning platform for the developers, IT professionals, and decision-makers who are paving the way for AI transformations. Are you using Azure AI to build your generative AI apps? We're excited to invite our valued Azure AI customers to share their experiences and insights on Gartner Peer Insights. Your firsthand review not only helps fellow developers and decision-makers navigate their choices but also influences the evolution of our AI products. Write a Review: Microsoft Gartner Peer Insights https://gtnr.io/JK8DWRoL0
The Azure Multimodal AI & LLM Processing Solution Accelerator

The Azure Multimodal AI & LLM Processing Accelerator is your one-stop shop for backend AI + LLM processing use cases like content summarization, extraction, classification, and enrichment. This single accelerator supports all types of input data (text, documents, audio, image, video, etc.) and combines the best of Azure AI Services and LLMs to achieve reliable, consistent, and scalable automation of tasks.
Announcing conversational PII detection service's general availability in Azure AI Language

We are ecstatic to share the release of general availability (GA) support for our Conversational PII redaction service in English-language contexts. GA support ensures better Azure SLA support, production environment support, as well as enterprise-grade security... The Conversational PII redaction service expands upon the Text PII redaction service, supporting customers looking to identify, categorize, and redact sensitive information such as phone numbers and email addresses in unstructured text... These services can help to detect sensitive information and protect an individual's identity and privacy in both generative and non-generative AI applications, which is critical for highly regulated industries such as financial services, healthcare, or government, enabling our customers to adhere to the highest standards of data privacy, security, and compliance.
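As a rough illustration of what a conversational PII redaction call can look like with the azure-ai-language-conversations Python SDK: the endpoint, key, and conversation content below are placeholders, and the task and result payload shapes are assumptions that should be checked against the current API reference.

```python
# Hedged sketch: redacting PII from a short text conversation. Payload and result
# shapes are based on the async conversation-analysis job API and may differ
# slightly across SDK/API versions.
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

client = ConversationAnalysisClient(
    "https://<language-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-key>"),
)

poller = client.begin_conversation_analysis(
    task={
        "displayName": "Redact PII from a support call",
        "analysisInput": {
            "conversations": [
                {
                    "id": "1",
                    "language": "en",
                    "modality": "text",
                    "conversationItems": [
                        {"id": "1", "participantId": "agent", "text": "Could I get your email address?"},
                        {"id": "2", "participantId": "customer", "text": "Sure, it's jane.doe@contoso.com."},
                    ],
                }
            ]
        },
        "tasks": [{"kind": "ConversationalPIITask", "taskName": "redact-pii"}],
    }
)

result = poller.result()
# Walk the task results and print the redacted text for each conversation turn.
for task in result["tasks"]["items"]:
    for conversation in task["results"]["conversations"]:
        for item in conversation["conversationItems"]:
            print(item["redactedContent"]["text"])
```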
Language in Azure AI prompt flow

Prompt flow in Azure AI Studio is a development tool designed to streamline the entire development cycle of AI applications powered by Large Language Models (LLMs). Last Ignite, we announced Azure AI Language prompt flow available on GitHub. Today, we are excited to announce that Azure AI Language tooling is now available in prompt flow natively. With that, you can explore, quickly start to use, and fine-tune various natural language processing capabilities from Azure AI Language, reducing your time to value and deploying solutions with reliable evaluation.

The Azure AI Language sample flows in the Azure AI prompt flow gallery are a good starting point. You can simply start by cloning one of the two sample flows:
- Analyze Documents: This flow is designed to analyze and extract insights from textual document input, such as identifying named entities, redacting Personally Identifiable Information (PII), analyzing sentiment, summarizing main points, and translating to other languages.
- Analyze Conversations: This flow is designed for conversational input and is particularly useful for contact center analytics or meeting review, such as summarizing customer issues and resolutions, analyzing customer sentiment trends during calls, redacting PII, and chaptering long meetings into segments that make it easy to navigate and find topics of interest.

You will then see a wizard that guides you through configuring tools in your flow and running, evaluating, and deploying it:
- Graph view of your flow
- Files in your flow
- Azure AI Language tools in the "More tools" dropdown menu, from which you can add the capabilities you need for your flow; more tools are available from the LLM, Prompt, and Python menus
- Configure output
- Configure steps (or tools) in the flow
- Run, evaluate, and deploy your flow

What's Next

We will continue enhancing the underlying capabilities by leveraging state-of-the-art SLMs and LLMs, and enriching prompt flow offerings to further ease your effort in utilizing the best services Azure AI offers. Learn more about Azure AI Language in the following resources:
- Azure AI Language homepage: https://aka.ms/azure-language
- Azure AI Language product documentation: https://aka.ms/language-docs
- Azure AI Language product demo videos: https://aka.ms/language-videos
- Explore Azure AI Language in Azure AI Studio: https://aka.ms/AzureAiLanguage
- Prompt flow in Azure AI Studio: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow
- PyPI package (includes general documentation): promptflow-azure-ai-language · PyPI
- Azure AI Language prompt flow GitHub examples (includes READMEs): promptflow/examples/flows/integrations/azure-ai-language at main · microsoft/promptflow · GitHub