natural language processing
What’s trending on Hugging Face: PubMedBERT Base Embeddings, Paraphrase Multilingual MiniLM, BGE-M3
The embedding model landscape has evolved beyond one-size-fits-all solutions. Today's developers navigate a set of deliberate trade-offs: domain specialization to improve accuracy in vertical applications, multilingual capabilities to support global use cases, and retrieval strategies that optimize performance at scale. Once a model demonstrates strong semantic performance, predictable behavior, and broad community support, it often becomes a trusted reference baseline that developers build around and deploy with confidence. This week, we're not spotlighting models that are new to Microsoft Foundry. Instead, we're turning our attention to models that have stayed relevant in a rapidly expanding sea of options. This week's Model Mondays edition highlights three Hugging Face models: NeuML's PubMedBERT Base Embeddings for domain-specific medical text understanding, Sentence Transformers' Paraphrase Multilingual MiniLM for lightweight cross-lingual semantic similarity, and BAAI's BGE-M3 for multi-functional long-context retrieval across 100+ languages.

Models of the week

NeuML: PubMedBERT Base Embeddings

Model Specs
Parameters / size: 109M
Context length: 512 tokens
Primary task: Embeddings (medical domain)

Why it's interesting
Domain-specific performance gains: Fine-tuned on PubMed title-abstract pairs, achieving 95.62% average Pearson correlation across medical benchmarks—outperforming general-purpose models like gte-base (95.37%), bge-base-en-v1.5 (93.78%), and all-MiniLM-L6-v2 (93.46%) on medical literature tasks.
Production-validated for medical RAG: With 141K downloads and deployment in 30+ medical AI applications, this model demonstrates consistent real-world performance for clinical research, drug discovery, and biomedical semantic search pipelines.
Built on Microsoft's BiomedNLP foundation: Extends the BiomedBERT family with sentence-transformers mean pooling, creating 768-dimensional embeddings optimized for medical literature clustering and retrieval.

Try it
Clinical research sample prompt: You're building a clinical decision support system for oncology. Deploy PubMedBERT Base Embeddings in Microsoft Foundry to index 50,000 recent cancer research abstracts from PubMed. A physician queries: "What are the cardiotoxicity risks of combining checkpoint inhibitors with anthracycline chemotherapy in elderly patients?" Embed the query, retrieve the top 10 most semantically similar abstracts using cosine similarity, and return citations with PubMed IDs for evidence-based treatment planning. A sketch of this retrieval flow follows below.
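To make the retrieval step concrete, here is a minimal sketch using the sentence-transformers library with the NeuML/pubmedbert-base-embeddings checkpoint. The abstracts, query, and top-k value are illustrative placeholders rather than content from the original post.

```python
# Minimal sketch: embed medical abstracts and retrieve the most similar ones for a query.
# Assumes `pip install sentence-transformers`; the example abstracts are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("NeuML/pubmedbert-base-embeddings")

abstracts = [
    "Checkpoint inhibitor therapy combined with anthracyclines showed elevated cardiotoxicity in patients over 70.",
    "A phase II trial of pembrolizumab monotherapy in non-small cell lung cancer.",
    "Doxorubicin dose thresholds and cumulative cardiac risk in elderly oncology patients.",
]
query = "Cardiotoxicity risks of combining checkpoint inhibitors with anthracycline chemotherapy in elderly patients"

# Encode corpus and query into 768-dimensional vectors.
corpus_embeddings = model.encode(abstracts, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank abstracts by cosine similarity and keep the top matches.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"score={hit['score']:.3f}  {abstracts[hit['corpus_id']]}")
```

In a real deployment, the abstracts would be embedded once and stored in a vector index, with only the query encoded at request time.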
Sentence Transformers: Paraphrase Multilingual MiniLM L12 v2

Model Specs
Parameters / size: 117M
Context length: 128 tokens
Primary task: Embeddings (multilingual, sentence similarity)

Why it's interesting
Multilingual adoption: Supports 50+ languages including Arabic, Chinese, Hebrew, Hindi, Japanese, Korean, Russian, Thai, and Vietnamese—with 18.4 million downloads last month demonstrating production-scale validation across global deployments.
Compact architecture for edge deployment: At 117M parameters producing 384-dimensional embeddings, this model balances multilingual coverage with inference efficiency, making it ideal for resource-constrained environments or high-throughput applications.
Sentence-BERT foundation: Based on the influential Sentence-BERT paper (Reimers & Gurevych, 2019), using siamese BERT networks with mean pooling to create semantically meaningful sentence embeddings for clustering, paraphrase detection, and cross-lingual search.
Community-proven versatility: With 299 fine-tuned variants and 100+ Spaces implementations, this model serves as a peer-reviewed starting point for multilingual semantic similarity tasks, from customer support ticket routing to cross-lingual document retrieval.

Try it
E-commerce sample prompt: You're building a global customer support platform for an e-commerce company operating in 30 countries. Deploy Paraphrase Multilingual MiniLM in Microsoft Foundry to process incoming support tickets in English, Spanish, French, German, Portuguese, Japanese, and Korean. Embed each ticket as a 384-dimensional vector and cluster by semantic similarity to automatically route issues to specialized teams (payment, shipping, returns, technical). Flag duplicate tickets with cosine similarity > 0.85 to prevent redundant responses.

BAAI: BGE-M3

Model Specs
Parameters / size: ~560M
Context length: 8192 tokens
Primary task: Embeddings (multi-functional: dense, sparse, multi-vector)

Why it's interesting
Three retrieval modes in one model: Uniquely supports dense retrieval (1024-dim embeddings), sparse retrieval (lexical matching like BM25), and multi-vector retrieval (ColBERT-style fine-grained matching)—enabling hybrid search pipelines without maintaining separate models or indexes.
Exceptional long-context capability: The 8192-token context window handles full documents, legal contracts, research papers, and lengthy technical content—validated on MLDR (13-language document retrieval) and NarrativeQA (long-form question answering) benchmarks.
Multilingual dominance: Outperforms OpenAI embeddings on MIRACL multilingual retrieval across 13+ languages and demonstrates strong zero-shot cross-lingual transfer on MKQA.

Try it
Legal document search sample prompt: You're building a legal document search system for a multinational law firm. Deploy BGE-M3 in Microsoft Foundry to index 5,000 full-length commercial contracts (average 6,000 tokens each) in English, French, German, and Spanish. A lawyer queries: "Find all force majeure clauses that exclude liability for pandemics or global health emergencies." Use hybrid retrieval: (1) dense embeddings for semantic similarity to capture concept variations like "Act of God" or "unforeseen circumstances", (2) sparse retrieval for exact keyword matches on "force majeure", "pandemic", "health emergency". Combine scores with a weighted sum (0.6 dense + 0.4 sparse) and return the top 15 contract sections with clause numbers and jurisdiction metadata. A sketch of this hybrid scoring approach follows below.
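Below is a minimal sketch of the dense-plus-sparse weighted scoring described in the BGE-M3 prompt, using the FlagEmbedding package that BAAI publishes for this model. The contract snippets, query, and 0.6/0.4 weights mirror the example above; treat it as an illustration rather than a production pipeline.

```python
# Hybrid dense + sparse scoring with BGE-M3 (assumes `pip install FlagEmbedding`).
# Documents and weights are illustrative; swap in real contract sections and tuning.
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

docs = [
    "Force majeure: neither party is liable for delays caused by pandemics or health emergencies.",
    "Termination for convenience requires ninety days written notice by either party.",
]
query = "force majeure clauses that exclude liability for pandemics"

# Encode query and documents; BGE-M3 returns dense vectors and sparse lexical weights together.
q = model.encode([query], return_dense=True, return_sparse=True)
d = model.encode(docs, return_dense=True, return_sparse=True)

for i, doc in enumerate(docs):
    dense_score = float(q["dense_vecs"][0] @ d["dense_vecs"][i])  # normalized vectors, so dot = cosine
    sparse_score = model.compute_lexical_matching_score(
        q["lexical_weights"][0], d["lexical_weights"][i]
    )
    hybrid = 0.6 * dense_score + 0.4 * sparse_score  # weighted sum from the prompt above
    print(f"hybrid={hybrid:.3f}  dense={dense_score:.3f}  sparse={sparse_score:.3f}  {doc[:60]}")
```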
Getting started

You can deploy open-source Hugging Face models directly in Microsoft Foundry by browsing the Hugging Face collection in the Foundry model catalog and deploying to managed endpoints in just a few clicks. You can also start from the Hugging Face Hub: select any supported model and choose "Deploy on Microsoft Foundry", which brings you straight into Azure with secure, scalable inference already configured. Learn how to discover and deploy models in the Microsoft Foundry documentation. Once a model such as PubMedBERT Base Embeddings is deployed, you can call its managed endpoint from your application; a small sketch follows after the links below.

Follow along with the Model Mondays series and access the GitHub repository to stay up to date on the latest.
Read the Hugging Face on Azure docs.
Learn about one-click deployments from the Hugging Face Hub on Microsoft Foundry.
Explore models in Microsoft Foundry.
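The snippet below is a minimal sketch of calling a deployed Foundry embeddings endpoint with the azure-ai-inference client library. The endpoint URL and key environment variable names are placeholders for whatever your deployment exposes; check your endpoint's details page for the exact values.

```python
# Minimal sketch: query a managed embeddings endpoint deployed from the Foundry catalog.
# Assumes `pip install azure-ai-inference`; endpoint URL and key names below are placeholders.
import os

from azure.ai.inference import EmbeddingsClient
from azure.core.credentials import AzureKeyCredential

client = EmbeddingsClient(
    endpoint=os.environ["FOUNDRY_ENDPOINT_URL"],  # e.g. https://<your-endpoint>.inference.ai.azure.com
    credential=AzureKeyCredential(os.environ["FOUNDRY_API_KEY"]),
)

response = client.embed(
    input=[
        "Cardiotoxicity risks of combining checkpoint inhibitors with anthracyclines",
        "Phase II trial of pembrolizumab in non-small cell lung cancer",
    ]
)

for item in response.data:
    print(f"index={item.index}  dims={len(item.embedding)}")
```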
Now in Foundry: Qwen3-Coder-Next, Qwen3-ASR-1.7B, Z-Image

This week's spotlight features three models that demonstrate enterprise-grade AI across the full scope of modalities. From low-latency coding agents to state-of-the-art multilingual speech recognition and foundation-quality image generation, these models showcase the breadth of innovation happening in open-source AI. Each model balances performance with practical deployment considerations, making them viable for production systems while pushing the boundaries of what's possible in their respective domains. This week's Model Mondays edition highlights Qwen3-Coder-Next, an 80B MoE model that activates only 3B parameters while delivering coding agent capabilities with 256k context; Qwen3-ASR-1.7B, which achieves state-of-the-art accuracy across 52 languages and dialects; and Z-Image from Tongyi-MAI, an undistilled text-to-image foundation model with full Classifier-Free Guidance support for professional creative workflows.

Models of the week

Qwen: Qwen3-Coder-Next

Model Specs
Parameters / size: 80B total (3B activated)
Context length: 262,144 tokens
Primary task: Text generation (coding agents, tool use)

Why it's interesting
Extreme efficiency: Activates only 3B of 80B parameters while delivering performance comparable to models with 10-20x more active parameters, making advanced coding agents viable for local deployment on consumer hardware.
Built for agentic workflows: Excels at long-horizon reasoning, complex tool usage, and recovering from execution failures, critical capabilities for autonomous development that go beyond simple code completion.
Benchmarks: Competitive performance with significantly larger models on SWE-bench and coding benchmarks (Technical Report).

Try it
Code generation with tool use: Provide task context, available tools, and execution environment details.
Long-context refactoring: Include full codebase context within the 256k window with specific refactoring goals.
Autonomous debugging: Present error logs, stack traces, and relevant code with failure recovery instructions.
Multi-file code synthesis: Describe architecture requirements and file structure expectations.

Financial services sample prompt: You are a coding agent for a fintech platform. Implement a transaction reconciliation service that processes batches of transactions, detects discrepancies between internal records and bank statements, and generates audit reports. Use the provided database connection tool, logging utility, and alert system. Handle edge cases including partial matches, timing differences, and duplicate transactions. Include unit tests with 90%+ coverage.

Qwen: Qwen3-ASR-1.7B

Model Specs
Parameters / size: 1.7B
Context length: 256 tokens (default), configurable up to 4096
Primary task: Automatic speech recognition (multilingual)

Why it's interesting
All-in-one multilingual capability: A single 1.7B model handles language identification plus speech recognition for 30 languages, 22 Chinese dialects, and English accents from multiple regions—eliminating the need to manage separate models per language.
Specialized audio versatility: Transcribes not just clean speech but singing voice, songs with background music, and extended audio files, expanding use cases beyond traditional ASR to entertainment and media workflows.
State-of-the-art accuracy: Outperforms GPT-4o, Gemini-2.5, and Whisper-large-v3 across multiple benchmarks.
English: Tedlium 4.50 WER vs 7.69 / 6.15 / 6.84; Chinese: WenetSpeech 4.97 / 5.88 WER vs 15.30 / 14.43 / 9.86 (Technical Paper).
Language ID included: 97.9% average accuracy across benchmark datasets for automatic language identification, eliminating the need for separate language detection pipelines.

Try it
Multilingual transcription: Send audio files via API with automatic language detection.
Call center analytics: Process customer service recordings to extract transcripts and identify languages.
Content moderation: Transcribe user-generated audio content across multiple languages.
Meeting transcription: Convert multilingual meeting recordings to text for documentation.

Customer support sample prompt: Deploy Qwen3-ASR-1.7B to a Microsoft Foundry endpoint and transcribe multilingual customer service calls. Send audio files via API to automatically detect the language (from 52 supported options including 30 languages and 22 Chinese dialects) and generate accurate transcripts. Process calls from customers speaking English, Spanish, Mandarin, Cantonese, Arabic, French, and other languages without managing separate models per language. Use transcripts for quality assurance, compliance monitoring, and customer sentiment analysis.

Tongyi-MAI: Z-Image

Model Specs
Parameters / size: 6B
Context length: N/A (text-to-image)
Primary task: Text-to-image generation

Why it's interesting
Undistilled foundation model: A full-capacity base without distillation preserves the complete training signal, with Classifier-Free Guidance support (a technique that improves prompt adherence and output quality; a short sketch of the CFG update follows after this section), enabling complex prompt engineering and negative prompting that distilled models cannot achieve.
High output diversity: Generates distinct character identities in multi-person scenes with varied compositions, facial features, and lighting, critical for creative applications requiring visual variety rather than consistency.
Aesthetic versatility: Handles diverse visual styles from hyper-realistic photography to anime and stylized illustrations within a single model, supporting resolutions from 512×512 to 2048×2048 at any aspect ratio with 28-50 inference steps (Technical Paper).

Try it
E-commerce sample prompt: Professional product photography of a modern ergonomic office chair in a bright Scandinavian-style home office. Natural window lighting from left, clean white desk with laptop and succulent plant, light oak hardwood floor. Chair positioned at a 45-degree angle showing design details. Photorealistic, commercial photography, sharp focus, 85mm lens, f/2.8, soft shadows.
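As a quick illustration of the Classifier-Free Guidance mentioned above, the sketch below shows the standard CFG combination applied at each denoising step: the model's conditional and unconditional noise predictions are blended with a guidance scale. This is the generic CFG formula, not Z-Image-specific code; the toy arrays stand in for real predictions from a diffusion model.

```python
# Generic Classifier-Free Guidance (CFG) step, illustrated with toy numpy arrays.
# In a real pipeline, eps_cond / eps_uncond come from the diffusion model evaluated
# with and without the text prompt; guidance_scale > 1 strengthens prompt adherence.
import numpy as np

def cfg_combine(eps_uncond: np.ndarray, eps_cond: np.ndarray, guidance_scale: float) -> np.ndarray:
    """Blend unconditional and conditional noise predictions."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-ins for one denoising step's noise predictions.
eps_uncond = np.array([0.10, -0.20, 0.05])
eps_cond = np.array([0.30, -0.10, 0.00])

for scale in (1.0, 3.5, 7.5):
    print(scale, cfg_combine(eps_uncond, eps_cond, scale))
```

Distilled models typically bake a fixed guidance behavior into the weights, which is why an undistilled base that exposes this knob supports stronger prompt engineering and negative prompting.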
Getting started

You can deploy open-source Hugging Face models directly in Microsoft Foundry by browsing the Hugging Face collection in the Foundry model catalog and deploying to managed endpoints in just a few clicks. You can also start from the Hugging Face Hub: select any supported model and choose "Deploy on Microsoft Foundry", which brings you straight into Azure with secure, scalable inference already configured. Learn how to discover and deploy models in the Microsoft Foundry documentation.

Follow along with the Model Mondays series and access the GitHub repository to stay up to date on the latest.
Read the Hugging Face on Azure docs.
Learn about one-click deployments from the Hugging Face Hub on Microsoft Foundry.
Explore models in Microsoft Foundry.
What is trending in Hugging Face on Microsoft Foundry? Feb 2, 2026

Open-source AI is moving fast, with important breakthroughs in reasoning, agentic systems, multimodality, and efficiency emerging every day. Hugging Face has been a leading platform where researchers, startups, and developers share and discover new models. Microsoft Foundry brings these trending Hugging Face models into a production-ready experience, where developers can explore, evaluate, and deploy them within their Azure environment. Our weekly Model Mondays series highlights Hugging Face models available in Foundry, focusing on what matters most to developers: why a model is interesting, where it fits, and how to put it to work quickly. This week's Model Mondays edition highlights three Hugging Face models: a powerful Mixture-of-Experts model from Z.AI designed for lightweight deployment, Meta's unified foundation model for image and video segmentation, and MiniMax's latest open-source agentic model optimized for complex workflows.

Models of the week

Z.AI's GLM-4.7-Flash

Model Basics
Model name: zai-org/GLM-4.7-Flash
Parameters / size: 30B total / 3B active
Default settings: 131,072 max new tokens
Primary task: Agentic, reasoning, and coding

Why this model matters
Why it's interesting: It utilizes a Mixture-of-Experts (MoE) architecture (30B total parameters, 3B active parameters) to offer a new option for lightweight deployment. It demonstrates strong performance on logic and reasoning benchmarks, outperforming similarly sized models like gpt-oss-20b on the AIME 25 and GPQA benchmarks. It supports advanced inference features like "Preserved Thinking" mode for multi-turn agentic tasks.
Best-fit use cases: Lightweight local deployment, multi-turn agentic tasks, and logical reasoning applications.
What's notable: From the Foundry catalog, users can deploy on an A100 instance, or run unsloth/GLM-4.7-Flash-GGUF on a CPU. GLM-4.7-Flash achieves state-of-the-art scores among open-source models of comparable size. Additionally, compared to similarly sized models, it demonstrates superior frontend and backend development capabilities. Click to see more: https://docs.z.ai

Try it
Agentic coding (multi-step repo work, debugging, refactoring): Treat the model as an autonomous coding agent, not a snippet generator. Explicitly require task decomposition and step-by-step execution, then a single consolidated result.
Long-context agent workflows (local or low-cost autonomous agents): Call out long-horizon consistency and context preservation. Instruct the model to retain earlier assumptions and decisions across turns.

Now that you know GLM-4.7-Flash works best when you give it a clear goal and let it reason through a bounded task, here's an example prompt that a product or engineering team might use to identify risks and propose mitigations:

You are a software reliability analyst for a mid-scale SaaS platform. Review recent incident reports, production logs, and customer issues to uncover edge-case failures outside normal usage (e.g., rare inputs, boundary conditions, timing/concurrency issues, config drift, or unexpected feature interactions). Prioritize low-frequency, high-impact risks that standard testing misses. Recommend minimal, low-cost fixes (validation, guardrails, fallback logic, or documentation). Deliver a concise executive summary with sections: Observed Edge Cases, Root Causes, User Impact, Recommended Lightweight Fixes, and Validation Steps.
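If you deploy GLM-4.7-Flash (or any chat model from the catalog) to a managed Foundry endpoint, a call like the sketch below sends that reliability-analysis prompt to it using the azure-ai-inference client. The endpoint URL and key environment variable names are placeholders; substitute the values from your own deployment.

```python
# Minimal sketch: send the reliability-analysis prompt to a deployed chat endpoint.
# Assumes `pip install azure-ai-inference`; endpoint and key names are placeholders.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["FOUNDRY_ENDPOINT_URL"],
    credential=AzureKeyCredential(os.environ["FOUNDRY_API_KEY"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a software reliability analyst for a mid-scale SaaS platform."),
        UserMessage(content=(
            "Review the attached incident summaries and recommend minimal, low-cost fixes. "
            "Structure the answer as: Observed Edge Cases, Root Causes, User Impact, "
            "Recommended Lightweight Fixes, Validation Steps."
        )),
    ],
    max_tokens=2048,
    temperature=0.2,
)

print(response.choices[0].message.content)
```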
Meta's Segment Anything 3 (SAM3)

Model Basics
Model name: facebook/sam3
Parameters / size: 0.9B
Primary task: Mask generation, Promptable Concept Segmentation (PCS)

Why this model matters
Why it's interesting: It handles a vastly larger set of open-vocabulary prompts than SAM 2 and unifies image and video segmentation capabilities. It includes a "SAM 3 Tracker" mode that acts as a drop-in replacement for SAM 2 workflows with improved performance.
Best-fit use cases: Open-vocabulary object detection, video object tracking, and automatic mask generation.
What's notable: Introduces Promptable Concept Segmentation (PCS), allowing users to find all matching objects (e.g., "dial") via a text prompt rather than just single instances.

Try it
This model enables users to identify specific objects within video footage and isolate them over extended periods. With just one line of code, it is possible to detect multiple similar objects simultaneously. The accompanying GIF demonstrates how SAM3 efficiently highlights players wearing white on the field as they appear and disappear from view. Additional examples are available in the repository: https://github.com/facebookresearch/sam3/blob/main/assets/player.gif

Image segmentation with concept prompts: Treat SAM 3 as a concept detector, not an interactive click tool. Use short, concrete noun-phrase concept prompts instead of describing the scene or asking questions. Example prompt: "yellow school bus" or "shipping containers". Avoid verbs or full sentences.
Video segmentation + object tracking: Specify the same concept prompt once, then apply it across the video sequence. Do not restate the prompt per frame. Let the model maintain identity continuity. Example: "person wearing a red jersey".
Hard-to-name or visually subtle objects: Use exemplar-based prompts (image region or box) when text alone is ambiguous. Optionally combine positive and negative exemplars to refine the concept. Avoid over-constraining with long descriptions.

Using the GIF above as a leading example, here is a prompt that shows how SAM 3 turns raw sports footage into structured, reusable data. By identifying and tracking players based on visual concepts like jersey color, sports leagues can turn tracked data into interactive experiences where automated player identification relays stats, fun facts, and more when built into a larger application. Here is a prompt that will allow you to start identifying specific players across video:

Act as a sports analytics operator analyzing football match footage. Segment and track all football players wearing blue jerseys across the video. Generate pixel-accurate segmentation masks for each player and assign persistent instance IDs that remain stable during camera movement, zoom, and player occlusion. Exclude referees, opposing team jerseys, sidelines, and crowd. Output frame-level masks and tracking metadata suitable for overlays, player statistics, and downstream analytics pipelines.

MiniMax AI's MiniMax-M2.1

Model Basics
Model name: MiniMaxAI/MiniMax-M2.1
Parameters / size: 229B total / 10B active
Default settings: 200,000 max new tokens
Primary task: Agentic and coding

Why this model matters
Why it's interesting: It is optimized for robustness in coding, tool use, and long-horizon planning, outperforming Claude Sonnet 4.5 in multilingual scenarios. It excels in full-stack application development, capable of architecting apps "from zero to one".
While previous coding models focused on Python optimization, M2.1 brings enhanced capabilities in Rust, Java, Golang, C++, Kotlin, Objective-C, TypeScript, JavaScript, and other languages. The model delivers exceptional stability across various coding agent frameworks.
Best-fit use cases: Full-stack application development, long-horizon agentic and tool-using workflows, and multilingual coding tasks.
What's notable: The release of open-source weights for M2.1 delivers a massive leap over M2 on software engineering leaderboards. https://www.minimax.io/

Try it
End-to-end agentic coding (multi-file edits, run-fix loops): Treat the model as an autonomous coding agent, not a snippet generator. Explicitly require task decomposition and step-by-step execution, then a single consolidated result.
Long-horizon tool-using agents (shell, browser, Python): Explicitly request stepwise planning and sequential tool use. M2.1's interleaved thinking and improved instruction-constraint handling are designed for complex, multi-step analytical tasks that require evidence tracking and coherent synthesis, not conversational back-and-forth.
Long-context reasoning & analysis (large documents / logs): Declare the scope and desired output structure up front. MiniMax-M2.1 performs best when the objective and final artifact are clear, allowing it to manage long context and maintain coherence.

Because MiniMax-M2.1 is designed to act as a long-horizon analytical agent, it shines when you give it a clear end goal and let it work through large volumes of information. Here's a prompt a risk or compliance team could use in practice:

You are a financial risk analysis agent. Analyze the following transaction logs and compliance policy documents to identify potential regulatory violations and systemic risk patterns. Plan your approach before executing. Work through the data step by step, referencing evidence where relevant. Deliver a final report with the following sections: Key Risk Patterns Identified, Supporting Evidence, Potential Regulatory Impact, Recommended Mitigations. Your response should be a complete, executive-ready report, not a conversational draft.

Getting started

You can deploy open-source Hugging Face models directly in Microsoft Foundry by browsing the Hugging Face collection in the Foundry model catalog and deploying to managed endpoints in just a few clicks. You can also start from the Hugging Face Hub: select any supported model and choose "Deploy on Microsoft Foundry", which brings you straight into Azure with secure, scalable inference already configured. Learn how to discover and deploy models in the Microsoft Foundry documentation.

Follow along with the Model Mondays series and access the GitHub repository to stay up to date on the latest.
Read the Hugging Face on Azure docs.
Learn about one-click deployments from the Hugging Face Hub on Microsoft Foundry.
Explore models in Microsoft Foundry.
OptiMind: A small language model with optimization expertise

Turning a real-world decision problem into a solver-ready optimization model can take days—sometimes weeks—even for experienced teams. The hardest part is often not solving the problem; it's translating business intent into precise mathematical objectives, constraints, and variables. OptiMind is designed to remove that bottleneck. This optimization-aware language model translates natural-language problem descriptions into solver-ready mathematical formulations, helping organizations move from ideas to decisions faster. Now available in public preview as an experimental model in Microsoft Foundry, OptiMind targets one of the more expertise-intensive steps in modern optimization workflows.

Addressing the Optimization Bottleneck
Mathematical optimization underpins many enterprise-critical decisions—from designing supply chains and scheduling workforces to structuring financial portfolios and deploying networks. While today's solvers can handle enormous and complex problem instances, formulating those problems remains a major obstacle. Defining objectives, constraints, and decision variables is an expertise-driven process that often takes days or weeks, even when the underlying business problem is well understood. OptiMind addresses this gap by automating and accelerating formulation. Developed by Microsoft Research, OptiMind transforms what was once a slow, error-prone modeling task into a streamlined, repeatable step—freeing teams to focus on decision quality rather than syntax.

What makes OptiMind different?
OptiMind is not just a language model but a specialized system built for real-world optimization tasks. Unlike general-purpose large language models adapted for optimization through prompting, OptiMind is purpose-built for mixed integer linear programming (MILP), and its design reflects this singular focus. At inference time, OptiMind follows a multi-stage process:
Problem classification (e.g., scheduling, routing, network design)
Hint retrieval tailored to the identified problem class
Solution generation in solver-compatible formats such as GurobiPy (a small illustration follows at the end of this section)
Optional self-correction, where multiple candidate formulations are generated and validated
This design can improve reliability without relying on agentic orchestration or multiple large models. In internal evaluations on cleaned public benchmarks—including IndustryOR, Mamo-Complex, and OptMATH—OptiMind demonstrated higher formulation accuracy than similarly sized open models and competitive performance relative to significantly larger systems. OptiMind improved accuracy by approximately 10 percent over the base model, and compared with open-source models under 32 billion parameters it matched or exceeded benchmark performance. For more information on the model, please read the official research blog or the technical paper for OptiMind.
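To make "solver-ready GurobiPy formulation" concrete, here is a small hand-written example of the kind of MILP code such a formulation amounts to. The production-planning scenario and all numbers are invented for illustration; this is not OptiMind output, and running it requires a local Gurobi installation (pip install gurobipy).

```python
# Illustrative MILP in GurobiPy: choose integer production quantities for two products
# to maximize profit under labor and material constraints. All data is made up.
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("production_planning")

# Decision variables: units to produce of each product.
x = m.addVar(vtype=GRB.INTEGER, name="product_a")
y = m.addVar(vtype=GRB.INTEGER, name="product_b")

# Objective: maximize total profit (40 per unit of A, 30 per unit of B).
m.setObjective(40 * x + 30 * y, GRB.MAXIMIZE)

# Constraints: limited labor hours and raw material, non-negative production.
m.addConstr(2 * x + 1 * y <= 100, name="labor_hours")
m.addConstr(1 * x + 3 * y <= 90, name="raw_material")
m.addConstr(x >= 0, name="nonneg_a")
m.addConstr(y >= 0, name="nonneg_b")

m.optimize()

if m.status == GRB.OPTIMAL:
    print(f"product_a={x.X:.0f}, product_b={y.X:.0f}, profit={m.ObjVal:.0f}")
```

OptiMind's value is in producing formulations like this automatically from a plain-language problem description, so the remaining human effort shifts to validating the objectives and constraints rather than writing them from scratch.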
Practical use cases: Unlocking efficiency across domains
OptiMind is especially valuable where modeling effort—not solver capability—is the primary bottleneck. Typical use cases include:
Supply Chain Network Design: Faster formulation of multi-period network models and logistics flows
Manufacturing and Workforce Scheduling: Easier capacity planning under complex operational constraints
Logistics and Routing Optimization: Rapid modeling that captures real-world constraints and variability
Financial Portfolio Optimization: More efficient exploration of portfolios under regulatory and market constraints
By reducing the time and expertise required to move from problem description to validated model, OptiMind helps teams reach actionable decisions faster and with greater confidence.

Getting started
OptiMind is available today as an experimental model, and Microsoft Research welcomes feedback from practitioners and enterprise teams. Next steps:
Explore the research details: Read more about the model on Foundry Labs and the technical paper on arXiv
Try the model: Access OptiMind through Microsoft Foundry
Test sample code: Available in the OptiMind GitHub repository
Take the next step in optimization innovation with OptiMind—empowering faster, more accurate, and cost-effective problem solving for the future of decision intelligence.
The Future of AI: From Noise to Insight - An AI Agent for Customer Feedback

This post explores how Microsoft's AI Futures team built a multi-agent system to transform scattered customer feedback into actionable insights. The solution aggregates feedback from multiple channels, uses advanced language models to cluster themes, summarize content, and identify sentiment, and delivers prioritized insights directly in Microsoft Teams. With human-in-the-loop safeguards, the system accelerates triage, prioritization, and follow-ups while maintaining compliance and traceability. Future enhancements include richer automation, trend visualization, and expanded feedback sources.
The Future of AI: The paradigm shifts in Generative AI Operations

Dive into the transformative world of Generative AI Operations (GenAIOps) with Microsoft Azure. Discover how businesses are overcoming the challenges of deploying and scaling generative AI applications. Learn about the innovative tools and services Azure AI offers, and how they empower developers to create high-quality, scalable AI solutions. Explore the paradigm shift from MLOps to GenAIOps and see how continuous improvement practices ensure your AI applications remain cutting-edge. Join us on this journey to harness the full potential of generative AI and drive operational excellence.
The Future of AI: GraphRAG – A better way to query interlinked documents

All language models are trained on a huge corpus of data. They have some world knowledge and can answer a range of questions about different things. However, due to their probabilistic nature and incomplete world knowledge, especially when it comes to different niches and domains, it's possible to receive incorrect answers. Retrieval Augmented Generation (RAG) helps augment world knowledge with enterprise-specific references, reducing inaccuracies and inconsistencies in the generated text.

How RAG works and improves LLM output
In RAG, the corpus of text relevant to your domain is converted into embeddings. Embeddings are created by translating documents into a mathematical form based on their traits, factors, and categories. The resulting vector representation is a long sequence of numbers. The distance between two vectors indicates how closely related they are. Similar objects are positioned closer together in a multi-dimensional embedding space, while less similar objects are positioned farther apart. As the term signifies, RAG consists of three steps: first, the vectors relevant to the query are retrieved (typically from a vector database); then the prompt sent to the LLM is augmented with this relevant contextual information; and finally the LLM generates an answer based on this context and query. Using the RAG approach, developers can extend the factual grounding of the model, improve the relevance, accuracy, and quality of the answers generated by the LLM, and in many cases refer back to the document snippets which were used in the generation of the answer. RAG has emerged as a powerful approach that combines the strengths of information retrieval and generative models.

How GraphRAG builds upon the RAG approach
Though RAG improves on the LLM's generative capabilities, RAG does sometimes struggle to make sense of concepts and the relationships between them when they are spread across documents. Also, as the complexity of data structures grows, there is a need for more advanced systems capable of handling interconnected, multi-faceted information. This is where GraphRAG comes into play. GraphRAG is an advanced version of RAG that utilizes graph-based retrieval mechanisms, enhancing the generation process by capturing richer, more contextual information. GraphRAG improves over vector RAG in the following ways.

Enhanced Contextual Understanding with Graphs
RAG traditionally uses a flat retrieval system (through embeddings in a vector DB), where it retrieves documents (and relevant document fragments) from a knowledge base based on their relevance to a query. The generative model then uses these retrieved documents to generate a response. While effective, this method can struggle when information is spread across multiple, interconnected documents. GraphRAG, on the other hand, uses graph-based retrieval, which allows it to connect pieces of information across a web of nodes. Each node represents an entity or a concept, and the edges represent the relationships between them. Examples of this could be relations like "is part of," "is cousin of," or "is made of." This structured approach enables GraphRAG to extract and utilize more nuanced, multi-layered contextual information, resulting in more coherent and accurate responses.
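As a toy illustration of the node-and-edge view described above, the sketch below builds a small entity graph with networkx and answers a multi-hop question by walking relationships. The entities and relations are invented examples in the spirit of the ones mentioned in this section, not part of any GraphRAG index.

```python
# Toy entity graph: nodes are entities, edges carry a named relation.
# Walking edges answers multi-hop questions that flat retrieval can miss.
import networkx as nx

G = nx.DiGraph()
G.add_edge("pandemic", "shipping delays", relation="causes")
G.add_edge("shipping delays", "warehouse stock", relation="reduces")
G.add_edge("warehouse stock", "product availability", relation="determines")

def explain_path(graph: nx.DiGraph, source: str, target: str) -> str:
    """Return a readable chain of relations connecting two entities."""
    path = nx.shortest_path(graph, source, target)
    steps = [
        f"{u} --{graph.edges[u, v]['relation']}--> {v}"
        for u, v in zip(path, path[1:])
    ]
    return "; ".join(steps)

# Multi-hop answer to "How does a pandemic impact product availability?"
print(explain_path(G, "pandemic", "product availability"))
```

A GraphRAG index does this at much larger scale, extracting the entities and relations automatically from text and combining the traversal with generated summaries.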
Improved Knowledge Integration
In RAG, the generative model can sometimes produce fragmented or inconsistent outputs when the retrieved documents lack cohesion because of the way the chunking process and embedding vectors work. GraphRAG solves this by using graph databases that can model complex relationships. Graph databases store both the entities, represented by nodes, and the relationships connecting them. They make it possible to traverse nodes using the relationships between them. By understanding the connections between different pieces of information, GraphRAG can integrate knowledge from diverse sources and provide a more unified and accurate response. For example, if a question involves multiple entities and their interactions (e.g., "How does the supply chain impact product availability during a pandemic?"), GraphRAG can navigate through the interconnected data points, understand their dependencies, and generate a comprehensive answer. Another good example is compliance information spread across related documents and references to concepts in compliance. Let's assume you are opening a restaurant and want to know the different regulations needed to open a kitchen. Regulations can span fire safety, hygiene, food storage, ingredient sourcing, insurance, and labour guidelines. GraphRAG can work in such a scenario to collect all the references, traversing the relationships between them, giving users a coherent answer spanning a collection of documents.

Efficiency and Scalability
Another key metric, especially for large, interconnected datasets, is efficiency. RAG requires scanning through multiple documents for relevant content, which can be resource-intensive, especially with vast datasets. GraphRAG's graph-based structure can efficiently traverse the data by focusing on relevant nodes and relationships, reducing computational overhead. Using GraphRAG intelligently, developers can combine graph traversals of knowledge graphs with vector search to reduce computation and memory overheads. This enables better, more intelligent indexing than traditional approaches. Moreover, graphs can be scaled horizontally, allowing knowledge bases to expand without significantly increasing retrieval times. This makes GraphRAG suitable for enterprise-level applications where scalability and performance are critical. Also, when an organization spans many different vertical domains, this helps focus the search, so you gain in both scalability and performance.

GraphRAG Implementation
Now that we know the benefits of GraphRAG, let's implement an approach using GraphRAG.

Setup
For this demonstration, we will use GPT-4o as the LLM in Azure AI Studio and text-embedding-3-small as the embedding model to generate embeddings on the platform. We will use the open-source lancedb to store the embeddings and retrieve them for GraphRAG. There are many other models available via the Azure AI model catalog, which has a variety of LLMs, SLMs, and embedding models. Let's now create the deployments for both these models using Azure AI Studio. Next, let's open a session on WSL to create a virtual environment for Python. We will be using the Python package for GraphRAG for this demo.

# Create a graphrag directory and change directory to try out this example
$ mkdir graphrag
$ cd graphrag/

# Install the virtualenv package, create a virtual environment called venv_name,
# and change directory to it.
# We create a virtual environment so we can safely install and experiment
# with packages without changing the global Python environment
$ sudo apt-get install python3-virtualenv
$ virtualenv -p python3 venv_name
$ cd venv_name/

# Activate the virtual environment
$ source bin/activate

# Next, install the Python GraphRAG package in the virtual environment we
# created. This will download and install a number of packages and may
# take a little time. Amongst other things, it will install the open-source
# DataShaper data processing library that allows users to declaratively
# express data pipelines, schemas, and related assets using well-defined
# schemas
$ pip install graphrag

For the purposes of this demo, we will use the text of the Mahabharata. The Mahabharata is an epic Indian classical text that is divided into 18 chapters with a multitude of characters. It narrates the events that lead to the Kurukshetra war between two warring clans of cousins – the Kauravas and the Pandavas – and the aftermath of the war. There are more than 100 human characters in the text who interact with each other and are also related to each other in some way. You can read about the epic text here and read about the many characters. We will use one of the translations of the epic text from Project Gutenberg, which is in the public domain.

# Create the directory for input text, download the file using curl, and
# store it in the input directory. Though this is one document, it consists of
# many parts. The word count (634955) and line count (58868) in the
# example below can be seen using the wc command-line utility.
$ mkdir -p ./mahabharata/input
$ curl https://www.gutenberg.org/cache/epub/15474/pg15474.txt -o ./mahabharata/input/book.txt
$ wc input/book.txt
 58868 634955 3752942 input/book.txt

# Next, we will initialize the environment for GraphRAG using the command:
$ python -m graphrag.index --init --root ./mahabharata/

This will create a .env file and a settings.yaml file in the mahabharata directory. .env contains the environment variables required to run the GraphRAG pipeline. If you open the file, you will see a single environment variable defined, GRAPHRAG_API_KEY=<API_KEY>. This is the API key for the OpenAI API or Azure OpenAI Service endpoint; replace the placeholder with your key. API keys and other settings can be found in Azure AI Studio.

In the llm section of settings.yaml, configure the following settings:

llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: azure_openai_chat # or openai_chat
  model: gpt-4o
  model_supports_json: true # recommended if this is available for your model.
  api_base: https://<your_instance_details>.openai.azure.com
  api_version: 2024-08-01-preview # please replace with your version
  deployment_name: gpt-4o

In the embeddings section of settings.yaml, configure the following settings:

llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: azure_openai_embedding
  model: text-embedding-3-small
  api_base: https://<your_instance_details>.openai.azure.com
  api_version: 2024-08-01-preview # please replace with your version
  deployment_name: text-embedding-3-small

Next, run the indexing process as a precursor to creating the embeddings. This will create a log to track the indexing process. It will start the chunking process, create the entities, figure out the relationships between different entities, generate graph relationships between the entities and, finally, after multiple processing steps, create the final documents to be stored for retrieval in lanceDB.
If the process completes successfully, a message will appear which says, "All workflows completed successfully." Note that there will be many warnings about deprecation which can be safely ignored for now.

$ python -m graphrag.index --root ./mahabharata/

Now that the embeddings have been created successfully, let's run a couple of queries to see if we can get answers about the characters and the relationships between them.

$ python -m graphrag.query --root ./mahabharata --method global "Who is Duryodhana and How is he related to Arjuna?"

creating llm client with {'api_key': 'REDACTED,len=32', 'type': "azure_openai_chat", 'model': 'gpt-4o', 'max_tokens': 4000, 'temperature': 0.0, 'top_p': 1.0, 'n': 1, 'request_timeout': 180.0, 'api_base': 'https://graphragdemo-inst.openai.azure.com', 'api_version': '2024-08-01-preview', 'organization': None, 'proxy': None, 'cognitive_services_endpoint': None, 'deployment_name': 'gpt-4o', 'model_supports_json': True, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 25}

SUCCESS: Global Search Response:

### Duryodhana: A Central Figure in the Mahabharata
Duryodhana is a pivotal character in the Indian epic, the Mahabharata. He is the eldest son of Dhritarashtra and Gandhari, making him the leader of the Kauravas, a group of a hundred brothers [Data: Reports (408, 397, 400, 275, +more)]. Duryodhana is known for his deep-seated enmity towards the Pandavas, particularly Arjuna, and his significant role in the Kurukshetra War, where he stands as a central antagonist [Data: Reports (408, 397, 569, 216, +more)].

### Relationship with Arjuna
Duryodhana and Arjuna are first cousins. Duryodhana is the son of Dhritarashtra, while Arjuna is the son of Pandu. Dhritarashtra and Pandu are brothers, making Duryodhana and Arjuna part of the same Kuru dynasty [Data: Reports (255, 398, 285, 177, 202, +more)]. This familial connection places them in direct conflict over the throne of Hastinapura, leading to the epic battle of Kurukshetra [Data: Reports (399, 216, 406, 440, +more)].

### Rivalry and Conflict
The relationship between Duryodhana and Arjuna is marked by intense rivalry and conflict. Duryodhana's ambition to rule Hastinapura and his enmity towards the Pandavas drive much of the narrative in the Mahabharata. This enmity is particularly highlighted during the Kurukshetra War, where Duryodhana leads the Kauravas against Arjuna and the Pandavas [Data: Reports (408, 397, 273, 202, +more)]. Their rivalry is a central theme in the epic, culminating in numerous battles and deceitful plots, including the infamous game of dice that led to the Pandavas' exile [Data: Reports (398, 255, 400, 256, +more)].

### Conclusion
Duryodhana's character is defined by his leadership of the Kauravas and his antagonistic relationship with the Pandavas, especially Arjuna. Their familial ties and subsequent rivalry form the crux of the Mahabharata's narrative, leading to the monumental conflict of the Kurukshetra War [Data: Reports (408, 397, 569, 216, +more)].

Let's try another query for another character called Karna.

$ python -m graphrag.query --root ./mahabharata --method global "Who is Karna and what are his main relationships?"
creating llm client with {'api_key': 'REDACTED,len=32', 'type': "azure_openai_chat", 'model': 'gpt-4o', 'max_tokens': 4000, 'temperature': 0.0, 'top_p': 1.0, 'n': 1, 'request_timeout': 180.0, 'api_base': 'https://graphragdemo-inst.openai.azure.com', 'api_version': '2024-08-01-preview', 'organization': None, 'proxy': None, 'cognitive_services_endpoint': None, 'deployment_name': 'gpt-4o', 'model_supports_json': True, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 25}

SUCCESS: Global Search Response:

### Karna: A Key Figure in the Mahabharata
Karna, also known as the Son of Radha, Vasusena, and Radheya, is a pivotal character in the Indian epic, the Mahabharata. He is renowned for his exceptional martial prowess, unwavering loyalty, and tragic life. Born to Kunti and the Sun God, Surya, Karna's divine heritage endowed him with extraordinary abilities, including natural armor and ear-rings that made him nearly invincible [Data: Reports (373, 198, 465, 502, 155, +more)].

### Key Relationships

#### **Duryodhana**
Karna's most significant relationship is with Duryodhana, the leader of the Kauravas. Duryodhana befriends Karna and installs him as the king of Anga, solidifying their bond. This relationship is marked by deep loyalty and mutual support, with Karna vowing to slay Arjuna and supporting Duryodhana in various schemes against the Pandavas [Data: Reports (390, 397, 373, 198, 465, +more)]. Karna's loyalty to Duryodhana is a defining aspect of his character, influencing many of his actions and decisions throughout the epic [Data: Reports (447, 440, 391, 383, 302)].

#### **Kunti**
Karna's relationship with his mother, Kunti, is complex and filled with emotional tension. Kunti reveals to Karna that he is her son, born before her marriage to Pandu, which adds a layer of tragedy to his character. Despite this revelation, Karna chooses to remain loyal to Duryodhana and fight against his half-brothers, the Pandavas [Data: Reports (373, 198, 465, 502, 155, +more)].

#### **Arjuna**
Karna's rivalry with Arjuna, one of the Pandavas, is a central theme in the Mahabharata. Both warriors are considered equals in skill and valor, and their final confrontation in the Kurukshetra war is one of the epic's most significant events. Karna's enmity with Arjuna is fueled by his loyalty to Duryodhana and his desire to prove his worth [Data: Reports (373, 198, 465, 502, 155, +more)].

#### **Surya**
Karna's divine father, Surya, plays a crucial role in his life, often providing guidance and warnings. For instance, Surya forewarns Karna about Indra's intentions to obtain his ear-rings and coat of mail, which are sources of his invincibility [Data: Reports (518, 547, 391, 358, 371)].

#### **Indra**
Karna's interactions with Indra, the king of the gods, are also notable. Indra, disguised as a Brahmin, tricks Karna into giving up his ear-rings and armor, which were his sources of invincibility. In return, Indra grants Karna a powerful weapon, the Sakti, which he can use only once [Data: Reports (302, 394)].

### Conclusion
Karna's life is marked by his unwavering loyalty to Duryodhana, his complex relationships with his mother Kunti and his half-brother Arjuna, and his divine heritage. These relationships shape his actions and decisions, making him one of the most compelling and tragic figures in the Mahabharata [Data: Reports (390, 397, 373, 198, 465, +more)].
GraphRAG is able to piece together the relevant bits from different parts of the chapters to give us the relationships between the different characters, with references (data reports or chunks). In some cases, it can do this over many different chunks of data across a large text. This is a huge improvement over the baseline performance of large language models and baseline vector RAG. In a recent benchmark paper, it was found that knowledge graphs can improve the accuracy of answers up to 3x (54.2% vs 16.7%). GraphRAG can also be used in applications to make them more scalable and accurate, especially for domain-specific applications. Also, if you are working with many documents, such as in a data lake, or running this in production, I would suggest using Azure AI Search as the vector store. The GraphRAG solution accelerator and more information about GraphRAG and Azure AI Studio are available in the resources below:

Resources:
Learn more about GraphRAG
Build with Azure AI Studio – https://ai.azure.com
Review the Azure AI Studio documentation - https://learn.microsoft.com/en-us/azure/ai-studio/
Access Azure AI Studio Learn modules - https://learn.microsoft.com/en-us/training/modules/introduction-to-azure-ai-studio/
Access the Fundamentals of Generative AI learning course - https://learn.microsoft.com/en-us/training/modules/fundamentals-generative-ai/
Access the GraphRAG GitHub repository - https://github.com/microsoft/graphrag/
Use the GraphRAG Solution accelerator - https://github.com/Azure-Samples/graphrag-accelerator
The Future of AI: Unleashing the Potential of AI Translation

The Co-op Translator automates the translation of markdown files and text within images using Azure AI Foundry. This open-source tool leverages advanced Large Language Model (LLM) technology through Azure OpenAI Service and Azure AI Vision to provide high-quality translations. Designed to break language barriers, the Co-op Translator features an easy-to-use command-line interface and Python package, making technical content globally accessible with minimal manual effort.
The Future of AI: How Lovable.dev and Azure OpenAI Accelerate Apps that Change Lives

Discover how Charles Elwood, a Microsoft AI MVP and TEDx Speaker, leverages Lovable.dev and Azure OpenAI to create impactful AI solutions. From automating expense reports to restoring voices, translating gestures to speech, and visualizing public health data, Charles's innovations are transforming lives and democratizing technology. Follow his journey to learn more about AI for good.
The Future of AI: Developing Code Assist – a Multi-Agent Tool

Discover how Code Assist, created with Azure AI Foundry Agent Service, uses AI agents to automate code documentation, generate business-ready slides, and detect security risks in large codebases—boosting developer productivity and project clarity.