Introducing langchain-azure-storage: Azure Storage integrations for LangChain
We're excited to introduce langchain-azure-storage, the first official Azure Storage integration package built by Microsoft for LangChain 1.0. As part of its launch, we've built a new Azure Blob Storage document loader (currently in public preview) that improves upon prior LangChain community implementations. This new loader unifies both blob- and container-level access, simplifying loader integration. More importantly, it offers enhanced security through default OAuth 2.0 authentication, supports reliably loading millions to billions of documents through efficient memory utilization, and allows pluggable parsing, so you can leverage other document loaders to parse specific file formats.

What are LangChain document loaders?

A typical Retrieval-Augmented Generation (RAG) pipeline follows these main steps:

1. Collect source content (PDFs, DOCX, Markdown, CSVs), often stored in Azure Blob Storage.
2. Parse into text and associated metadata (i.e., represented as LangChain Document objects).
3. Chunk and embed those documents, then store them in a vector store (e.g., Azure AI Search, Postgres pgvector, etc.).
4. At query time, retrieve the most relevant chunks and feed them to an LLM as grounded context.

LangChain document loaders make steps 1-2 turnkey and consistent so the rest of the stack (splitters, vector stores, retrievers) "just works" (a minimal end-to-end sketch follows at the end of this overview). See this LangChain RAG tutorial for a full example of these steps when building a RAG application in LangChain.

How can the Azure Blob Storage document loader help?

The langchain-azure-storage package offers the AzureBlobStorageLoader, a document loader that simplifies retrieving documents stored in Azure Blob Storage for use in a LangChain RAG application. Key benefits of the AzureBlobStorageLoader include:

- Flexible loading of Azure Storage blobs to LangChain Document objects. You can load blobs as documents from an entire container, a specific prefix within a container, or by blob names. Each loaded document corresponds 1:1 to a blob in the container.
- Lazy loading support for improved memory efficiency when dealing with large document sets. Documents can now be loaded one at a time as you iterate over them instead of all at once.
- Automatic use of DefaultAzureCredential to enable seamless OAuth 2.0 authentication across various environments, from local development to Azure-hosted services. You can also explicitly pass your own credential (e.g., ManagedIdentityCredential, SAS token), as sketched after the usage examples below.
- Pluggable parsing. Easily customize how documents are parsed by providing your own LangChain document loader to parse downloaded blob content.
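To make the pipeline steps above concrete, here's a minimal sketch that feeds lazily loaded blobs into a text splitter. It assumes the langchain-text-splitters package is installed; the embedding and vector-store step is left as a placeholder:

from langchain_azure_storage.document_loaders import AzureBlobStorageLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>"
)
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

for doc in loader.lazy_load():  # steps 1-2: collect and parse source content
    chunks = splitter.split_documents([doc])  # step 3: chunk into smaller documents
    # step 3 (continued): embed `chunks` and upsert them into your vector store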
Using the Azure Blob Storage document loader

Installation

To install the langchain-azure-storage package, run:

pip install langchain-azure-storage

Loading documents from a container

To load all blobs from an Azure Blob Storage container as LangChain Document objects, instantiate the AzureBlobStorageLoader with the Azure Storage account URL and container name:

from langchain_azure_storage.document_loaders import AzureBlobStorageLoader

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>"
)

# lazy_load() yields one Document per blob for all blobs in the container
for doc in loader.lazy_load():
    print(doc.metadata["source"])  # The "source" metadata contains the full URL of the blob
    print(doc.page_content)  # The page_content contains the blob's content decoded as UTF-8 text

Loading documents by blob names

To load only specific blobs as LangChain Document objects, you can additionally provide a list of blob names:

from langchain_azure_storage.document_loaders import AzureBlobStorageLoader

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>",
    ["<blob-name-1>", "<blob-name-2>"]
)

# lazy_load() yields one Document per blob for only the specified blobs
for doc in loader.lazy_load():
    print(doc.metadata["source"])
    print(doc.page_content)

Pluggable parsing

By default, loaded Document objects contain the blob's UTF-8 decoded content. To parse non-UTF-8 content (e.g., PDFs, DOCX, etc.) or chunk blob content into smaller documents, provide a LangChain document loader via the loader_factory parameter. When loader_factory is provided, the AzureBlobStorageLoader processes each blob with the following steps:

1. Downloads the blob to a new temporary file
2. Passes the temporary file path to the loader_factory callable to instantiate a document loader
3. Uses that loader to parse the file and yield Document objects
4. Cleans up the temporary file

For example, the snippet below parses PDF documents with the PyPDFLoader from the langchain-community package:

from langchain_azure_storage.document_loaders import AzureBlobStorageLoader
from langchain_community.document_loaders import PyPDFLoader  # Requires langchain-community and pypdf packages

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>",
    prefix="pdfs/",  # Only load blobs whose names start with "pdfs/"
    loader_factory=PyPDFLoader  # PyPDFLoader will parse each blob as a PDF
)

# Each blob is downloaded to a temporary file and parsed by a PyPDFLoader instance
for doc in loader.lazy_load():
    print(doc.page_content)  # Content parsed by PyPDFLoader (yields one Document per page in the PDF)

This file path-based interface allows you to use any LangChain document loader that accepts a local file path as input, giving you access to a wide range of parsers for different file formats.
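As noted earlier, authentication defaults to DefaultAzureCredential. If you need to pass an explicit credential instead, a sketch like the following should work; note that the credential keyword argument name is our assumption based on the package's public-preview docs, so check the AzureBlobStorageLoader reference before relying on it:

from azure.identity import ManagedIdentityCredential
from langchain_azure_storage.document_loaders import AzureBlobStorageLoader

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>",
    credential=ManagedIdentityCredential()  # assumed parameter name; SAS-style credentials are also described as supported
)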
Migrating from community document loaders to langchain-azure-storage

If you're currently using AzureBlobStorageContainerLoader or AzureBlobStorageFileLoader from the langchain-community package, the new AzureBlobStorageLoader provides an improved alternative. This section provides step-by-step guidance for migrating to the new loader.

Steps to migrate

To migrate to the new Azure Storage document loader, make the following changes:

1. Depend on the langchain-azure-storage package.
2. Update import statements from langchain_community.document_loaders to langchain_azure_storage.document_loaders.
3. Change class names from AzureBlobStorageFileLoader and AzureBlobStorageContainerLoader to AzureBlobStorageLoader.
4. Update document loader constructor calls to use an account URL instead of a connection string, and specify UnstructuredLoader as the loader_factory to continue using Unstructured for parsing documents.
5. Enable Microsoft Entra ID authentication in your environment (e.g., run az login or configure managed identity) instead of using connection string authentication.

Migration samples

The snippets below show usage patterns before and after migrating from langchain-community to langchain-azure-storage:

Before migration

from langchain_community.document_loaders import AzureBlobStorageContainerLoader, AzureBlobStorageFileLoader

container_loader = AzureBlobStorageContainerLoader(
    "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>;EndpointSuffix=core.windows.net",
    "<container>",
)
file_loader = AzureBlobStorageFileLoader(
    "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>;EndpointSuffix=core.windows.net",
    "<container>",
    "<blob>"
)

After migration

from langchain_azure_storage.document_loaders import AzureBlobStorageLoader
from langchain_unstructured import UnstructuredLoader  # Requires langchain-unstructured and unstructured packages

container_loader = AzureBlobStorageLoader(
    "https://<account>.blob.core.windows.net",
    "<container>",
    loader_factory=UnstructuredLoader  # Only needed if continuing to use Unstructured for parsing
)
file_loader = AzureBlobStorageLoader(
    "https://<account>.blob.core.windows.net",
    "<container>",
    "<blob>",
    loader_factory=UnstructuredLoader  # Only needed if continuing to use Unstructured for parsing
)

What's next?

We're excited for you to try the new Azure Blob Storage document loader and would love to hear your feedback! Here are some ways you can help shape the future of langchain-azure-storage:

- Show support for interface stabilization: the document loader is currently in public preview and the interface may change in future versions based on feedback. If you'd like to see the current interface marked as stable, upvote the proposal PR to show your support.
- Report issues or suggest improvements: found a bug or have an idea to make the document loaders better? File an issue on our GitHub repository.
- Propose new LangChain integrations: interested in other ways to use Azure Storage with LangChain (e.g., checkpointing for agents, persistent memory stores, retriever implementations)? Create a feature request or write to us to let us know.

Your input is invaluable in making langchain-azure-storage better for the entire community!

Resources

- langchain-azure GitHub repository
- langchain-azure-storage PyPI package
- AzureBlobStorageLoader usage guide
- AzureBlobStorageLoader documentation reference
Level up your Python + AI skills with our complete series

We've just wrapped up our live series on Python + AI, a comprehensive nine-part journey diving deep into how to use generative AI models from Python. The series introduced multiple types of models, including LLMs, embedding models, and vision models. We dug into popular techniques like RAG, tool calling, and structured outputs. We assessed AI quality and safety using automated evaluations and red-teaming. Finally, we developed AI agents using popular Python agent frameworks and explored the new Model Context Protocol (MCP).

To help you apply what you've learned, all of our code examples work with GitHub Models, a service that provides free models to every GitHub account holder for experimentation and education. Even if you missed the live series, you can still access all the material using the links below! If you're an instructor, feel free to use the slides and code examples in your own classes. If you're a Spanish speaker, check out the Spanish version of the series.

Python + AI: Large Language Models

📺 Watch recording

In this session, we explore Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We experiment with prompt engineering and few-shot examples to improve outputs. We also demonstrate how to build a full-stack app powered by LLMs and explain the importance of concurrency and streaming for user-facing AI apps.

Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Vector embeddings

📺 Watch recording

In our second session, we dive into a different type of model: the vector embedding model. A vector embedding is a way to encode text or images as an array of floating-point numbers. Vector embeddings enable similarity search across many types of content. In this session, we explore different vector embedding models, such as the OpenAI text-embedding-3 series, through both visualizations and Python code. We compare distance metrics, use quantization to reduce vector size, and experiment with multimodal embedding models.

Slides for this session
Code repository with examples: vector-embedding-demos

Python + AI: Retrieval Augmented Generation

📺 Watch recording

In our third session, we explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that provides context to the LLM, enabling it to deliver well-grounded answers for a particular domain. The RAG approach works with many types of data sources, including CSVs, webpages, documents, and databases. In this session, we walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.

Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Vision models

📺 Watch recording

Our fourth session is all about vision models! Vision models are LLMs that can accept both text and images, such as GPT-4o and GPT-4o mini. You can use these models for image captioning, data extraction, question answering, classification, and more! We use Python to send images to vision models, build a basic chat-with-images app, and create a multimodal search engine.

Slides for this session
Code repository with examples: openai-chat-vision-quickstart

Python + AI: Structured outputs

📺 Watch recording

In our fifth session, we discover how to get LLMs to output structured responses that adhere to a schema. In Python, all you need to do is define a Pydantic BaseModel to get validated output that perfectly meets your needs. We focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples demonstrate the many ways you can use structured responses, such as entity extraction, classification, and agentic workflows. A minimal sketch of this technique follows.

Slides for this session
Code repository with examples: python-openai-demos
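Here's a minimal sketch of that structured outputs mode, using the OpenAI Python SDK's parse helper. The model name and prompt are placeholders; with GitHub Models, you'd point base_url and api_key at that service instead:

from openai import OpenAI
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

client = OpenAI()  # assumes OPENAI_API_KEY is set; adjust base_url/api_key for other providers

completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Alice and Bob meet for a design review on Friday."}],
    response_format=CalendarEvent,  # the Pydantic model defines the schema
)
event = completion.choices[0].message.parsed  # a validated CalendarEvent instance
print(event)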
Python + AI: Quality and safety

📺 Watch recording

This session covers a crucial topic: how to use AI safely and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. We focus on Azure tools that make it easier to deploy safe AI systems into production. We demonstrate how to configure the Azure AI Content Safety system when working with Azure AI models and how to handle errors in Python code. Then we use the Azure AI Evaluation SDK to evaluate the safety and quality of output from your LLM.

Slides for this session
Code repository with examples: ai-quality-safety-demos

Python + AI: Tool calling

📺 Watch recording

In the final stretch of the series, we focus on the technologies needed to build AI agents, starting with the foundation: tool calling (also known as function calling). We define tool call specifications using both JSON schema and Python function definitions, then send these definitions to the LLM. We demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. Understanding tool calling is absolutely essential before diving into agents, so don't skip over this foundational session.

Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Agents

📺 Watch recording

In the penultimate session, we build AI agents! We use Python AI agent frameworks such as the new agent-framework from Microsoft and the popular LangGraph framework. Our agents start simple and then increase in complexity, demonstrating different architectures such as multiple tools, supervisor patterns, graphs, and human-in-the-loop workflows.

Slides for this session
Code repository with examples: python-ai-agent-frameworks-demos

Python + AI: Model Context Protocol

📺 Watch recording

In the final session, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we build our own MCP client to consume the server. Finally, we discover how easy it is to connect AI agent frameworks like LangGraph and Microsoft agent-framework to MCP servers. With great power comes great responsibility, so we briefly discuss the security risks that come with MCP, both as a user and as a developer.

Slides for this session
Code repository with examples: python-mcp-demo
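For a flavor of what the MCP session builds, here's a minimal local server sketch. It assumes the fastmcp package is installed, and the add tool is a made-up example rather than anything from the session's repo:

from fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""  # exposed to MCP clients as a callable tool
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport for local clients like GitHub Copilot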
An official, free GenAI + Python course! 🚀

Want to learn how to use generative AI models in your Python applications? We're organizing a series of nine livestreams, in English and Spanish, fully dedicated to generative AI. We'll cover language models (LLMs), embedding models, and vision models, as well as techniques like RAG, function calling, and structured outputs. We'll also show you how to build agents and MCP servers, and we'll talk about AI safety and evaluations to make sure your models and applications produce safe results.

🔗 Register for the whole series. In addition to the livestreams, you can join our weekly office hours on the AI Foundry Discord to ask any questions that don't get answered during the chat. See you in the streams! 👋🏻

Language Models
📅 October 7, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor

Join the first session of our Python + AI series! In this session, we'll talk about Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We'll use Python to interact with LLMs using packages like the OpenAI SDK and LangChain. We'll experiment with prompt engineering and few-shot examples to improve results. We'll also build a full-stack application powered by LLMs and explain the importance of concurrency and streaming in user-facing AI apps.

👉 If you want to follow the examples live, make sure you have a GitHub account.

Vector Embeddings
📅 October 8, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor

In the second session of Python + AI, we'll explore vector embeddings, a way to encode text or images as arrays of floating-point numbers. These models enable similarity search across different types of content. We'll use models like OpenAI's text-embedding-3 series, visualize results in Python, and compare distance metrics. We'll also look at how to apply quantization and how to use multimodal embedding models.

👉 If you want to follow the examples live, make sure you have a GitHub account.

Retrieval-Augmented Generation (RAG)
📅 October 9, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor

In the third session, we'll explore RAG, a technique that sends context to the LLM so it can give more accurate answers within a specific domain. We'll use different data sources (CSVs, web pages, documents, databases) and build a full-stack RAG app with Azure AI Search.

Vision Models
📅 October 14, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor

The fourth session is about vision models like GPT-4o and GPT-4o mini! These models can process both text and images, generating descriptions, extracting data, answering questions, or classifying content. We'll use Python to send images to the models, create a chat-with-images app, and integrate them into RAG flows.

👉 If you want to follow the examples live, make sure you have a GitHub account.

Structured Outputs
📅 October 15, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor

In the fifth session, we'll learn how to get LLMs to generate structured responses that follow a schema.
We'll explore OpenAI's structured outputs mode and how to apply it for entity extraction, classification, and agent workflows.

👉 If you want to follow the examples live, make sure you have a GitHub account.

Quality and Safety
📅 October 16, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor

In the sixth session, we'll talk about how to use AI safely and how to evaluate the quality of its outputs. We'll show how to configure Azure AI Content Safety, handle errors in Python code, and evaluate results with the Azure AI Evaluation SDK.

Tool Calling
📅 October 21, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor

In the final week of the series, we focus on tool calling (function calling), the foundation for building AI agents. We'll learn how to define tools in Python or JSON, handle responses from the models, and enable parallel tool calls and multiple iterations.

👉 If you want to follow the examples live, make sure you have a GitHub account.

AI Agents
📅 October 22, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor

In the penultimate session, we'll build AI agents! We'll use frameworks like LangGraph, Semantic Kernel, AutoGen, and Pydantic AI. We'll start with simple examples and work up to more complex architectures like round-robin, supervisor, graphs, and ReAct.

Model Context Protocol (MCP)
📅 October 23, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor

We close the series with the Model Context Protocol (MCP), the hottest open technology of 2025. You'll learn how to use FastMCP to create a local MCP server and connect it to chatbots like GitHub Copilot. We'll also look at how to integrate MCP with agent frameworks like LangGraph, Semantic Kernel, and Pydantic AI. And, of course, we'll talk about the security risks and best practices for developers.
Level up your Python Gen AI Skills from our free nine-part YouTube series!

Want to learn how to use generative AI models in your Python applications? We're putting on a series of nine live streams, in both English and Spanish, all about generative AI. We'll cover large language models, embedding models, and vision models, introduce techniques like RAG, function calling, and structured outputs, and show you how to build agents and MCP servers. Plus we'll talk about AI safety and evaluations, to make sure all your models and applications are producing safe outputs.

🔗 Register for the entire series.

In addition to the live streams, you can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat. You can also scroll down to learn about each live stream and register for individual sessions. See you in the streams! 👋🏻

Large Language Models
7 October, 2025 | 5:00 PM - 6:00 PM (UTC)
Register for the stream on Reactor

Join us for the first session in our Python + AI series! In this session, we'll talk about Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We'll use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We'll experiment with prompt engineering and few-shot examples to improve our outputs. We'll also show how to build a full stack app powered by LLMs, and explain the importance of concurrency and streaming for user-facing AI apps.

Vector embeddings
8 October, 2025 | 5:00 PM - 6:00 PM (UTC)
Register for the stream on Reactor

In our second session of the Python + AI series, we'll dive into a different kind of model: the vector embedding model. A vector embedding is a way to encode a text or image as an array of floating point numbers. Vector embeddings make it possible to perform similarity search on many kinds of content. In this session, we'll explore different vector embedding models, like the OpenAI text-embedding-3 series, with both visualizations and Python code. We'll compare distance metrics, use quantization to reduce vector size, and try out multimodal embedding models.

Retrieval Augmented Generation
9 October, 2025 | 5:00 PM - 6:00 PM (UTC)
Register for the stream on Reactor

In our third Python + AI session, we'll explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that sends context to the LLM so that it can provide well-grounded answers for a particular domain. The RAG approach can be used with many kinds of data sources like CSVs, webpages, documents, and databases. In this session, we'll walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.

Vision models
14 October, 2025 | 5:00 PM - 6:00 PM (UTC)
Register for the stream on Reactor

Our fourth stream in the Python + AI series is all about vision models! Vision models are LLMs that can accept both text and images, like GPT-4o and GPT-4o mini. You can use those models for image captioning, data extraction, question-answering, classification, and more! We'll use Python to send images to vision models, build a basic chat-on-images app, and build a multimodal search engine.

Structured outputs
15 October, 2025 | 5:00 PM - 6:00 PM (UTC)
Register for the stream on Reactor

In our fifth stream of the Python + AI series, we'll discover how to get LLMs to output structured responses that adhere to a schema. In Python, all we need to do is define a @dataclass or a Pydantic BaseModel, and we get validated output that meets our needs perfectly. We'll focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples will demonstrate the many ways you can use structured responses, like entity extraction, classification, and agentic workflows.
Quality and safety
16 October, 2025 | 5:00 PM - 6:00 PM (UTC)
Register for the stream on Reactor

Now that we're more than halfway through our Python + AI series, we're covering a crucial topic: how to use AI safely, and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. Our focus will be on Azure tools that make it easier to put safe AI systems into production. We'll show how to configure the Azure AI Content Safety system when working with Azure AI models, and how to handle those errors in Python code. Then we'll use the Azure AI Evaluation SDK to evaluate the safety and quality of the output from our LLM.

Tool calling
21 October, 2025 | 5:00 PM - 6:00 PM (UTC)
Register for the stream on Reactor

In this session, we'll focus on tool calling (also known as function calling), the foundation for building AI agents. We'll define tool call specifications using both JSON schema and Python function definitions, send those definitions to the LLM, handle the tool call responses, and enable parallel tool calls and multiple iterations. A minimal sketch of a tool call round-trip follows at the end of this post.

AI agents
22 October, 2025 | 5:00 PM - 6:00 PM (UTC)
Register for the stream on Reactor

For the penultimate session of our Python + AI series, we're building AI agents! We'll use many of the most popular Python AI agent frameworks: LangGraph, Semantic Kernel, AutoGen, Pydantic AI, and more. Our agents will start simple and then ramp up in complexity, demonstrating different architectures like hand-offs, round-robin, supervisor, graphs, and ReAct.

Model Context Protocol
23 October, 2025 | 5:00 PM - 6:00 PM (UTC)
Register for the stream on Reactor

In the final session of our Python + AI series, we're diving into the hottest technology of 2025: MCP, Model Context Protocol. This open protocol makes it easy to extend AI agents and chatbots with custom functionality, to make them more powerful and flexible. We'll show how to use the official Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we'll build our own MCP client to consume the server. Finally, we'll discover how easy it is to point popular AI agent frameworks like LangGraph, Pydantic AI, and Semantic Kernel at MCP servers. With great power comes great responsibility, so we will briefly discuss the many security risks that come with MCP, both as a user and as a developer.
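As referenced in the tool calling session above, here's a minimal, hedged sketch of one tool call round-trip with the OpenAI Python SDK; the get_weather function and model name are made-up placeholders, not artifacts from the series:

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What's the weather in Nairobi?"}],
    tools=tools,
)

# If the model decided to call the tool, inspect each requested call
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(call.function.name, args)  # here you'd run get_weather(**args) and reply with a "tool" message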
Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour

Unlock the power of Microsoft AI with redeliverable technical presentations, hands-on workshops, and open-source curriculum from the Microsoft AI Tour! Whether you're a Microsoft MVP, developer, or IT professional, these expertly crafted resources empower you to teach, train, and lead AI adoption in your community.

Explore top breakout sessions covering GitHub Copilot, Azure AI, generative AI, and security best practices, designed to simplify AI integration and accelerate digital transformation. Dive into interactive workshops that provide real-world applications of AI technologies. Take it a step further with Microsoft's open-source AI curriculum, offering beginner-friendly courses on AI, machine learning, data science, cybersecurity, and GitHub Copilot, perfect for upskilling teams and fostering innovation.

Don't just learn: lead. Access these resources, host impactful training sessions, and drive AI adoption in your organization. Start sharing today!

Explore now: Microsoft AI Tour Resources.
Level Up Your Python Game with Generative AI Free Livestream Series This October!

If you've been itching to go beyond basic Python scripts and dive into the world of AI-powered applications, this is your moment. Pamela Fox and Gwyneth Peña-Siguenza are thrilled to announce a brand-new free livestream series running throughout October, focused on Python + generative AI, and this time we're going even deeper with agents and the Model Context Protocol (MCP). Whether you're just starting out with LLMs or you're refining your multi-agent workflows, this series is designed to meet you where you are and push your skills to the next level.

🧠 What You'll Learn

Each session is packed with live coding, hands-on demos, and real-world examples you can run in GitHub Codespaces, spanning LLMs, vector embeddings, RAG, vision models, structured outputs, quality and safety, tool calling, agents, and MCP.

🎥 Why Join?

- Live coding: no slides-only sessions; we build together, step by step.
- All code shared: clone and run in GitHub Codespaces or your local setup.
- Community support: join weekly office hours and our AI Discord for Q&A and deeper dives.
- Modular learning: each session stands alone, so you can jump in anytime.

🔗 Register for the full series

🌍 ¿Hablas español? We've got you covered! Gwyneth Peña-Siguenza will be leading a parallel series in Spanish, covering the same topics with localized examples and demos.

🔗 Regístrese para la serie en español

Whether you're building your first AI app or architecting multi-agent systems, this series is your launchpad. Come for the code, stay for the community, and leave with a toolkit that scales. Let's build something brilliant together. 💡

Join the discussions and share your experience at the Azure AI Discord Community
We’re excited to announce that NLWeb (Natural Language Web), Microsoft’s open project for natural language interfaces on websites now supports PostgreSQL. With this enhancement, developers can leverage PostgreSQL and NLWeb to transform any website into an AI-powered application or Model Context Protocol (MCP) server. This integration allows organizations to utilize a familiar, robust database as the foundation for conversational AI experiences, streamlining deployment and maximizing data security and scalability. Soon, autonomous agents, not just human users, will consume and interpret website content, transforming how information is accessed and utilized online. During Microsoft //Build 2025, Microsoft introduced the era of the open agentic web, in which the internet is an open agentic web a new paradigm in which autonomous agents seamlessly interact across individual, organizational, team and end-to-end business contexts. To realize the future of an open agentic web, Microsoft announced the NLWeb project. NLWeb transforms any website to an AI-powered application with just a few lines of code and by connecting to an AI model and a knowledge base. In this post, we’ll cover: What NLWeb is and how it works with vector databases How pgvector enables vector similarity search in PostgreSQL for NLWeb Get started using NLWeb with Postgres Let’s dive in and see how Postgres + NLWeb can redefine conversational web interfaces while keeping your data in a familiar, powerful database. What is NLWeb? A Quick Overview of Conversational Web Interfaces NLWeb is an open project developed by Microsoft to simplify adding conversational AI interfaces to websites. How NLWeb works under the hood: Processes existing data/website content that exists in semi-structured formats like Schema.org, RSS, and other data that websites already publish Embeds and indexes all the content in a vector store (i.e PostgreSQL with pgvector) Routes user queries through several processes which handle natural langague understanding, reranking and retrieval. Answers queries with an LLM The result is a high-quality natural language interface on top of web data, giving developers the ability to let users “talk to” web data. By default, every NLWeb instance is also a Model Context Protocol (MCP) server, allowing websites to make their content discoverable and accessible to agents and other participants in the MCP ecosystem if they choose. Importantly, NLWeb is platform-agnostic and supports many major operating systems, AI models, and vector stores and the NLWeb project is modular by design, so developers can bring their own retrieval system, model APIs, and define their own extensions. NLWeb with PostgreSQL PostgreSQL is now embedded into the NLWeb reference stack as a native retriever, creating a scalable and flexible path for deploying NLWeb instances using open-source infrastructure. Retrieval Powered by pgvector NLWeb leverages pgvector, a PostgreSQL extension for efficient vector similarity search, to handle natural language retrieval at scale. By integrating pgvector into the NLWeb stack, teams can eliminate the need for external vector databases. Web data stored in PostgreSQL becomes immediately searchable and usable for NLWeb experiences, streamlining infrastructure and enhancing security. PostgreSQL's robust governance features and wide adoption align with NLWeb’s mission to enable conversational AI for any website or content platform. 
Implementation example

We are going to use NLWeb and Postgres to create a conversational AI app and MCP server that will let us chat with content from the Talking Postgres with Claire Giordano podcast!

Prerequisites

- An active Azure account.
- The vector extension enabled and configured on your Postgres server.
- An Azure AI Foundry project, with the gpt-4.1, gpt-4.1-mini, and text-embedding-3-small models deployed.
- Visual Studio Code with the Python extension installed.
- Python 3.11.x.
- The Azure CLI (latest version).

Getting started

All the code and sample datasets are available in this GitHub repository.

Step 1: Set up the NLWeb server

1. Clone or download the code from the repo:

git clone https://github.com/microsoft/NLWeb
cd NLWeb

2. Open a terminal to create a virtual Python environment and activate it:

python -m venv myenv
source myenv/bin/activate  # Or on Windows: myenv\Scripts\activate

3. Go to the 'code/python' folder in NLWeb to install the dependencies:

cd code/python
pip install -r requirements.txt

4. Go to the project root folder in NLWeb and copy the .env.template file to a new .env file:

cd ../../
cp .env.template .env

5. In the .env file, update the API key you will use for your LLM endpoint of choice, and update the Postgres connection string. For example:

AZURE_OPENAI_ENDPOINT="https://TODO.openai.azure.com/"
AZURE_OPENAI_API_KEY="<TODO>"

# If using Postgres connection string
POSTGRES_CONNECTION_STRING="postgresql://<HOST>:<PORT>/<DATABASE>?user=<USERNAME>&sslmode=require"
POSTGRES_PASSWORD="<PASSWORD>"

6. Update your config files (located in the config folder) to make sure your preferred providers match your .env file. Three files may need changes:

- config_llm.yaml: Update the first line to the LLM provider you set in the .env file (by default it is Azure OpenAI). You can also adjust the models you call here; by default, we assume gpt-4.1 and gpt-4.1-mini.
- config_embedding.yaml: Update the first line to your preferred embedding provider (by default it is Azure OpenAI, using text-embedding-3-small).
- config_retrieval.yaml: Update the first line to postgres, set write_endpoint to postgres, and set the postgres retrieval endpoint to enabled: 'true' in the list of possible endpoints.

Step 2: Initialize the Postgres server

Go to the 'code/python/misc' folder in NLWeb to run the Postgres initializer. NOTE: If you are using Azure Database for PostgreSQL flexible server, make sure you have the vector extension allow-listed and enabled on the database.

cd code/python/misc
python postgres_load.py

Step 3: Ingest data from the Talking Postgres podcast

Now we will load some data into our vector database to test with. Go to the 'code/python' folder in NLWeb and run the command below (make sure you are still in the 'python' folder when you run this). The format of the command is:

python -m data_loading.db_load <RSS URL> <site-name>

For the Talking Postgres with Claire Giordano podcast:

python -m data_loading.db_load https://feeds.transistor.fm/talkingpostgres Talking-Postgres

(Optional) You can check the documents table in your Postgres database to verify that all the data from the feed was uploaded.
Test the NLWeb server

Start your NLWeb server (again from the 'python' folder):

python app-file.py

Go to http://localhost:8000/ and start asking questions about the Talking Postgres with Claire Giordano podcast. You may try different modes.

Trying List mode, sample prompt: "I want to listen to something that talks about the advances in vector search such as DiskANN"

Trying Generate mode, sample prompt: "What did Shireesh Thota say about the future of Postgres?"

Running NLWeb with MCP

1. If you do not already have it, install MCP in your venv:

pip install mcp

2. Next, configure your Claude MCP server. If you don't have the config file already, you can create it at the following locations:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

The default MCP JSON file needs to be modified as shown below.

macOS example configuration:

{
  "mcpServers": {
    "ask_nlw": {
      "command": "/Users/yourname/NLWeb/myenv/bin/python",
      "args": [
        "/Users/yourname/NLWeb/code/chatbot_interface.py",
        "--server",
        "http://localhost:8000",
        "--endpoint",
        "/mcp"
      ],
      "cwd": "/Users/yourname/NLWeb/code"
    }
  }
}

Windows example configuration:

{
  "mcpServers": {
    "ask_nlw": {
      "command": "C:\\Users\\yourusername\\NLWeb\\myenv\\Scripts\\python",
      "args": [
        "C:\\Users\\yourusername\\NLWeb\\code\\chatbot_interface.py",
        "--server",
        "http://localhost:8000",
        "--endpoint",
        "/mcp"
      ],
      "cwd": "C:\\Users\\yourusername\\NLWeb\\code"
    }
  }
}

Note: For Windows paths, you need to use double backslashes (\\) to escape the backslash character in JSON.

3. Go to the 'code/python' folder in NLWeb, enter your virtual environment, and start your NLWeb local server. Make sure it is configured to access the data you would like to ask about from Claude.

# On macOS
source ../myenv/bin/activate
python app-file.py

# On Windows
..\myenv\Scripts\activate
python app-file.py

4. Open Claude Desktop. It should ask you to trust the 'ask_nlw' external connection if it is configured correctly. After clicking yes and the welcome page appears, you should see 'ask_nlw' in the bottom right '+' options. Select it to start a query.

5. To query NLWeb, just type 'ask_nlw' in your prompt to Claude. You'll notice that you also get the full JSON script for your results. Remember, you must have your local NLWeb server started to use this option.

Learn More

- Vector Store in Azure Postgres Flexible Server
- Generative AI in Azure Postgres Flexible Server
- NLWeb GitHub repo, which includes a reference server for handling natural language queries and pgvector integration
JS AI Build-a-thon: Project Showcase

In the JS AI Build-a-thon, quest 9 was all about shipping faster, smarter, and more confidently. This quest challenged developers to skip the boilerplate and focus on what really matters: solving real-world problems with production-ready AI templates and cloud-native tools. And wow, did our builders deliver, with a massive 28 projects. From personalized chatbots to full-stack RAG applications, participants used the power of the Azure Developer CLI (azd) and robust templates to turn ideas into fully deployed AI solutions on Azure, in record time.

What This Quest Was About

Quest 9 focused on empowering developers to:

- Build AI apps faster with production-ready templates
- Use the Azure Developer CLI (azd) to deploy with just a few commands
- Leverage Infrastructure-as-Code (IaC) to provision everything from databases to APIs
- Follow best practices out of the box: scalable, secure, and maintainable

Participants explored a curated gallery of templates and learned how to adapt them to their unique use cases. No Azure experience? No problem. azd made setup, deployment, and teardown as simple as azd up.

🥁 And the winner is...

With a massive number of votes from the community, the winning project was AI Career Navigator by Aryanjstar.

AI Career Navigator - Your Personal AI Career Coach by Aryanjstar
[Project Submission] AI Career Navigator - Your Personal AI Career Coach · Issue #47 · Azure-Samples/JS-AI-Build-a-thon

Aryan, a Troop Leader of the Geo Cyber Study Jam, created the AI Career Navigator to address common challenges faced by developers in the tech job market, such as unclear skill paths, resume uncertainty, and interview anxiety. Built using the Azure Search OpenAI Demo template, the tool offers features like resume-job description matching, skill gap analysis, and dynamic interview preparation. The project leverages a robust RAG setup and Azure integration to deliver a scalable, AI-powered solution for career planning.

🥉 Other featured projects:

Deepmine-Sentinel by Josephat-Onkoba
[Project Submission] Deepmine-Sentinel · Issue #29 · Azure-Samples/JS-AI-Build-a-thon

DeepMine Sentinel AI is an intelligent safety assistant designed to tackle the urgent risks facing workers in the mining industry, one of the most hazardous sectors globally. Built by customizing the "Get Started with Chat" Azure template, this solution offers real-time safety guidance and monitoring to prevent life-threatening incidents like cave-ins, toxic gas exposure, and equipment accidents. In regions where access to safety expertise is limited, DeepMine Sentinel bridges the gap by delivering instant, AI-powered support underground, ensuring workers can access critical information and protocols when they need it most. With a focus on accessibility, real-world impact, and life-saving potential, this project demonstrates how AI can be a powerful force for good in high-risk environments.

PetPal - Your AI Pet Care Assistant by kelcho-spense
[Project Submission] PetPal - Your AI Pet Care Assistant · Issue #70 · Azure-Samples/JS-AI-Build-a-thon

PetPal is an AI-powered pet care assistant designed to support pet owners, especially first-timers, by offering instant, reliable answers to common pet-related concerns. From health and nutrition advice to emergency support and behavioral training, PetPal uses a serverless architecture powered by LangChain.js, Azure OpenAI, and Retrieval-Augmented Generation (RAG) to deliver accurate, context-aware responses.
The app features a warm, pet-themed interface built with Lit and TypeScript, and includes thoughtful customizations like pet profile management, personalized chat history, and species-specific guidance. With backend services hosted on Azure Functions and data stored in Azure Cosmos DB, PetPal is production-ready, scalable, and focused on reducing anxiety while promoting responsible and informed pet ownership.

MLSA LearnBot by Shunlexxi
[Project Submission] MLSA LearnBot · Issue #67 · Azure-Samples/JS-AI-Build-a-thon

Navigating the Microsoft Learn Student Ambassadors program can be overwhelming. To solve this, a student built an Intelligent Chatbot Q&A App using a Microsoft azd template, transforming a generic AI chatbot into a tailored assistant for Student Ambassadors and aspiring members. By integrating essential documentation, FAQs, and natural language support with Azure services like App Service, AI Search, and OpenAI, this tool empowers students to get instant, reliable answers and navigate their roles with ease. The front end was customized to reflect a student-friendly brand, and deployment was simplified using azd for a seamless, production-ready experience.

You okay? Meet Vish AI, your mental health companion by ToshikSoni
[Project Submission] You okay? Meet Vish AI, your mental health companion · Issue #38 · Azure-Samples/JS-AI-Build-a-thon

Vish.AI is an empathetic GenAI companion designed to support emotional well-being, especially for individuals facing depression, loneliness, and mental burnout. Built using Azure's AI Chat RAG template and enhanced with LangChain for conversational memory, the assistant offers a deeply personalized experience that remembers past interactions and responds with both emotional intelligence and informed support. By integrating a curated collection of mental health resources into its RAG system, Vish.AI provides meaningful guidance and a comforting presence, available anytime, anywhere. Created to bridge the gap for those who may not feel comfortable opening up to friends or family, this project combines AI with a human touch to offer always-accessible care, demonstrating how thoughtful technology can help make life a little lighter for those quietly struggling.

Want to Catch Up?

If you missed the Build-a-thon or want to explore the other quests (there are 9!), check them out here:
👉 GitHub - Azure-Samples/JS-AI-Build-a-thon

To see how the challenge went and how you can get started, check out:
👉 JS AI Build-a-thon: Wrapping Up an Epic June 2025! | Microsoft Community Hub

Join the Community

The conversation isn't over. The quests are now self-paced, and we're keeping the momentum going over on Discord in the #js-ai-build-a-thon channel. Drop your questions, showcase your builds, or just come hang out with other builders.
👉 Join the Azure AI Foundry Discord Server!

Additional Resources

🔗 Microsoft for JavaScript developers
📚 Generative AI for Beginners with JavaScript
Build AI-Ready Apps and Agents with PostgreSQL on Azure

As developers, we're constantly looking for ways to build smarter, faster, and more scalable applications. The Microsoft Reactor series, Build AI apps with Azure Database for PostgreSQL, is a four-part livestream experience designed to help you do just that, by combining the power of PostgreSQL with Azure's AI capabilities.

Dive into the world of AI apps and agents with Azure Database for PostgreSQL in this engaging video series, your ideal starting point for building intelligent solutions and improving your workflow. Get ready to explore the fundamentals of AI and discover how vector support in databases can elevate your applications. Uncover how innovative tools like the Visual Studio Code extension for PostgreSQL and GitHub Copilot can make your database work faster and more efficient. You'll also see how to create intelligent apps and AI agents using frameworks such as LangChain and Semantic Kernel.

Why This Series Matters

PostgreSQL is already a favorite among developers for its flexibility and open-source strength. But when paired with Azure's AI services, it becomes a launchpad for intelligent applications. This series walks you through how to:

- Orchestrate AI agents using PostgreSQL as a foundation.
- Enhance semantic search with vector support and indexes like DiskANN.
- Integrate Azure AI services to enrich your data and user experiences.
- Boost productivity with tools like the Visual Studio Code PostgreSQL extension and GitHub Copilot.

What You'll Learn

Each session is packed with practical insights:

Episode 1: Laying the foundation: AI-powered apps and agents with Azure Database for PostgreSQL

We introduce key AI concepts, setting the stage for a deeper understanding of Large Language Models (LLMs) and their applications. We explore the capabilities of Azure Database for PostgreSQL, focusing on how its vector support enables advanced semantic search through technologies like DiskANN indexes. We'll also discuss the Azure AI extension, which brings powerful AI features to your data projects, helping you enrich your applications with enhanced search relevance and intelligent insights, and providing a solid foundation for leveraging these tools in your own solutions.

Register here

Episode 2: Accelerate your data and AI tasks with the VS Code extension for PostgreSQL and GitHub Copilot

This talk delves into how the Visual Studio Code extension for PostgreSQL can streamline your database management, while GitHub Copilot's AI-powered assistance can boost your productivity. Learn how to seamlessly integrate these tools to enhance your workflow, automate repetitive tasks, and write efficient code faster. Whether you're a developer, data scientist, or database administrator, this session will provide you with practical insights and techniques to elevate your data and AI projects. Join us to learn how to effectively use these advanced tools and take your data skills to the next level.

Register here

Episode 3: Build your own AI copilot for financial apps with PostgreSQL

Join us to discover how to transform traditional financial applications into intelligent, AI-powered solutions with Azure Database for PostgreSQL.
In this hands-on session, you'll learn to integrate generative AI for high-quality responses to financial queries using PDF-based Statements of Work and invoices, perform AI-driven data validation, apply the Azure AI extension, implement vector search with DiskANN indexes, enhance results with semantic re-ranking, use the LangChain framework, and leverage GraphRAG on Azure Database for PostgreSQL. By the end, you'll have gained practical skills to build end-to-end AI-driven applications using your own data and projects.

Register here

Episode 4: Build advanced AI Agents with PostgreSQL

Using a sample dataset of legal cases, we'll show how AI technologies empower intelligent agents to provide high-quality answers to legal queries. In this session, you'll learn to build an advanced AI agent with Azure Database for PostgreSQL, integrating generative AI for enhanced data validation, retrieval-augmented generation (RAG), semantic re-ranking, Semantic Kernel, and GraphRAG via the Apache AGE graph extension. This practical demonstration offers insights into developing robust, intelligent solutions using your own data.

Register here

Join us for an inspiring, hands-on experience; don't miss out! Get the full series details and register now: https://aka.ms/postgres-ai-reactor-series