📢 Agent Loop Ignite Update - New Set of AI Features Arrive in Public Preview
Today at Ignite, we announced the General Availability of Agent Loop in Logic Apps Standard—bringing production-ready agentic automation to every customer. But GA is just the beginning. We’re also releasing a broad set of new and powerful AI-first capabilities in Public Preview that dramatically expand what developers can build: run agents in the Consumption SKU, bring your own models through APIM AI Gateway, call any tool through MCP, deploy agents directly into Teams, secure RAG with document-level permissions, onboard with Okta, and build in a completely redesigned workflow designer. With these preview features layered on top of GA, customers can build AI applications that bring together secure tool calling, user identity, governance, observability, and integration with their existing systems—whether they’re running in Standard, Consumption, or the Microsoft 365 ecosystem. Here’s a closer look at the new capabilities now available in Public Preview.

Public Preview of Agentic Workflows in Consumption SKU

Agent Loop is now available in Azure Logic Apps Consumption, bringing autonomous and conversational AI agents to everyone through a fully serverless, pay-as-you-go experience. You can now turn any workflow into an intelligent workflow using the agent loop action—without provisioning infrastructure or managing AI models. This release provides instant onboarding, simple authentication, and a frictionless entry point for building agentic automation. Customers can also tap into Logic Apps’ ecosystem of 1,400+ connectors for tool calling and system integrations. This update makes AI-powered automation accessible for rapid prototyping while still offering a clear path to scale and production-ready deployments in Logic Apps Standard, including BYOM, VNET integration, and enterprise-grade controls. Preview limitations include limited regions, no VS Code local development, and no nested agents or MCP tools yet. Read more about this in our announcement blog!

Bring Your Own Model

We’re excited to introduce Bring Your Own Model (BYOM) support in Agent Loop for Logic Apps Standard, making it possible to use any AI model in your agentic workflows—from Foundry, and even on-prem or private cloud models. The key highlight of this feature is the deep integration with the Azure API Management (APIM) AI Gateway, which now serves as the control plane for how Agent Loop connects to models. Instead of wiring agents directly to individual endpoints, AI Gateway creates a single, governed interface that manages authentication, keys, rate limits, and quotas in one place. It provides built-in monitoring, logging, and observability, giving you full visibility into every request. It also ensures a consistent API shape for model interactions, so your workflows remain stable even as backends evolve. With AI Gateway in front, you can test, upgrade, and refine your model configuration without changing your Logic Apps, making model management safer, more predictable, and easier to operate at scale. Beyond AI Gateway, Agent Loop also supports:

- Direct external model integration when you want lightweight, point-to-point access to a third-party model API.
- Local/VNET model integration for on-prem, private cloud, or custom fine-tuned models that require strict data residency and private networking.
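To make the "consistent API shape" point concrete, here is a minimal sketch of what a client call through an OpenAI-compatible AI Gateway endpoint might look like from plain Python (the gateway URL, header name, and model name are illustrative assumptions, not values from this announcement; inside Logic Apps, Agent Loop handles this wiring for you):

import os
from openai import OpenAI  # assumes the openai package is installed

# Hypothetical APIM AI Gateway front door; APIM enforces auth, quotas, and logging.
client = OpenAI(
    base_url="https://contoso-apim.azure-api.net/openai/v1",  # illustrative gateway URL
    api_key="unused",  # authentication happens at the gateway, not with a backend key
    default_headers={"Ocp-Apim-Subscription-Key": os.environ["APIM_KEY"]},
)

# The calling code stays the same even if the backend model is swapped behind the gateway.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # resolved by gateway routing rules (assumption)
    messages=[{"role": "user", "content": "Summarize today's open incidents."}],
)
print(response.choices[0].message.content)

Because Agent Loop talks to the gateway rather than to individual model endpoints, the same stability applies inside your workflows.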
Together, these capabilities let you treat the model as a pluggable component: start with the model you have today, bring in specialized or cost-optimized models as needed, and maintain enterprise-grade governance, security, and observability throughout. This makes Logic Apps one of the most flexible platforms for building model-agnostic, production-ready AI agent workflows. Ready to try this out? Go to http://aka.ms/agentloop/byom to learn more and get started.

MCP support for Agent Loop in Logic Apps Standard

Agent Loop in Azure Logic Apps Standard now supports the Model Context Protocol (MCP), enabling agents to discover and call external tools through an open, standardized interface. This brings powerful, flexible tool extensibility to both conversational and autonomous agents. Agent Loop offers three ways to bring MCP tools into your workflows:

- Bring Your Own MCP connector – Point to any external MCP server using its URL and credentials, instantly surfacing its published tools in your agent.
- Managed MCP connector – Access Azure-hosted MCP servers through the familiar managed connector experience, with shared connections and Azure-managed catalogs.
- Custom MCP connector – Build and publish your own OpenAPI-based MCP connector to expose private or tenant-scoped MCP servers. Ideal for reusing MCP servers across your organization.

Managed and Custom MCP connectors support on-behalf-of (OBO) authentication, allowing agents to call MCP tools using the end user’s identity. This provides user-context-aware, permission-sensitive tool access across your intelligent workflows. Want to learn more? Check out our announcement blog and how-to documents.

Deploy Conversational Agents to Teams/M365

Workflows with conversational agents in Logic Apps can now be deployed directly into Microsoft Teams, so your agentic workflows show up where your users already spend their day. Instead of going to a separate app or portal, employees can ask the agent questions, kick off approvals, check order or incident status, or look up internal policies right from a Teams chat or channel. The agent becomes just another teammate in the conversation—joining stand-ups, project chats, and support rooms as a first-class participant. Because the same Logic Apps agent can also be wired into other Microsoft 365 experiences that speak to bots and web endpoints, this opens the door to a consistent and personalized “organization copilot” that follows users across the M365 ecosystem: Teams for chat, meetings, and channels today, and additional surfaces over time. Azure Bot Service and your proxy handle identity, tokens, and routing, while Logic Apps takes care of reasoning, tools, and back-end systems. The result is an agent that feels native to Teams and Microsoft 365—secure, governed, and always just one @mention away. Ready to bring your agentic workflows into Teams? Here’s how to get started.

Secure Knowledge Retrieval for AI Agents in Logic Apps

We’ve added native document-level authorization to Agent Loop by integrating Azure AI Search ACLs. This ensures AI agents only retrieve information the requesting user is permitted to access—making RAG workflows secure, compliant, and permission-aware by default. Documents are indexed with user or group permissions, and Agent Loop automatically applies those permissions during search using the caller’s principal ID or group memberships. Only authorized documents reach the LLM, preventing accidental exposure of sensitive data.
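For intuition, the classic security-trimming pattern this builds on looks roughly like the following sketch, which filters Azure AI Search results by the caller's group memberships (the index name and the "group_ids" permission field are illustrative assumptions; Agent Loop applies equivalent trimming for you automatically):

from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient

# Hypothetical index where each document carries a "group_ids" collection
# listing the Entra ID groups allowed to read it.
search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="policies",  # illustrative index name
    credential=DefaultAzureCredential(),
)

# In a real app, group IDs come from the signed-in user's token claims.
user_groups = ["<group-object-id-1>", "<group-object-id-2>"]
ids = ",".join(user_groups)
group_filter = f"group_ids/any(g: search.in(g, '{ids}'))"

# Only documents whose ACLs include one of the caller's groups are returned,
# so unauthorized content never reaches the LLM.
results = search_client.search(search_text="parental leave policy", filter=group_filter)
for doc in results:
    print(doc["title"])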
This simplifies development, removes custom security code, and allows a single agent to safely serve users with different access levels—whether for HR, IT, or internal knowledge assistants. Here is our blog post to learn more about this feature.

Okta

Agent Loop now supports Okta as an identity provider for conversational agents, alongside Microsoft Entra ID. This makes it easy for organizations using Okta for workforce identity to pass authenticated user context—including user attributes, group membership, and permissions—directly into the agent at runtime. Agents can now make user-aware decisions, enforce access rules, personalize responses, and execute tools with proper user context. This update helps enterprises adopt Agent Loop without changing their existing identity architecture and enables secure, policy-aligned AI interactions across both Okta and Entra environments. Setting up Okta as the identity provider takes a few steps, all explained in detail at Logic Apps Labs.

Designer makeover!

We’ve introduced a major redesign of the Azure Logic Apps designer, now in Public Preview for Standard workflows. This release marks the beginning of a broader modernization effort to make building, testing, and operating workflows faster, cleaner, and more intuitive. The new designer focuses on reducing friction and streamlining the development loop. You now land directly in the designer when creating a workflow, with plans to remove early decisions like stateful/stateless or agentic setup. The interface has been simplified into a single unified view, bringing together the visual canvas, code view, settings, and run history so you no longer switch between blades. A major addition is Draft Mode with auto-save, which preserves your work every few seconds without impacting production. Drafts can be tested safely and only go live when you choose to publish—without restarting the app during editing. Search has also been completely rebuilt for speed and accuracy, powered by backend indexing instead of loading thousands of connectors upfront. The designer now supports sticky notes and markdown, making it easy to document workflows directly on the canvas. Monitoring is integrated into the same page, letting you switch between runs instantly and compare draft and published results. A new hierarchical timeline view improves debugging by showing every action executed in order. This release is just the start—many more improvements and a unified designer experience across Logic Apps are on the way as we continue to iterate based on your feedback. Learn more about the designer updates in our announcement blog!

What's Next

We’d love your feedback. Which capabilities should we prioritize, and what would create the biggest impact for your organization?

🎉 Announcing General Availability of Agent Loop in Azure Logic Apps
Transforming Business Automation with Intelligent, Collaborative Multi-Agentic Workflows!

Agent Loop is now Generally Available in Azure Logic Apps Standard, turning the Logic Apps platform into a complete multi-agentic automation system. Build AI agents that work alongside workflows and humans, secured with enterprise-grade identity and access controls, and deployed using your existing CI/CD pipelines. Thousands of customers have already built tens of thousands of agents—now you can take them to production with confidence. Get Started | Workshop | Demo Videos | Ignite 2025 Session

After an incredible journey since we introduced Agent Loop at Build earlier this year, we're thrilled to announce that Agent Loop is now generally available in Azure Logic Apps. This milestone represents more than just a feature release—it's the culmination of learnings from thousands of customers who have been pushing the boundaries of what's possible with agentic workflows. Agent Loop transforms Azure Logic Apps into a complete multi-agentic business process automation platform, where AI agents, automated workflows, and human expertise collaborate seamlessly to solve complex business challenges. With GA, we're delivering the enterprise-grade capabilities that organizations need to confidently deploy intelligent automation at scale.

The Journey to GA: Proven by Customers, Built for Production

Since our preview launch at Build, the response has been extraordinary. Thousands of customers—from innovative startups to Fortune 500 enterprises—have embraced Agent Loop, building thousands of active agents that have collectively processed billions of tokens every month for the past six months. The growth of agents, executions, and token usage has accelerated significantly, doubling month over month. Since the launch of Conversational Agents in September, they already account for nearly 30% of all agentic workflows. Across the platform, agentic workflows now consume billions of tokens, with overall token usage increasing nearly 3× month over month.

Cyderes: 5X Faster Security Investigation Cycles

Cyderes leveraged Agent Loop to automate the triage and handling of security alerts, leading to faster investigation cycles and significant cost savings. "We were drowning in data—processing over 10,000 alerts daily while analysts spent more time chasing noise than connecting narratives. Agent Loop changed everything. By empowering our team to design and deploy their own AI agents through low-code orchestration, we've achieved 5X faster investigation cycles and significant cost savings, all while keeping pace with increasingly sophisticated cyber threats that now leverage AI to operate 25X faster than traditional attacks." – Eric Summers, Engineering Manager - AI & SOAR

Vertex Pharmaceuticals: Hours Condensed to Minutes

Vertex Pharmaceuticals unlocked knowledge trapped across dozens of systems via a team of agents. VAIDA, built with Logic Apps and Agent Loop, orchestrates multiple AI agents and helps employees find information faster, while maintaining compliance and supporting multiple languages. "We had knowledge trapped across dozens of systems—ServiceNow, documentation, training materials—and teams were spending valuable time hunting for answers. Logic Apps Agent Loop changed that. VAIDA now orchestrates multiple AI agents to summarize, search, and analyze this knowledge, then routes approvals right in Teams and Outlook. We've condensed hours into minutes while maintaining compliance and delivering content in multiple languages."
– Pratik Shinde, Director, Digital Infrastructure & GenAI Platforms

Where Customers Are Deploying Agent Loop

Customers across industries are using Agent Loop to build AI applications that power both everyday tasks and mission-critical business processes across Healthcare, Retail, Energy, Financial Services, and beyond. These applications drive impact across a wide range of scenarios:

- Developer Productivity: Write code, generate unit tests, create workflows, map data between systems, and automate source control, deployment, and release pipelines
- IT Operations: Incident management, ticket and issue handling, policy review and enforcement, triage, resource management, cost optimization, and issue remediation
- Business Process Automation: Empower sales specialists, retail assistants, order processing/approval flows, and healthcare assistants for intake and scheduling
- Customer & Stakeholder Support: Project planning and estimation, content generation, automated communication, and streamlined customer service workflows

Proven Internally at Microsoft

Agent Loop is also powering Microsoft's and the Logic Apps team's own operations, demonstrating its versatility and real-world impact:

- IcM Automation Team: Transforming Microsoft's internal incident automation platform into an agent studio that leverages Logic Apps' Agent Loop, enabling teams across Microsoft to build agentic live site incident automations
- Logic Apps Team Use Cases:
  - Release & Deployment Agent: Streamlines deployment and release management for the Logic Apps platform
  - Incident Management Agent: An extension of our SRE Agent, leveraging Agent Loop to accelerate incident response and remediation
  - Analyst Agent: Assists teams in exploring product usage and health data, generating insights directly from analytics

What's Generally Available Today

Core Agent Loop Capabilities (GA):

- Agent Loop in Logic Apps Standard SKU - Support for both autonomous and conversational workflows: autonomous workflows run agents automatically based on triggers and conditions, while conversational workflows use A2A to enable interactive chat experiences with agents
- On-Behalf-Of Authentication - Per-user authentication for 1st-party and 3rd-party connectors
- Agent Hand-Off - Enable seamless collaboration in multi-agent workflows
- Python Code Interpreter - Execute Python code dynamically for data analysis and computation
- Nested Agent Action - Use agents as tools within other agents for sophisticated orchestration
- User ACLs Support - Fine-grained document access control for knowledge

Exciting New Agent Loop Features in Public Preview

We've also released several groundbreaking features in Public Preview:

- New Designer Experience - Redesigned interface optimized for building agentic workflows
- Agent Loop in Consumption SKU - Deploy agents in the serverless Consumption tier
- MCP Support - Integrate Model Context Protocol servers as tools, enabling agents to access standardized tool ecosystems
- AI Gateway Integration - Use Azure AI Gateway as a model source for unified governance and monitoring
- Teams/M365 Deployment - Deploy conversational agents directly in Microsoft Teams and Microsoft 365
- Okta Identity Provider - Use Okta as the identity provider for conversational agents

Here’s our Announcement Blog for these new capabilities.

Built on a Platform You Already Trust

Azure Logic Apps is already a proven iPaaS platform with thousands of customers using it for automation – ranging from startups to 100% of Fortune 500 companies.
Agent Loop doesn't create a separate "agentic workflow automation platform" you have to learn and operate. Instead, it makes Azure Logic Apps itself your agentic platform:

- Workflows orchestrate triggers, approvals, retries, and branching
- Agent Loop, powered by LLMs, handles reasoning, planning, and tool selection
- Humans stay in control through approvals, exceptions, and guided hand-offs

Agent Loop runs inside your Logic Apps Standard environment, so you get the same benefits you already know: enterprise SLAs, VNET integration, data residency controls, hybrid hosting options, and integration with your existing deployment pipelines and governance model.

Enterprise Ready - Secure, User-Aware Agents by Design

Bringing agents into the enterprise only works if security and compliance are first-class. With Agent Loop in Azure Logic Apps, security is built into every layer of the stack.

Per-User Actions with On-Behalf-Of (OBO) and Delegated Permissions

Many agent scenarios require tools to act in the context of the signed-in user. Agent Loop supports the OAuth 2.0 On-Behalf-Of (OBO) flow so that supported connector actions can run with delegated, per-user connections rather than a broad app-only identity. That means when an agent sends mail, reads SharePoint, or updates a service desk system, it does so as the user (where supported), respecting that user's licenses, permissions, and data boundaries. This is critical for scenarios like IT operations, HR requests, and finance approvals where "who did what" must be auditable.

Document-Level Security with Microsoft Entra-Based Access Control

Agents should only see the content a user is entitled to see. With Azure AI Search's Entra-based document-level security, your retrieval-augmented workflows can enforce ACLs and RBAC directly in the index so that queries are automatically trimmed to documents the user has access to.

Secured Chat Entry Point with Easy Auth and Entra ID

The built-in chat client and your custom clients can be protected using App Service Authentication (Easy Auth) and Microsoft Entra ID, so only authorized users and apps can invoke your conversational endpoints. Together, OBO, document-level security, and Easy Auth give you end-to-end identity and access control—from the chat surface, through the agent, down to your data and systems.

An Open Toolbox: Connectors, Workflows, MCP Servers, and External Agents

Agent Loop inherits the full power of the Logic Apps ecosystem and more:

- 1,400+ connectors for SaaS, on-premises, and custom APIs
- Workflows and agents as tools - compose sophisticated multi-step capabilities
- MCP server support - integrate with the Model Context Protocol for standardized tool access (Preview)
- A2A protocol support - enable agent-to-agent communication across platforms
- Multi-model flexibility - use Azure OpenAI, Azure AI Foundry hosted models, or bring your own model on any endpoint via AI Gateway

You're not locked into a single vendor or model provider. Agent Loop gives you an open, extensible framework that works with your existing investments and lets you choose the right tools for each job.

Run Agents Wherever You Run Logic Apps

Agent Loop is native to Logic Apps Standard, so your agentic workflows run consistently across cloud, on-premises, or hybrid environments. They inherit the same deployment, scaling, and networking capabilities as your workflows, bringing adaptive, AI-driven automation to wherever your systems and data live.
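To ground the OBO flow described above under "Per-User Actions": inside Logic Apps the platform manages the token exchange for you, but a minimal standalone sketch with the MSAL library shows the underlying mechanics (the client registration, secret, and scope values here are illustrative assumptions):

import msal  # assumes the msal package is installed

# Hypothetical confidential client registration representing the agent's API.
app = msal.ConfidentialClientApplication(
    client_id="<agent-app-client-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# The user's incoming access token (the "assertion") is exchanged for a
# downstream token, so calls to Microsoft Graph run as that user.
result = app.acquire_token_on_behalf_of(
    user_assertion="<access-token-from-signed-in-user>",
    scopes=["https://graph.microsoft.com/Mail.Send"],
)
if "access_token" in result:
    print("Delegated token acquired; downstream calls carry the user's identity.")
else:
    print("Token exchange failed:", result.get("error_description"))

This is why "who did what" stays auditable: every downstream call is made with a token minted for the specific signed-in user.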
Getting Started with Agent Loop

These are very exciting times, and we can't wait to see our customers go to production and realize the benefits of these capabilities for their business outcomes and success. Here are some useful links to get started on your AI journey with Logic Apps!

Logic Apps Labs - https://aka.ms/LALabs
Workshop - https://aka.ms/la-agent-in-a-day
Demos - https://aka.ms/agentloopdemos

🤖 Agent Loop Demos 🤖
We announced the public preview of Agent Loop at Build 2025. Agent Loop is a new feature in Logic Apps for building AI agents for use cases that span industry domains and patterns. Here are some resources to learn more:

Logic Apps Labs - https://aka.ms/lalabs
Agent in a day workshop - https://aka.ms/la-agent-in-a-day

In this article, we share use cases implemented in Logic Apps using Agent Loop and other features.

- This video shows a conversational agent that answers questions about health plans and their coverage. The demo features ingestion of policy documents into AI Search and then retrieving them in Agent Loop using natural language.
- This video shows an autonomous agent that generates sales reports. The demo features the Python Code Interpreter, which analyzes Excel data and aggregates it for the LLM to reason over.
- This video shows a conversational agent that helps recruiters with candidate screening and interview scheduling. The demo features OBO (On-Behalf-Of), where the agent uses tools in the security context of the user.
- This video shows a conversational agent for a utility company. The demo features multi-agent orchestration using handoff, and a native chat client that supports multi-turn conversations and streaming, secured via Entra for user authentication.
- This video shows an autonomous Loan Approval Agent that handles auto loans for a bank. The demo features an AI agent that uses an Azure OpenAI model, the company's policies, and several tools to process loan applications. For edge cases, a human is involved via the Teams connector.
- This video shows an autonomous Product Return Agent for the Fourth Coffee company. Returns are processed by the agent based on company policy and other criteria. Here too, a human is involved when decisions fall outside the agent's boundaries.
- This video shows a commercial agent that grants credit for purchases of groceries and other products for Northwind Stores. The agent extracts financial information from an IBM Mainframe and an IBM i system to assess each requestor and updates the internal Northwind systems with the approved customer information.
- Multi-agent scenario including both a codeful and a declarative method of implementation. Note: This is pre-release functionality and is subject to change. If you are interested in further discussing Logic Apps codeful agents, please fill out the following feedback form.
- Operations Agent (part 1): In this conversational agent, we perform Logic Apps operations such as repair and resubmit to keep our integration platform healthy and processing transactions. To ensure compliance, all operational activities are logged in ServiceNow.
- Operations Agent (part 2): In this autonomous agent, we perform the same Logic Apps operations such as repair and resubmit, again logging all operational activities in ServiceNow for compliance.

Staying in the flow: SleekFlow and Azure turn customer conversations into conversions
A customer adds three items to their cart but never checks out. Another asks about shipping, gets stuck waiting eight minutes, and drops the call. A lead responds to an offer but never gets a timely follow-up. Each of these moments represents lost revenue, and they happen to businesses every day. SleekFlow was founded in 2019 to help companies turn those almost-lost-customer moments into connection, retention, and growth. Today we serve more than 2,000 mid-market and enterprise organizations across industries including retail and e-commerce, financial services, healthcare, travel and hospitality, telecommunications, real estate, and professional services. In total, those customers rely on SleekFlow to orchestrate more than 600,000 daily customer interactions across WhatsApp, Instagram, web chat, email, and more. Our name reflects what makes us different. Sleek is about unified, polished experiences—consolidating conversations into one intelligent, enterprise-ready platform. Flow is about orchestration—AI and human agents working together to move each conversation forward, from first inquiry to purchase to renewal.

The drive for enterprise-ready agentic AI

Enterprises today expect always-on, intelligent conversations—but delivering that at scale proved daunting. When we set out to build AgentFlow, our agentic AI platform, we quickly ran into familiar roadblocks: downtime that disrupted peak-hour interactions, vector search delays that hurt accuracy, and costs that ballooned under multi-tenant workloads. Development slowed from limited compatibility with other technologies, while customer onboarding stalled without clear compliance assurances. To move past these barriers, we needed a foundation that could deliver the performance, trust, and global scale enterprises demand.

The platform behind the flow: How Azure powers AgentFlow

We chose Azure because building AgentFlow required more than raw compute power. Chatbots built on a single-agent model often stall out. They struggle to retrieve the right context, they miss critical handoffs, and they return answers too slowly to keep a customer engaged. To fix that, we needed an ecosystem capable of supporting a team of specialized AI agents working together at enterprise scale. Azure Cosmos DB provides the backbone for memory and context, managing short-term interactions, long-term histories, and vector embeddings in containers that respond in 15–20 milliseconds. Powered by Azure AI Foundry, our agents use Azure OpenAI models to understand and generate responses natively in multiple languages. Whether in English, Chinese, or Portuguese, the responses feel natural and aligned with the brand. Semantic Kernel acts as the conductor, orchestrating multiple agents, each of which retrieves the necessary knowledge and context, including chat histories, transactional data, and vector embeddings, directly from Azure Cosmos DB. For example, one agent could be retrieving pricing data, another summarizing it, and a third preparing it for a human handoff. The result is not just responsiveness but accuracy. A telecom provider can resolve a billing question while surfacing an upsell opportunity in the same dialogue. A financial advisor can walk into a call with a complete dossier prepared in seconds rather than hours. A retailer can save a purchase by offering an in-stock substitute before the shopper abandons the cart. Each of these conversations is different, yet the foundation on AgentFlow is consistent.
Fast, fluent, and focused: Azure keeps conversations moving

Speed is the heartbeat of a good conversation. A delayed answer feels like a dropped call, and an irrelevant one breaks trust. For AgentFlow to keep customers engaged, every operation behind the scenes has to happen in milliseconds. A single interaction can involve dozens of steps. One agent pulls product information from embeddings, another checks it against structured policy data, and a third generates a concise, brand-aligned response. If any of these steps lag, the dialogue falters. On Azure, they don’t. Azure Cosmos DB manages conversational memory and agent state across dedicated containers for short-term exchanges, long-term history, and vector search. Sharded DiskANN indexing powers semantic lookups that resolve in the 15–20 millisecond range—fast enough that the customer never feels a pause. Microsoft's Phi-4 model, as well as Azure OpenAI in Foundry Models like o3-mini and o4-mini, provides the reasoning, and Azure Container Apps scales elastically, so performance holds steady during event-driven bursts, such as campaign broadcasts that can push the platform from a few to thousands of conversations per minute, and during daily peak-hour surges. To support that level of responsiveness, we run Azure Container Apps on the pay-as-you-go Consumption plan, using KEDA-based autoscaling to expand from five idle containers to more than 160 within seconds. Meanwhile, Microsoft Orleans coordinates lightweight in-memory clustering to keep conversations sleek and flowing. The results are tangible. Retrieval-augmented generation recall improved from 50 to 70 percent. Execution speed is about 50 percent faster. For SleekFlow’s customers, that means carts are recovered before they’re abandoned, leads are qualified in real time, and support inquiries move forward instead of stalling out. With Azure handling the complexity under the hood, conversations flow naturally on the surface—and that’s what keeps customers engaged.

Secure enough for enterprises, human enough for customers

AgentFlow was built with security-by-design as a first principle, giving businesses confidence that every interaction is private, compliant, and reliable. On Azure, every AI agent operates inside guardrails enterprises can depend on. Azure Cosmos DB enforces strict per-tenant isolation through logical partitioning, encryption, and role-based access control, ensuring chat histories, knowledge bases, and embeddings remain auditable and contained. Models deployed through Azure AI Foundry, including Azure OpenAI and Microsoft Phi, process data entirely within SleekFlow’s Azure environment, and that data is never used to train public models, with activity logged for transparency. And Azure’s certifications—including ISO 27001, SOC 2, and GDPR—are backed by continuous monitoring and regional data residency options, proving compliance at a global scale. But trust is more than a checklist of certifications. AgentFlow brings human-like fluency and empathy to every interaction, powered by Azure OpenAI running with high token-per-second throughput so responses feel natural in real time. Quality control isn’t left to chance. Human override workflows are orchestrated through Azure Container Apps and Azure App Service, ensuring AI agents can carry conversations confidently until they’re ready to hand off to human agents. Enterprises gain the confidence to let AI handle revenue-critical moments, knowing Azure provides the foundation and SleekFlow provides the human-centered design.
Shaping the next era of conversational AI on Azure

The benefits of Azure show up not only in customer conversations but also in the way our own teams work. Faster processing speeds and high token-per-second throughput reduce latency, so we spend less time debugging and more time building. Stable infrastructure minimizes downtime and troubleshooting, lowering operational costs. That same reliability and scalability have transformed the way we engineer AgentFlow. AgentFlow started as part of our monolithic system. Shipping new features used to take about a month of development and another week of heavy testing to make sure everything held together. After moving AgentFlow to a microservices architecture on Azure Container Apps, we can now deploy updates almost daily with no downtime or customer impact, thanks to native support for rolling updates and blue-green deployments. This agility is what excites us most about what's ahead. With Azure as our foundation, SleekFlow is not simply keeping pace with the evolution of conversational AI—we are shaping what comes next. Every interaction we refine, every second we save, and every workflow we streamline brings us closer to our mission: keeping conversations sleek, flowing, and valuable for enterprises everywhere.

Logic Apps Aviators Newsletter - November 2025
In this issue:

- Ace Aviator of the Month
- News from our product group
- News from our community

Ace Aviator of the Month

November's Ace Aviator: Al Ghoniem

What's your role and title? What are your responsibilities?

As a Senior Integration Consultant, I design and deliver enterprise-grade integration on Microsoft Azure, primarily using Logic Apps Standard, API Management, Service Bus, Event Grid, and Azure Functions. My remit covers reference architectures, “golden” templates, governance and FinOps guardrails, CI/CD automation (Bicep and YAML), and production-ready patterns for reliability, observability, and cost efficiency. Alongside my technical work, I lead teams of consultants and engineers, helping them adopt standardised delivery models, mentoring through code reviews and architectural walkthroughs, and ensuring we deliver consistent, high-quality outcomes across projects. I also help teams apply decisioning patterns (embedded versus external rules) and integrate AI responsibly within enterprise workflows.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?

- Architecture and patterns: refining solution designs, sequence diagrams, and rules models for new and existing integrations.
- Build and automation: evolving reusable Logic Apps Standard templates, Bicep modules, and pipelines, embedding monitoring, alerts, and identity-first security.
- Problem-solving: addressing performance tuning, transient fault handling, poison/DLQ flows, and “design for reprocessing.”
- Leadership and enablement: mentoring consultants, facilitating technical discussions, and ensuring knowledge is shared across teams.
- Community and writing: publishing articles and examples to demystify real-world integration trade-offs.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?

The community continuously turns hard-won lessons into reusable practices. Sharing patterns (and anti-patterns) saves others time and incidents, while learning from peers strengthens my own work. Microsoft’s product teams also listen closely, and seeing customer feedback directly shape the platform is genuinely rewarding.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?

- Optimise for learning speed, not titles. Choose problems that stretch you and deliver in small, measurable increments.
- Master the fundamentals. Naming, idempotency, retries, and observability are not glamorous but make systems dependable.
- Document everything. Diagrams, runbooks, and ADRs multiply your impact.
- Understand trade-offs. Every decision buys something and costs something; acknowledge both sides clearly.
- Value collaboration over heroics. Ask questions, share knowledge, and give credit freely.

What has helped you grow professionally?

- Reusable scaffolding: creating golden templates and reference repositories that capture best practice once and reuse it everywhere.
- Feedback loops: leveraging telemetry, post-incident reviews, and peer critique to improve.
- Teaching and mentoring: explaining concepts to others brings clarity and strengthens leadership.
- Cross-disciplinary curiosity: combining architecture, DevOps, FinOps, and AI to address problems holistically.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
"Stateful Sessions and Decisions” as a first-class capability: Built-in session state across multiple workflows, durable correlation and resumable orchestrations without external storage. A native decisioning activity with versioned decision tables and rule auditing (“why this rule fired”). A local-first developer experience with fast testing and contract validation for confident iteration. This would simplify complex, human-in-the-loop and event-driven scenarios, reduce custom plumbing, and make advanced orchestration patterns accessible to a wider audience. News from our product group Logic Apps Community Day 2025 Did you miss or want to catch up again on your favorite Logic Apps Community Day videos – jump back into action on this four hours long learning session, with 10 sessions from our Community Experts. And stay tuned for individual sessions being shared throughout the week. Announcing Parse & Chunk with Metadata in Logic Apps: Build Context-Aware RAG Agents New Parse & Chunk actions add metadata like page numbers and sentence completeness—perfect for context-aware document Q&A using Azure AI Search and Agent Loop. Introducing the RabbitMQ Connector (Public Preview) The new connector (Public Preview) lets you send and receive messages with RabbitMQ in Logic Apps Standard and Hybrid—ideal for scalable, reliable messaging across industries. News from our community EventGrid And Entra Auth In Logic Apps Standard Post by Riccardo Viglianisi Learn how to use Entra Auth for webhook authentication, ditch SAS tokens, and configure private endpoints with public access rules—perfect for secure, scalable integrations. Debugging XSLT Made Easy in VS Code: .NET-Based Debugging for Logic Apps Post by Daniel Jonathan A new .NET-based extension brings real debugging to XSLT for Logic Apps. Set breakpoints, step through transformations, and inspect variables—making XSLT development clear and productive. This is the 3 rd post in a 5 part series, so worth checking out the other posts too. Modifying the Logic App Azure Workbook: Custom Views for Multi Workflow Monitoring Post by Jeff Wessling Learn how to tailor dashboards with KQL, multi-workflow views, and context panes—boosting visibility, troubleshooting speed, and operational efficiency across your integrations. Azure AI Agents in Logic Apps: A Guide to Automate Decisions Post by Imashi Kinigama Discover how GPT-powered agents, created using Logic Apps Agent Loop, automate decisions, extract data, and adapt in real time. Build intelligent workflows with minimal effort—no hardcoding, just instructions and tools. How to Turn Logic App Connectors into MCP Servers (Step-by-Step Guide) Post by Stephen W. Thomas Learn how to expose connectors like Google Drive or Salesforce as MCP endpoints using Azure API Center—giving AI agents secure, real-time access to 1,400+ services directly from VS Code. Custom SAP MCP Server with Logic Apps Post by Sebastian Meyer Learn how to turn Logic Apps into AI-accessible tools using MCP. From workflow descriptions to Easy Auth setup and VS Code integration—this guide unlocks SAP automation with Copilot. How Azure Logic Apps as MCP Servers Accelerate AI Agent Development Post by Monisha S Turn 1,400+ connectors into AI tools with Logic Apps Standard. Build agents fast, integrate with legacy systems, and scale intelligent workflows across your organization. 
Designing Business Rules in Azure Logic Apps: When to Go Embedded vs External
Post by Al Ghoniem
Learn when to use Logic Apps' native Rules Engine or offload to Azure Functions with NRules or JSON RulesEngine. Discover hybrid patterns for scalable, testable decision automation.

Syncing SharePoint with Azure Blob Storage using Logic Apps & Azure Functions for Azure AI Search
Post by Daniel Jonathan
Solve folder delete issues by tagging blobs with SharePoint metadata. Use Logic Apps and a custom Azure Function to clean up orphaned files and keep Azure AI Search in sync.

Step-by-Step Guide: Building a Conversational Agent in Azure Logic Apps
Post by Stephen W. Thomas
Use Azure AI Foundry and Logic Apps Standard to create chatbots that shuffle cards, answer questions, and embed into websites—no code required, just smart workflows and Easy Auth.

You can hide sensitive data from the Logic App run history
Post by Francisco Leal
Learn how to protect sensitive data in Logic Apps, such as authentication tokens, credentials, and personal information, so it doesn't appear in the run history, where it could pose security and privacy risks.

Introducing langchain-azure-storage: Azure Storage integrations for LangChain
We're excited to introduce langchain-azure-storage, the first official Azure Storage integration package built by Microsoft for LangChain 1.0. As part of its launch, we've built a new Azure Blob Storage document loader (currently in public preview) that improves upon prior LangChain community implementations. This new loader unifies both blob and container level access, simplifying loader integration. More importantly, it offers enhanced security through default OAuth 2.0 authentication, supports reliably loading millions to billions of documents through efficient memory utilization, and allows pluggable parsing, so you can leverage other document loaders to parse specific file formats.

What are LangChain document loaders?

A typical Retrieval-Augmented Generation (RAG) pipeline follows these main steps:

1. Collect source content (PDFs, DOCX, Markdown, CSVs) — often stored in Azure Blob Storage.
2. Parse into text and associated metadata (i.e., represented as LangChain Document objects).
3. Chunk + embed those documents and store in a vector store (e.g., Azure AI Search, Postgres pgvector, etc.).
4. At query time, retrieve the most relevant chunks and feed them to an LLM as grounded context.

LangChain document loaders make steps 1–2 turnkey and consistent so the rest of the stack (splitters, vector stores, retrievers) “just works”. See this LangChain RAG tutorial for a full example of these steps when building a RAG application in LangChain.

How can the Azure Blob Storage document loader help?

The langchain-azure-storage package offers the AzureBlobStorageLoader, a document loader that simplifies retrieving documents stored in Azure Blob Storage for use in a LangChain RAG application. Key benefits of the AzureBlobStorageLoader include:

- Flexible loading of Azure Storage blobs to LangChain Document objects. You can load blobs as documents from an entire container, a specific prefix within a container, or by blob names. Each document loaded corresponds 1:1 to a blob in the container.
- Lazy loading support for improved memory efficiency when dealing with large document sets. Documents can now be loaded one-at-a-time as you iterate over them instead of all at once.
- Automatically uses DefaultAzureCredential to enable seamless OAuth 2.0 authentication across various environments, from local development to Azure-hosted services. You can also explicitly pass your own credential (e.g., ManagedIdentityCredential, SAS token).
- Pluggable parsing. Easily customize how documents are parsed by providing your own LangChain document loader to parse downloaded blob content.
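To connect the loader to step 3 of the pipeline above, here's a minimal sketch of chunking loaded documents with a LangChain text splitter (it assumes the langchain-text-splitters package and placeholder account/container names):

from langchain_azure_storage.document_loaders import AzureBlobStorageLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter  # assumes langchain-text-splitters is installed

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>"
)

# Split each blob's text into overlapping chunks sized for embedding models.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
for doc in loader.lazy_load():
    for chunk in splitter.split_documents([doc]):
        # Each chunk keeps the blob URL in its metadata, ready for embedding
        # and indexing in a vector store such as Azure AI Search.
        print(chunk.metadata["source"], len(chunk.page_content))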
Using the Azure Blob Storage document loader

Installation

To install the langchain-azure-storage package, run:

pip install langchain-azure-storage

Loading documents from a container

To load all blobs from an Azure Blob Storage container as LangChain Document objects, instantiate the AzureBlobStorageLoader with the Azure Storage account URL and container name:

from langchain_azure_storage.document_loaders import AzureBlobStorageLoader

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>"
)

# lazy_load() yields one Document per blob for all blobs in the container
for doc in loader.lazy_load():
    print(doc.metadata["source"])  # The "source" metadata contains the full URL of the blob
    print(doc.page_content)  # The page_content contains the blob's content decoded as UTF-8 text

Loading documents by blob names

To only load specific blobs as LangChain Document objects, you can additionally provide a list of blob names:

from langchain_azure_storage.document_loaders import AzureBlobStorageLoader

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>",
    ["<blob-name-1>", "<blob-name-2>"]
)

# lazy_load() yields one Document per blob for only the specified blobs
for doc in loader.lazy_load():
    print(doc.metadata["source"])  # The "source" metadata contains the full URL of the blob
    print(doc.page_content)  # The page_content contains the blob's content decoded as UTF-8 text

Pluggable parsing

By default, loaded Document objects contain the blob's UTF-8 decoded content. To parse non-UTF-8 content (e.g., PDFs, DOCX, etc.) or chunk blob content into smaller documents, provide a LangChain document loader via the loader_factory parameter. When loader_factory is provided, the AzureBlobStorageLoader processes each blob with the following steps:

1. Downloads the blob to a new temporary file
2. Passes the temporary file path to the loader_factory callable to instantiate a document loader
3. Uses that loader to parse the file and yield Document objects
4. Cleans up the temporary file

For example, below shows parsing PDF documents with the PyPDFLoader from the langchain-community package:

from langchain_azure_storage.document_loaders import AzureBlobStorageLoader
from langchain_community.document_loaders import PyPDFLoader  # Requires langchain-community and pypdf packages

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>",
    prefix="pdfs/",  # Only load blobs that start with "pdfs/"
    loader_factory=PyPDFLoader  # PyPDFLoader will parse each blob as a PDF
)

# Each blob is downloaded to a temporary file and parsed by a PyPDFLoader instance
for doc in loader.lazy_load():
    print(doc.page_content)  # Content parsed by PyPDFLoader (yields one Document per page in the PDF)

This file path-based interface allows you to use any LangChain document loader that accepts a local file path as input, giving you access to a wide range of parsers for different file formats.

Migrating from community document loaders to langchain-azure-storage

If you're currently using AzureBlobStorageContainerLoader or AzureBlobStorageFileLoader from the langchain-community package, the new AzureBlobStorageLoader provides an improved alternative. This section provides step-by-step guidance for migrating to the new loader.
Steps to migrate

To migrate to the new Azure Storage document loader, make the following changes:

1. Depend on the langchain-azure-storage package.
2. Update import statements from langchain_community.document_loaders to langchain_azure_storage.document_loaders.
3. Change class names from AzureBlobStorageFileLoader and AzureBlobStorageContainerLoader to AzureBlobStorageLoader.
4. Update document loader constructor calls to:
   - Use an account URL instead of a connection string.
   - Specify UnstructuredLoader as the loader_factory to continue to use Unstructured for parsing documents.
5. Enable Microsoft Entra ID authentication in your environment (e.g., run az login or configure managed identity) instead of using connection string authentication.

Migration samples

Below are code snippets showing usage patterns before and after migrating from langchain-community to langchain-azure-storage:

Before migration

from langchain_community.document_loaders import AzureBlobStorageContainerLoader, AzureBlobStorageFileLoader

container_loader = AzureBlobStorageContainerLoader(
    "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>;EndpointSuffix=core.windows.net",
    "<container>",
)

file_loader = AzureBlobStorageFileLoader(
    "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>;EndpointSuffix=core.windows.net",
    "<container>",
    "<blob>"
)

After migration

from langchain_azure_storage.document_loaders import AzureBlobStorageLoader
from langchain_unstructured import UnstructuredLoader  # Requires langchain-unstructured and unstructured packages

container_loader = AzureBlobStorageLoader(
    "https://<account>.blob.core.windows.net",
    "<container>",
    loader_factory=UnstructuredLoader  # Only needed if continuing to use Unstructured for parsing
)

file_loader = AzureBlobStorageLoader(
    "https://<account>.blob.core.windows.net",
    "<container>",
    "<blob>",
    loader_factory=UnstructuredLoader  # Only needed if continuing to use Unstructured for parsing
)

What's next?

We're excited for you to try the new Azure Blob Storage document loader and would love to hear your feedback! Here are some ways you can help shape the future of langchain-azure-storage:

- Show support for interface stabilization - The document loader is currently in public preview and the interface may change in future versions based on feedback. If you'd like to see the current interface marked as stable, upvote the proposal PR to show your support.
- Report issues or suggest improvements - Found a bug or have an idea to make the document loaders better? File an issue on our GitHub repository.
- Propose new LangChain integrations - Interested in other ways to use Azure Storage with LangChain (e.g., checkpointing for agents, persistent memory stores, retriever implementations)? Create a feature request or write to us to let us know.

Your input is invaluable in making langchain-azure-storage better for the entire community!

Resources

- langchain-azure GitHub repository
- langchain-azure-storage PyPI package
- AzureBlobStorageLoader usage guide
- AzureBlobStorageLoader documentation reference

Level up your Python + AI skills with our complete series
We've just wrapped up our live series on Python + AI, a comprehensive nine-part journey diving deep into how to use generative AI models from Python. The series introduced multiple types of models, including LLMs, embedding models, and vision models. We dug into popular techniques like RAG, tool calling, and structured outputs. We assessed AI quality and safety using automated evaluations and red-teaming. Finally, we developed AI agents using popular Python agent frameworks and explored the new Model Context Protocol (MCP). To help you apply what you've learned, all of our code examples work with GitHub Models, a service that provides free models to every GitHub account holder for experimentation and education. Even if you missed the live series, you can still access all the material using the links below! If you're an instructor, feel free to use the slides and code examples in your own classes. If you're a Spanish speaker, check out the Spanish version of the series.

Python + AI: Large Language Models
📺 Watch recording
In this session, we explore Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We experiment with prompt engineering and few-shot examples to improve outputs. We also demonstrate how to build a full-stack app powered by LLMs and explain the importance of concurrency and streaming for user-facing AI apps.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Vector embeddings
📺 Watch recording
In our second session, we dive into a different type of model: the vector embedding model. A vector embedding is a way to encode text or images as an array of floating-point numbers. Vector embeddings enable similarity search across many types of content. In this session, we explore different vector embedding models, such as the OpenAI text-embedding-3 series, through both visualizations and Python code. We compare distance metrics, use quantization to reduce vector size, and experiment with multimodal embedding models.
Slides for this session
Code repository with examples: vector-embedding-demos

Python + AI: Retrieval Augmented Generation
📺 Watch recording
In our third session, we explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that provides context to the LLM, enabling it to deliver well-grounded answers for a particular domain. The RAG approach works with many types of data sources, including CSVs, webpages, documents, and databases. In this session, we walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Vision models
📺 Watch recording
Our fourth session is all about vision models! Vision models are LLMs that can accept both text and images, such as GPT-4o and GPT-4o mini. You can use these models for image captioning, data extraction, question answering, classification, and more! We use Python to send images to vision models, build a basic chat-with-images app, and create a multimodal search engine.
Slides for this session
Code repository with examples: openai-chat-vision-quickstart

Python + AI: Structured outputs
📺 Watch recording
In our fifth session, we discover how to get LLMs to output structured responses that adhere to a schema. In Python, all you need to do is define a Pydantic BaseModel to get validated output that perfectly meets your needs; a minimal sketch of the pattern follows.
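Here's what that pattern looks like, assuming a recent openai package and a GitHub Models endpoint as used throughout the series (the endpoint URL, model name, and schema are illustrative assumptions):

import os

from openai import OpenAI
from pydantic import BaseModel

# Illustrative schema: the model's reply must validate against this.
class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

# GitHub Models endpoint, free with a GitHub account (token read from the environment).
client = OpenAI(
    base_url="https://models.github.ai/inference",  # illustrative endpoint
    api_key=os.environ["GITHUB_TOKEN"],
)

completion = client.beta.chat.completions.parse(
    model="openai/gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Alice and Bob are meeting for lunch on Friday."}],
    response_format=CalendarEvent,
)
event = completion.choices[0].message.parsed  # a validated CalendarEvent instance
print(event.name, event.participants)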
We focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples demonstrate the many ways you can use structured responses, such as entity extraction, classification, and agentic workflows.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Quality and safety
📺 Watch recording
This session covers a crucial topic: how to use AI safely and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. We focus on Azure tools that make it easier to deploy safe AI systems into production. We demonstrate how to configure the Azure AI Content Safety system when working with Azure AI models and how to handle errors in Python code. Then we use the Azure AI Evaluation SDK to evaluate the safety and quality of output from your LLM.
Slides for this session
Code repository with examples: ai-quality-safety-demos

Python + AI: Tool calling
📺 Watch recording
In the final part of the series, we focus on the technologies needed to build AI agents, starting with the foundation: tool calling (also known as function calling). We define tool call specifications using both JSON schema and Python function definitions, then send these definitions to the LLM. We demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. Understanding tool calling is absolutely essential before diving into agents, so don't skip over this foundational session.
Slides for this session
Code repository with examples: python-openai-demos

Python + AI: Agents
📺 Watch recording
In the penultimate session, we build AI agents! We use Python AI agent frameworks such as the new agent-framework from Microsoft and the popular LangGraph framework. Our agents start simple and then increase in complexity, demonstrating different architectures such as multiple tools, supervisor patterns, graphs, and human-in-the-loop workflows.
Slides for this session
Code repository with examples: python-ai-agent-frameworks-demos

Python + AI: Model Context Protocol
📺 Watch recording
In the final session, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we build our own MCP client to consume the server. Finally, we discover how easy it is to connect AI agent frameworks like LangGraph and Microsoft agent-framework to MCP servers. With great power comes great responsibility, so we briefly discuss the security risks that come with MCP, both as a user and as a developer.
Slides for this session
Code repository with examples: python-mcp-demo
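As a taste of that final session, here's a minimal sketch of a local MCP server using the FastMCP SDK (the server name and tool are invented examples):

from fastmcp import FastMCP  # assumes the fastmcp package is installed

mcp = FastMCP("demo-server")

# Hypothetical tool: agents and chatbots connected to this server can call it.
@mcp.tool
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

if __name__ == "__main__":
    # Runs the server over stdio by default, ready for clients like GitHub Copilot.
    mcp.run()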
Official, Free GenAI + Python Course! 🚀

Want to learn how to use generative AI models in your Python applications? We're organizing a series of nine live streams, in English and Spanish, entirely dedicated to generative AI. We'll cover language models (LLMs), embedding models, and vision models, as well as techniques like RAG, function calling, and structured outputs. We'll also show you how to build agents and MCP servers, and we'll talk about AI safety and evaluations to make sure your models and applications produce safe results. 🔗 Register for the whole series. In addition to the live streams, you can join our weekly office hours on the AI Foundry Discord to ask any questions that aren't answered during the chat. See you in the streams! 👋🏻

Language Models
📅 7 October, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor
Join the first session in our Python + AI series! In this session, we'll talk about Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We'll use Python to interact with LLMs using packages like the OpenAI SDK and Langchain. We'll experiment with prompt engineering and few-shot examples to improve outputs. We'll also build a full-stack app powered by LLMs and explain the importance of concurrency and streaming in user-facing AI apps.
👉 If you want to follow the examples live, make sure you have a GitHub account.

Vector Embeddings
📅 8 October, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor
In the second Python + AI session, we'll explore vector embeddings, a way to encode text or images as arrays of decimal numbers. These models enable similarity search across different types of content. We'll use models like OpenAI's text-embedding-3 series, visualize results in Python, and compare distance metrics. We'll also look at how to apply quantization and how to use multimodal embedding models.
👉 If you want to follow the examples live, make sure you have a GitHub account.

Retrieval Augmented Generation (RAG)
📅 9 October, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor
In the third session, we'll explore RAG, a technique that sends context to the LLM to get more accurate answers within a specific domain. We'll use different data sources (CSVs, web pages, documents, databases) and build a full-stack RAG app with Azure AI Search.

Vision Models
📅 14 October, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor
The fourth session is about vision models like GPT-4o and 4o-mini! These models can process text and images, generating descriptions, extracting data, answering questions, or classifying content. We'll use Python to send images to the models, create a chat-with-images app, and integrate them into RAG flows.
👉 If you want to follow the examples live, make sure you have a GitHub account.

Structured Outputs
📅 15 October, 2025 | 10:00 PM - 11:00 PM (UTC)
🔗 Register for the stream on Reactor
In the fifth session, we'll learn how to make LLMs generate structured responses that follow a schema.
Structured Outputs 📅 15 October, 2025 | 10:00 PM - 11:00 PM (UTC) 🔗 Register for the stream on Reactor In the fifth session, we'll learn how to get LLMs to generate structured responses that follow a schema. We'll explore OpenAI's structured outputs mode and how to apply it to entity extraction, classification, and agentic flows. 👉 If you want to follow the examples live, make sure you have a GitHub account.

Quality and Safety 📅 16 October, 2025 | 10:00 PM - 11:00 PM (UTC) 🔗 Register for the stream on Reactor In the sixth session, we'll talk about how to use AI safely and how to evaluate the quality of its outputs. We'll show how to configure Azure AI Content Safety, handle errors in Python code, and evaluate results with the Azure AI Evaluation SDK.

Tool Calling 📅 21 October, 2025 | 10:00 PM - 11:00 PM (UTC) 🔗 Register for the stream on Reactor In the final week of the series, we'll focus on tool calling (function calling), the foundation for building AI agents. We'll learn how to define tools in Python or JSON, handle responses from the models, and enable parallel calls and multiple iterations. 👉 If you want to follow the examples live, make sure you have a GitHub account.

AI Agents 📅 22 October, 2025 | 10:00 PM - 11:00 PM (UTC) 🔗 Register for the stream on Reactor In the penultimate session, we'll build AI agents! We'll use frameworks like Langgraph, Semantic Kernel, Autogen, and Pydantic AI. We'll start with simple examples and work up to more complex architectures like round-robin, supervisor, graphs, and ReAct.

Model Context Protocol (MCP) 📅 23 October, 2025 | 10:00 PM - 11:00 PM (UTC) 🔗 Register for the stream on Reactor We close the series with the Model Context Protocol (MCP), the hottest open technology of 2025. You'll learn how to use FastMCP to create a local MCP server and connect it to chatbots like GitHub Copilot. We'll also see how to integrate MCP with agent frameworks like Langgraph, Semantic Kernel, and Pydantic AI. And, of course, we'll cover the security risks and best practices for developers.
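If you'd like a head start on that MCP session, here's a minimal local server sketch, assuming the fastmcp package (FastMCP 2.x); the add tool is invented for illustration and isn't part of the course materials.

```python
# Sketch: a local MCP server exposing one tool via FastMCP.
from fastmcp import FastMCP

mcp = FastMCP("demo-server")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


if __name__ == "__main__":
    # Serves over stdio by default, so MCP clients such as GitHub Copilot
    # can launch and connect to it locally.
    mcp.run()
```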
Level up your Python Gen AI Skills from our free nine-part YouTube series!

Want to learn how to use generative AI models in your Python applications? We're putting on a series of nine live streams, in both English and Spanish, all about generative AI. We'll cover large language models, embedding models, and vision models, introduce techniques like RAG, function calling, and structured outputs, and show you how to build Agents and MCP servers. Plus, we'll talk about AI safety and evaluations, to make sure all your models and applications are producing safe outputs. 🔗 Register for the entire series. In addition to the live streams, you can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat. You can also scroll down to learn about each live stream and register for individual sessions. See you in the streams! 👋🏻

Large Language Models 7 October, 2025 | 5:00 PM - 6:00 PM (UTC) Register for the stream on Reactor Join us for the first session in our Python + AI series! In this session, we'll talk about Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We'll use Python to interact with LLMs using popular packages like the OpenAI SDK and Langchain. We'll experiment with prompt engineering and few-shot examples to improve our outputs. We'll also show how to build a full stack app powered by LLMs, and explain the importance of concurrency and streaming for user-facing AI apps.

Vector embeddings 8 October, 2025 | 5:00 PM - 6:00 PM (UTC) Register for the stream on Reactor In our second session of the Python + AI series, we'll dive into a different kind of model: the vector embedding model. A vector embedding is a way to encode a text or image as an array of floating point numbers. Vector embeddings make it possible to perform similarity search on many kinds of content. In this session, we'll explore different vector embedding models, like the OpenAI text-embedding-3 series, with both visualizations and Python code. We'll compare distance metrics, use quantization to reduce vector size, and try out multimodal embedding models.

Retrieval Augmented Generation 9 October, 2025 | 5:00 PM - 6:00 PM (UTC) Register for the stream on Reactor In our third Python + AI session, we'll explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that sends context to the LLM so that it can provide well-grounded answers for a particular domain. The RAG approach can be used with many kinds of data sources, like CSVs, webpages, documents, and databases. In this session, we'll walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search. A toy sketch of this flow appears below, after the vision models session.

Vision models 14 October, 2025 | 5:00 PM - 6:00 PM (UTC) Register for the stream on Reactor Our fourth stream in the Python + AI series is all about vision models! Vision models are LLMs that can accept both text and images, like GPT-4o and 4o-mini. You can use those models for image captioning, data extraction, question-answering, classification, and more! We'll use Python to send images to vision models, build a basic chat-on-images app, and build a multimodal search engine.
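To ground the RAG session described above, here's a toy end-to-end flow, assuming the openai SDK and numpy; the two-document in-memory list stands in for a real retriever such as Azure AI Search, and the model names are illustrative.

```python
# Sketch: embed documents, retrieve the closest one, and ground the answer on it.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "Contoso's return window is 30 days with a receipt.",  # toy corpus
    "Contoso ships to the US, Canada, and Mexico.",
]


def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])


doc_vectors = embed(docs)
question = "What is the return policy?"
query_vector = embed([question])[0]

# Dot product equals cosine similarity here, since these embeddings
# come back unit-normalized.
best_doc = docs[int(np.argmax(doc_vectors @ query_vector))]

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using only this context: {best_doc}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```

A production flow swaps the in-memory search for a vector index and adds chunking, reranking, and citations, but the retrieve-then-ground shape stays the same.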
Structured outputs 15 October, 2025 | 5:00 PM - 6:00 PM (UTC) Register for the stream on Reactor In our fifth stream of the Python + AI series, we'll discover how to get LLMs to output structured responses that adhere to a schema. In Python, all we need to do is define a @dataclass or a Pydantic BaseModel, and we get validated output that meets our needs perfectly. We'll focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples will demonstrate the many ways you can use structured responses, like entity extraction, classification, and agentic workflows.

Quality and safety 16 October, 2025 | 5:00 PM - 6:00 PM (UTC) Register for the stream on Reactor Now that we're more than halfway through our Python + AI series, we're covering a crucial topic: how to use AI safely, and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. Our focus will be on Azure tools that make it easier to put safe AI systems into production. We'll show how to configure the Azure AI Content Safety system when working with Azure AI models, and how to handle those errors in Python code. Then we'll use the Azure AI Evaluation SDK to evaluate the safety and quality of the output from our LLM.

Tool calling 21 October, 2025 | 5:00 PM - 6:00 PM (UTC) Register for the stream on Reactor In this session, we'll cover the foundational technology for building AI agents: tool calling, also known as function calling. We'll define tool call specifications using both JSON schema and Python function definitions, then send those definitions to the LLM. We'll demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. A minimal sketch of this round trip appears at the end of this post.

AI agents 22 October, 2025 | 5:00 PM - 6:00 PM (UTC) Register for the stream on Reactor For the penultimate session of our Python + AI series, we're building AI agents! We'll use many of the most popular Python AI agent frameworks: Langgraph, Semantic Kernel, Autogen, Pydantic AI, and more. Our agents will start simple and then ramp up in complexity, demonstrating different architectures like hand-offs, round-robin, supervisor, graphs, and ReAct.

Model Context Protocol 23 October, 2025 | 5:00 PM - 6:00 PM (UTC) Register for the stream on Reactor In the final session of our Python + AI series, we're diving into the hottest technology of 2025: MCP, Model Context Protocol. This open protocol makes it easy to extend AI agents and chatbots with custom functionality, to make them more powerful and flexible. We'll show how to use the official Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we'll build our own MCP client to consume the server. Finally, we'll discover how easy it is to point popular AI agent frameworks like Langgraph, Pydantic AI, and Semantic Kernel at MCP servers. With great power comes great responsibility, so we'll briefly discuss the many security risks that come with MCP, both as a user and as a developer.
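To preview the tool calling session mentioned above, here's a minimal sketch of one tool-call round trip with the openai SDK; the get_weather function and its JSON schema are invented for illustration.

```python
# Sketch: one round trip of tool calling (function calling).
import json
from openai import OpenAI

client = OpenAI()


def get_weather(city: str) -> str:
    return f"It is sunny in {city}."  # stub standing in for a real API call


tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)
message = response.choices[0].message

# If the model requested tool calls, run each one and send the results back
# so it can produce a final grounded answer.
if message.tool_calls:
    messages.append(message)
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": get_weather(**args),
        })
    final = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools
    )
    print(final.choices[0].message.content)
```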
Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour

Unlock the power of Microsoft AI with redeliverable technical presentations, hands-on workshops, and open-source curriculum from the Microsoft AI Tour! Whether you're a Microsoft MVP, Developer, or IT Professional, these expertly crafted resources empower you to teach, train, and lead AI adoption in your community. Explore top breakout sessions covering GitHub Copilot, Azure AI, Generative AI, and security best practices, designed to simplify AI integration and accelerate digital transformation. Dive into interactive workshops that provide real-world applications of AI technologies. Take it a step further with Microsoft's Open-Source AI Curriculum, offering beginner-friendly courses on AI, Machine Learning, Data Science, Cybersecurity, and GitHub Copilot, perfect for upskilling teams and fostering innovation. Don't just learn, lead. Access these resources, host impactful training sessions, and drive AI adoption in your organization. Start sharing today! Explore now: Microsoft AI Tour Resources.