AI Upskilling Framework Level 3 Building
The Global AI Community is excited to bring you the latest updates on AI Upskilling Framework Level 3 Building, straight from Microsoft Ignite! This session dives deep into advanced concepts for building agentic workflows and showcases new announcements that will help developers accelerate their Agentic AI journey.

On-Device AI with Windows AI Foundry and Foundry Local
From "waiting" to "instant" - without sending data away
AI is everywhere, but speed, privacy, and reliability are critical. Users expect instant answers without compromise. On-device AI makes that possible: fast, private, and available even when the network isn't - empowering apps to deliver seamless experiences. Imagine an intelligent assistant that responds in seconds, without sending any text to the cloud. This approach brings speed and data control to the places that need them most, while still letting you tap into cloud power when it makes sense.

Windows AI Foundry: A Local Home for Models
Windows AI Foundry is a developer toolkit that makes it simple to run AI models directly on Windows devices. It uses ONNX Runtime under the hood and can leverage CPU, GPU (via DirectML), or NPU acceleration, without requiring you to manage those details. The principle is straightforward: keep the model and the data on the same device. Inference becomes faster, and data stays local by default unless you explicitly choose to use the cloud.

Foundry Local
Foundry Local is the engine that powers this experience. Think of it as a local AI runtime - fast, private, and easy to integrate into an app.

Why Adopt On-Device AI?
Faster, more responsive apps: Local inference often reduces perceived latency and improves user experience.
Privacy-first by design: Keep sensitive data on the device; avoid cloud round trips unless the user opts in.
Offline capability: An app can provide AI features even without a network connection.
Cost control: Reduce cloud compute and data costs for common, high-volume tasks.
This approach is especially useful in regulated industries, field-work tools, and any app where users expect quick, on-device responses.

Hybrid Pattern for Real Apps
On-device AI doesn't replace the cloud; it complements it. Here's how:
Standalone On-Device: Quick, private actions like document summarization, local search, and offline assistants.
Cloud-Enhanced (Optional): Large-context models, up-to-date knowledge, or heavy multimodal workloads.
Design an app to keep data local by default and surface cloud options transparently, with user consent and clear disclosures.
Windows AI Foundry supports hybrid workflows:
Use Foundry Local for real-time inference.
Sync with Azure AI services for model updates, telemetry, and advanced analytics.
Implement fallback strategies for resource-intensive scenarios.

Application Workflow Code Example using Foundry Local

1. On-device only: try Foundry Local first, fall back to a local ONNX model. (Here foundry_runtime, onnx_model, bert_model, and logger are the sample app's own helper objects.)

```python
if foundry_runtime.check_foundry_available():
    # Use on-device Foundry Local models
    try:
        answer = foundry_runtime.run_inference(question, context)
        return answer, "Foundry Local (On-Device)"
    except Exception as e:
        logger.warning(f"Foundry failed: {e}, trying ONNX...")

if onnx_model.is_loaded():
    # Fallback to local BERT ONNX model
    try:
        answer = bert_model.get_answer(question, context)
        return answer, "BERT ONNX (On-Device)"
    except Exception as e:
        logger.warning(f"ONNX failed: {e}")

return "Error: No local AI available"
```

2. Hybrid approach: on-device first, cloud as a last resort, depending on the real-time scenario.

```python
def get_answer(question, context):
    """
    Priority order:
      1. Foundry Local (best: advanced + private)
      2. ONNX Runtime (good: fast + private)
      3. Cloud API (fallback: requires internet, less private)
    """
    if foundry_runtime.check_foundry_available():
        # Use on-device Foundry Local models
        try:
            answer = foundry_runtime.run_inference(question, context)
            return answer, "Foundry Local (On-Device)"
        except Exception as e:
            logger.warning(f"Foundry failed: {e}, trying ONNX...")

    if onnx_model.is_loaded():
        # Fallback to local BERT ONNX model
        try:
            answer = bert_model.get_answer(question, context)
            return answer, "BERT ONNX (On-Device)"
        except Exception as e:
            logger.warning(f"ONNX failed: {e}, trying cloud...")

    # Last resort: Cloud API (requires internet)
    if network_available():
        try:
            import requests
            response = requests.post(
                '{BASE_URL_AI_CHAT_COMPLETION}',
                headers={'Authorization': f'Bearer {API_KEY}'},
                json={
                    'model': '{MODEL_NAME}',
                    'messages': [{
                        'role': 'user',
                        'content': f'Context: {context}\n\nQuestion: {question}'
                    }]
                },
                timeout=10
            )
            answer = response.json()['choices'][0]['message']['content']
            return answer, "Cloud API (Online)"
        except Exception as e:
            return "Error: No AI runtime available", "Failed"
    else:
        return "Error: No internet and no local AI available", "Offline"
```
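The hybrid fallback above calls a network_available() helper that the walkthrough leaves undefined. A minimal sketch of one way to implement it, using only the Python standard library (the probe host, port, and timeout are arbitrary choices, not part of the original sample):

```python
import socket

def network_available(host="1.1.1.1", port=53, timeout=3):
    """Best-effort connectivity check: try to open a TCP connection
    to a well-known public DNS endpoint. Returns False on any failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```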
Demo Project Output
Foundry Local answering context-based questions offline:
Screenshot 1: The Foundry Local engine ran the Phi-4-mini model offline and retrieved context-based data.
Screenshot 2: The Foundry Local engine ran the Phi-4-mini model offline and reported that there is no answer in the provided context.

Practical Use Cases
Privacy-First Reading Assistant: Summarize documents locally without sending text to the cloud.
Healthcare Apps: Analyze medical data on-device for compliance.
Financial Tools: Risk scoring without exposing sensitive financial data.
IoT & Edge Devices: Real-time anomaly detection without network dependency.

Conclusion
On-device AI isn't just a trend - it's a shift toward smarter, faster, and more secure applications. With Windows AI Foundry and Foundry Local, developers can deliver experiences that respect user data, reduce latency, and keep working when connectivity fails. By combining local inference with optional cloud enhancements, you get the best of both worlds: instant performance and scalable intelligence. Whether you're creating document summarizers, offline assistants, or compliance-ready solutions, this approach keeps your apps responsive, reliable, and user-centric.

References
Get started with Foundry Local - Foundry Local | Microsoft Learn
What is Windows AI Foundry? | Microsoft Learn
https://devblogs.microsoft.com/foundry/unlock-instant-on-device-ai-with-foundry-local/

Unlocking Your First AI Solution on Azure: Practical Paths for Developers of All Backgrounds
Over the past several months, I’ve spent hundreds of hours working directly with teams—from small startups to mid-market innovators—who share the same aspiration: “We want to use AI, but where do we start?” This question comes up everywhere. It crosses industries, geographies, skill levels, and team sizes. And as developers, we often feel the pressure to “solve AI” end-to-end—model selection, prompt engineering, security, deployment pipelines, integration. The list is long, and the learning curve can feel even longer.

But here’s what we’ve learned through our work in the SMB space and what we recently shared at Microsoft Ignite (Session OD1210): the first mile of AI doesn’t have to be complex. You don’t need an army of engineers, and you don’t need to start from scratch. You just need the right path. In our Ignite on-demand session with UnifyCloud, we demonstrated two fast, developer-friendly ways to get your first AI workload running on Azure—both grounded in real-world patterns we see every day.

Path 1: Build Quickly with Microsoft Foundry Templates
Microsoft Foundry gives developers pre-built, customizable templates that dramatically reduce setup time. In the session, I walked through how to deploy a fully functioning AI chatbot using Azure AI Foundry, GitHub (via the Azure Samples “Get Started with AI Chat” repo), Azure Cloud Shell for deployment, and zero specialized infra prep. With five lines of code and a few clicks, you can spin up a secure internal chatbot tailored for your business. Want responses scoped to your internal content? Want control over the model, costs, or safety filters? Want to plug in your own data sources like SharePoint, Blob Storage, or uploaded docs? You can do all of that—easily and on your terms.

This “build fast” path is ideal for developers who want control and extensibility, teams validating AI use cases, scenarios where data governance matters, and lightweight experimentation without heavy architecture upfront. And most importantly, you can scale it later.

Path 2: Buy a Production-Ready Solution from a Trusted Partner
Not every team wants to build. Not every team has the time, the resources, or the desire to compose their own AI stack. That’s why we showcased the “buy” path with UnifyCloud’s AI Factory, a Marketplace-listed solution that lets customers deploy mature AI capabilities directly into their Azure environment, complete with optional support, management, and best practices. In the demo, UnifyCloud’s founder Vivek Bhatnagar walked through how to navigate Microsoft Marketplace, how to evaluate solution listings, how to review pricing plans and support tiers, how to deploy a partner-built AI app with just a few clicks, and how customers can accelerate their time to value without implementation overhead.

This path is perfect when you want a production-ready AI solution, a supported, maintained experience, minimal engineering lift, and a faster time to outcome.

Why Azure? Why Now?
During the session, we also outlined three reasons developers are choosing Azure for their first AI workloads:
1. Secure, governed, safe by design. Azure mitigates risk with always-on guardrails and built-in commitments to security, privacy, and policy-based control.
2. Built for production with a complete AI platform. From models to agents to tools and data integrations, Azure provides an enterprise-grade environment developers can trust.
3. Developer-first innovation with agentic DevOps. Azure puts developers at the center, integrating AI across the software development lifecycle to help teams build faster and smarter.

The Session: Build or Buy—Two Paths, One Goal
Whether you build using Azure AI Foundry or buy through Marketplace, the goal is the same: help teams get to their first AI solution quickly, confidently, and securely. You don’t need a massive budget. You don’t need deep ML experience. You don’t need a full-time AI team. What you need is a path that matches your skills, your constraints, and your timeline.

Watch the Full Ignite Session
You can watch the full session on-demand now, also on YouTube: OD1201 — “Unlock Your First AI Solution on Azure”. It includes the full build and buy demos, partner perspectives, deployment walkthroughs, and guidance you can take back to your teams today.

If you want to explore the same build path we showed at Ignite:
➡️ Azure Samples – Get Started with AI Chat: https://github.com/Azure-Samples/get-started-with-ai-chat
Deploy it, customize it, attach your data sources, and extend it. It’s a great starting point.

If you’re curious about the Marketplace path:
➡️ Search for “UnifyCloud AI Factory” on Microsoft Marketplace
You’ll see support offerings, solution details, and deployment instructions.

Closing Thought
The gap between wanting to adopt AI and actually running AI in production is shrinking fast. Azure makes it possible for teams, especially those without deep AI experience, to take meaningful steps today. No perfect architecture required. No million-dollar budget. No wait for a future-state roadmap. Just two practical paths: Build quickly. Buy confidently. Start now.

If you have questions, ideas, or want to share what you’re building, feel free to reach out here in the Developer Community. I’d love to hear what you’re creating.

— Joshua Huang, Microsoft Azure

Background tasks in .NET
What is a Background Task?
A background task (or background service) is work that runs behind the scenes in an application without blocking the main user flow and often without direct user interaction. Think of it as a worker or helper that performs tasks independently while the main app continues doing other things.

Problem Statement - What do you do when your downstream API is flaky or sometimes down for hours or even days, yet your UI and main API must stay responsive?

Solution - This is a very common architecture problem in enterprise systems, and .NET gives us excellent tools to solve it cleanly: BackgroundService and exponential backoff retry logic.

In this article, I’ll walk you through a real production-like use case, the architecture needed to make it reliable, why exponential backoff matters, how to build a robust BackgroundService, and a full working code example.

The Use Case
You have two APIs:
API 1: called frequently by the UI (hundreds or thousands of times).
API 2: a downstream system you must call, but it is known to be unstable, slow, or completely offline for long periods.
If API 1 directly calls API 2:
* Users experience lag
* API 1 becomes slow or unusable
* You overload API 2 with retries
* Calls fail when API 2 is offline
* You lose data
What do we do then? Here goes the solution.

The Architecture
Instead of calling API 2 synchronously, API 1 simply stores the intended call and returns immediately. A BackgroundService will later:
Poll for pending jobs
Call API 2
Retry with exponential backoff if API 2 is still unavailable
Mark jobs as completed when successful
This creates a resilient, smooth, non-blocking system.

Why Exponential Backoff?
When a downstream API is completely offline, retrying every 1–5 seconds is disastrous:
It wastes CPU and bandwidth
It floods logs
It overloads API 2 when it comes back online
It burns resources
Exponential backoff solves this. Example retry delays:
Retry 1 → 2 sec
Retry 2 → 4 sec
Retry 3 → 8 sec
Retry 4 → 16 sec
Retry 5 → 32 sec
Retry 6 → 64 sec
(Max delay capped at 5 minutes)
This gives the system room to breathe.

Complete Working Example (Using In-Memory Store)

1. The Model

```csharp
public class PendingJob
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Payload { get; set; } = string.Empty;
    public int RetryCount { get; set; } = 0;
    public DateTime NextRetryTime { get; set; } = DateTime.UtcNow;
    public bool Completed { get; set; } = false;
}
```

2. The In-Memory Store

```csharp
public interface IPendingJobStore
{
    Task AddJobAsync(string payload);
    Task<List<PendingJob>> GetExecutableJobsAsync();
    Task MarkJobAsCompletedAsync(Guid jobId);
    Task UpdateJobAsync(PendingJob job);
}

public class InMemoryPendingJobStore : IPendingJobStore
{
    private readonly List<PendingJob> _jobs = new();
    private readonly object _lock = new();

    public Task AddJobAsync(string payload)
    {
        lock (_lock)
        {
            _jobs.Add(new PendingJob
            {
                Payload = payload,
                RetryCount = 0,
                NextRetryTime = DateTime.UtcNow
            });
        }
        return Task.CompletedTask;
    }

    public Task<List<PendingJob>> GetExecutableJobsAsync()
    {
        lock (_lock)
        {
            return Task.FromResult(
                _jobs.Where(j => !j.Completed && j.NextRetryTime <= DateTime.UtcNow).ToList());
        }
    }

    public Task MarkJobAsCompletedAsync(Guid jobId)
    {
        lock (_lock)
        {
            var job = _jobs.FirstOrDefault(j => j.Id == jobId);
            if (job != null) job.Completed = true;
        }
        return Task.CompletedTask;
    }

    public Task UpdateJobAsync(PendingJob job) => Task.CompletedTask;
}
```
3. The BackgroundService with Exponential Backoff

```csharp
using System.Text;

public class Api2RetryService : BackgroundService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly IPendingJobStore _store;
    private readonly ILogger<Api2RetryService> _logger;

    public Api2RetryService(IHttpClientFactory clientFactory, IPendingJobStore store, ILogger<Api2RetryService> logger)
    {
        _clientFactory = clientFactory;
        _store = store;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _logger.LogInformation("Background retry service started.");

        while (!stoppingToken.IsCancellationRequested)
        {
            var jobs = await _store.GetExecutableJobsAsync();

            foreach (var job in jobs)
            {
                var client = _clientFactory.CreateClient("api2");

                try
                {
                    var response = await client.PostAsync("/simulate",
                        new StringContent(job.Payload, Encoding.UTF8, "application/json"), stoppingToken);

                    if (response.IsSuccessStatusCode)
                    {
                        _logger.LogInformation("Job {JobId} processed successfully.", job.Id);
                        await _store.MarkJobAsCompletedAsync(job.Id);
                    }
                    else
                    {
                        await HandleFailure(job);
                    }
                }
                catch (Exception ex)
                {
                    _logger.LogError(ex, "Error calling API 2.");
                    await HandleFailure(job);
                }
            }

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }

    private async Task HandleFailure(PendingJob job)
    {
        job.RetryCount++;
        var delay = CalculateBackoff(job.RetryCount);
        job.NextRetryTime = DateTime.UtcNow.Add(delay);
        await _store.UpdateJobAsync(job);
        _logger.LogWarning("Retrying job {JobId} in {Delay}. RetryCount={RetryCount}", job.Id, delay, job.RetryCount);
    }

    private TimeSpan CalculateBackoff(int retryCount)
    {
        var seconds = Math.Pow(2, retryCount);
        var maxSeconds = TimeSpan.FromMinutes(5).TotalSeconds;
        return TimeSpan.FromSeconds(Math.Min(seconds, maxSeconds));
    }
}
```
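One optional refinement, not part of the original sample: adding random jitter to the backoff so that many stuck jobs don't all wake up and hammer API 2 at the same instant once it recovers. A minimal sketch of a jittered variant of CalculateBackoff that would slot into the Api2RetryService class above:

```csharp
// Hypothetical variant of CalculateBackoff that spreads retries out with jitter.
// It multiplies the exponential delay by a random factor between 0.75 and 1.25,
// still capping the result at 5 minutes.
private TimeSpan CalculateBackoffWithJitter(int retryCount)
{
    var seconds = Math.Pow(2, retryCount);
    var maxSeconds = TimeSpan.FromMinutes(5).TotalSeconds;
    var jitterFactor = 0.75 + (Random.Shared.NextDouble() * 0.5);
    return TimeSpan.FromSeconds(Math.Min(seconds * jitterFactor, maxSeconds));
}
```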
4. The API 1 — Public Endpoint

```csharp
using System.Text.Json;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api1")]
public class Api1Controller : ControllerBase
{
    private readonly IPendingJobStore _store;
    private readonly ILogger<Api1Controller> _logger;

    public Api1Controller(IPendingJobStore store, ILogger<Api1Controller> logger)
    {
        _store = store;
        _logger = logger;
    }

    [HttpPost("process")]
    public async Task<IActionResult> Process([FromBody] object data)
    {
        var payload = JsonSerializer.Serialize(data);
        await _store.AddJobAsync(payload);
        _logger.LogInformation("Stored job for background processing.");
        return Ok("Request received. Will process when API 2 recovers.");
    }
}
```

5. The API 2 (Simulating Downtime)

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api2")]
public class Api2Controller : ControllerBase
{
    private static bool shouldFail = true;

    [HttpPost("simulate")]
    public IActionResult Simulate([FromBody] object payload)
    {
        if (shouldFail)
            return StatusCode(503, "API 2 is down");

        return Ok("API 2 processed payload");
    }

    [HttpPost("toggle")]
    public IActionResult Toggle()
    {
        shouldFail = !shouldFail;
        return Ok($"API 2 failure mode = {shouldFail}");
    }
}
```

6. The Program.cs

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddSingleton<IPendingJobStore, InMemoryPendingJobStore>();
builder.Services.AddHttpClient("api2", c =>
{
    c.BaseAddress = new Uri("http://localhost:5000/api2");
});
builder.Services.AddHostedService<Api2RetryService>();

var app = builder.Build();
app.MapControllers();
app.Run();
```

Testing the Whole Flow

#1 API 2 starts in failure mode
All attempts will fail and exponential backoff kicks in.

#2 Send a request to API 1
POST /api1/process
{ "name": "hello" }
Job is stored.

#3 Watch logs
You’ll see:
Retrying job in 2 seconds...
Retrying job in 4 seconds...
Retrying job in 8 seconds...
...

#4 Bring API 2 back online
POST /api2/toggle
The next retry will succeed:
Job {id} processed successfully.

Conclusion
This pattern is extremely powerful for payment gateways, ERP integrations, long-running partner APIs, unstable third-party services, and internal microservices that spike or go offline.

References
Background tasks with hosted services in ASP.NET Core

AI Toolkit Extension Pack for Visual Studio Code: Ignite 2025 Update
Unlock the Latest Agentic App Capabilities
The Ignite 2025 update delivers a major leap forward for the AI Toolkit extension pack in VS Code, introducing a unified, end-to-end environment for building, visualizing, and deploying agentic applications to Microsoft Foundry, plus the addition of Anthropic’s frontier Claude models in the Model Catalog! This release enables developers to build and debug locally in VS Code, then deploy to the cloud with a single click. Seamlessly switch between VS Code and the Foundry portal for visualization, orchestration, and evaluation, creating a smooth roundtrip workflow that accelerates innovation and delivers a truly unified AI development experience. Download the AI Toolkit extension pack (http://aka.ms/aitoolkit) today and start building next-generation agentic apps in VS Code!

What Can You Do with the AI Toolkit Extension Pack?

Access Anthropic models in the Model Catalog
Following the Microsoft, NVIDIA, and Anthropic strategic partnership announcement today, we are excited to share that Anthropic’s frontier Claude models, including Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5, are now integrated into the AI Toolkit, providing even more choice and flexibility when building intelligent applications and AI agents.

Build AI Agents Using GitHub Copilot
Scaffold agent applications using best-practice patterns, tool-calling examples, tracing hooks, and test scaffolds, all powered by Copilot and aligned with the Microsoft Agent Framework. Generate agent code in Python or .NET, giving you flexibility to target your preferred runtime.

Build and Customize YAML Workflows
Design YAML-based workflows in the Foundry portal, then continue editing and testing directly in VS Code. To customize a YAML-based workflow, instantly convert it to Agent Framework code using GitHub Copilot. Upgrade from declarative design to code-first customization without starting from scratch.

Visualize Multi-Agent Workflows
Envision your code-based agent workflows with an interactive graph visualizer that reveals each component and how they connect. Watch in real time as each node lights up while your agent runs. Use the visualizer to understand and debug complex agent graphs, making iteration fast and intuitive.

Experiment, Debug, and Evaluate Locally
Use the Hosted Agents Playground to quickly interact with your agents on your development machine. Leverage local tracing support to debug reasoning steps, tool calls, and latency hotspots, so you can quickly diagnose and fix issues. Define metrics, tasks, and datasets for agent evaluation, then implement metrics using the Foundry Evaluation SDK and orchestrate evaluation runs with the help of Copilot.

Seamless Integration Across Environments
Jump from the Foundry portal to VS Code Web for a development environment in your preferred code editor setting. Open YAML workflows, playgrounds, and agent templates directly in VS Code for editing and deployment.

How to Get Started
Install the AI Toolkit extension pack from the VS Code marketplace and check out the documentation. Get started with building workflows with Microsoft Foundry in VS Code:
1. Work with Hosted (Pro-code) Agent workflows in VS Code
2. Work with Declarative (Low-code) Agent workflows in VS Code

Feedback & Support
Try out the extensions and let us know what you think! File issues or feedback on our GitHub repos for the Foundry extension and the AI Toolkit extension. Your input helps us make continuous improvements.

Simplifying Microservice Reliability with Dapr
What is Dapr?
Dapr is an open-source runtime developed by Microsoft for building resilient, event-driven, and portable applications. It works using the sidecar pattern, meaning every microservice gets a small companion container — the Dapr sidecar — which handles communication, retries, secrets, state, and more.

What is a Sidecar?
A sidecar is a helper process that runs beside your app, handling system tasks so your code can focus on business logic.

Let's look at some of Dapr's building blocks, along with examples.

#1. Bindings
Connects your app to external systems (like queues, email, or storage) with zero SDK or protocol handling.

Without Dapr ❌

```csharp
var httpClient = new HttpClient();
await httpClient.PostAsJsonAsync("https://api.sendgrid.com/send", email);
```

* Manage HTTP endpoints & credentials
* Change provider → rewrite logic

With Dapr ✅

```csharp
await daprClient.InvokeBindingAsync("send-email", "create", email);
```

* One call, no SDK
* Replace SendGrid → SMTP → Twilio just by editing config
* No code change, no redeploy

How to enable a binding for an Azure Container App
1. Open the Azure Portal → go to your Container App Environment.
2. From the left pane, click Container Apps and choose your desired app (e.g., orders-api).
3. In the Settings section, select Dapr.
4. Enable the Dapr toggle → switch it ON.
5. Provide the basic Dapr settings: App ID: a unique name (e.g., orders-app). App Port: the internal port your API listens on (e.g., 8080). App Protocol: HTTP or gRPC (usually HTTP).
6. Click Save to apply.
7. Now, under the same Container App Environment, go to Dapr Components.
8. Click Create → select Binding → choose the type of binding (e.g., azure.storagequeues).

#2. Configuration
Centralizes app settings, allowing live configuration updates without redeploying services.

Without Dapr ❌

```csharp
var featureFlag = Configuration["FeatureX"];
```

* Requires redeploys for every config change
* No centralized versioning or dynamic update

With Dapr ✅

```csharp
var config = await daprClient.GetConfiguration("appconfigstore", new[] { "FeatureX" });
```

* Use Azure App Config, Consul, or any provider
* Centralized updates — no redeploys
* Consistent access via the Dapr SDK

How to enable configuration for an Azure Container App
Follow the same Dapr setup steps shown under Bindings, then in Dapr Components click Create → select Configuration → choose the type of configuration (e.g., configuration.azure.appconfig).

#3. Pub/Sub
Enables event-driven communication between microservices without needing to know each other's endpoints.

Without Dapr ❌

```csharp
var client = new ServiceBusClient("<connection-string>");
var sender = client.CreateSender("order-topic");
await sender.SendMessageAsync(new ServiceBusMessage(orderJson));
```

* Tied to Azure Service Bus
* Must manage SDKs, connections, retries
* Hard to switch to another broker (Kafka, RabbitMQ)

With Dapr ✅

```csharp
await daprClient.PublishEventAsync("pubsub", "order-created", order);
```

* pubsub component defined in YAML (can be Kafka, Redis Streams, etc.)
* No SDK, no broker dependency
* Just publish the event — Dapr handles transport & retries

How to enable pub/sub for an Azure Container App
Follow the same Dapr setup steps shown under Bindings, then in Dapr Components click Create → select Pub/Sub → choose the type of pub/sub component (e.g., pubsub.azure.servicebus.topics).
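The snippet above covers only the publishing side. For completeness, here is a minimal sketch of a subscriber in ASP.NET Core, assuming the Dapr.AspNetCore package; the component name ("pubsub") and topic ("order-created") must match what the publisher uses, and the payload type is whatever your app actually publishes:

```csharp
using Dapr;
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class OrdersSubscriberController : ControllerBase
{
    // Dapr delivers messages published to the "order-created" topic
    // of the "pubsub" component to this endpoint.
    [Topic("pubsub", "order-created")]
    [HttpPost("orders")]
    public IActionResult OnOrderCreated([FromBody] object order) // replace object with your Order type
    {
        // Handle the event (e.g., persist it, trigger downstream work).
        return Ok();
    }
}

// In Program.cs, the subscription wiring also needs:
//   builder.Services.AddControllers().AddDapr();
//   app.UseCloudEvents();
//   app.MapSubscribeHandler();
```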
#4. Secret Stores
Securely retrieves credentials and secrets from vaults, keeping them out of configs and code.

Without Dapr ❌

```csharp
var connString = Configuration["ConnectionStrings:DB"];
```

* Secrets stored in configs or env vars
* Risk of leaks and manual rotation

With Dapr ✅

```csharp
var secret = await daprClient.GetSecretAsync("vault", "dbConnection");
```

* Fetch directly from Azure Key Vault, AWS Secrets Manager, etc.
* No secrets in configs
* Secure by default, consistent across services

How to enable secret stores for an Azure Container App
Follow the same Dapr setup steps shown under Bindings, then in Dapr Components click Create → select Secret stores → choose the type of secret store (e.g., secretstores.azure.keyvault).

#5. State
Provides a consistent way to store and retrieve application data across services using a simple API.

Without Dapr ❌

```csharp
var cosmosClient = new CosmosClient(connStr);
var container = cosmosClient.GetContainer("db", "state");
await container.UpsertItemAsync(order);
```

* Direct dependency on Cosmos DB
* Manual retry logic
* Tight coupling to storage type

With Dapr ✅

```csharp
await daprClient.SaveStateAsync("statestore", "order-101", order);
var data = await daprClient.GetStateAsync<Order>("statestore", "order-101");
```

* Plug in any backend (Redis, Cosmos DB, PostgreSQL)
* Dapr handles retries and consistency
* Same code, different backend — total flexibility

How to enable state for an Azure Container App
Follow the same Dapr setup steps shown under Bindings, then in Dapr Components click Create → select State → choose the type of state store (e.g., state.azure.cosmosdb).

🧩 Summary
Think of Dapr as your invisible co-pilot for building distributed apps. It abstracts away all the repetitive plumbing — state management, pub/sub messaging, secret handling, and external bindings — letting you focus on writing features that matter.
With Dapr, you don’t just write code that runs locally; you write code that just works across clouds, containers, and environments, without having to worry about wiring up retries, event delivery, or service-to-service communication manually.

🧰 Demo Source Code
I've prepared a complete .NET Core sample that touches all the major Dapr features: State Store, Pub/Sub, Bindings, Configuration, and Secret Store. You can explore it at Github-Dapr-Api. Clone it, run it locally, and experiment — the project uses in-memory storage to keep things lightweight for testing and learning.

📚 References for Deep Dive
Official Dapr Docs
Dapr for .NET Developers — Microsoft Learn
Dapr .NET SDK GitHub

Python + AI: Recap and Resources
We've just wrapped up our Python + AI series, a complete nine-session journey exploring in depth how to use generative AI models from Python. Throughout the series we covered several types of models, including LLMs, embedding models, and vision models. We dug into popular techniques such as RAG, tool calling, and structured outputs. We evaluated AI quality and safety through automated evaluations and red-teaming. Finally, we built AI agents with popular Python frameworks and explored the new Model Context Protocol (MCP).

So you can put what you learned into practice, all of our examples work with GitHub Models, a service that offers free models to all GitHub users for experimentation and learning. Even if you didn't attend the live sessions, you can still access all the materials using the links below! If you're an instructor, you can use the slides and code in your own classes.

Python + AI: Large Language Models (LLMs)
📺 Watch the recording
In this session we explore LLMs, the models that power ChatGPT and GitHub Copilot. We use Python with packages such as the OpenAI SDK and LangChain, experiment with prompt engineering and few-shot examples, and build a complete LLM-based application. We also explain why concurrency and streaming matter in AI apps.
Slides: aka.ms/pythonia/diapositivas/llms
Code: python-openai-demos
Repository walkthrough: video

Python + AI: Vector Embeddings
📺 Watch the recording
In our second session, we learn about vector embedding models, which convert text or images into numeric arrays. We compare distance metrics, apply quantization, and experiment with multimodal models.
Slides: aka.ms/pythonia/diapositivas/embeddings
Code: vector-embedding-demos
Repository walkthrough: video

Python + AI: Retrieval Augmented Generation (RAG)
📺 Watch the recording
We discover how to use RAG to improve LLM responses by adding relevant context. We build RAG flows in Python from different sources (CSVs, websites, documents, and databases) and finish with a complete application based on Azure AI Search.
Slides: aka.ms/pythonia/diapositivas/rag
Code: python-openai-demos
Repository walkthrough: video

Python + AI: Vision Models
📺 Watch the recording
Vision models accept both text and images, like GPT-4o and GPT-4o mini. We build an image chat app, perform data extraction, and build a multimodal search engine.
Slides: aka.ms/pythonia/diapositivas/vision
Code: vector-embeddings
Repository walkthrough: video

Python + AI: Structured Outputs
📺 Watch the recording
We learn how to generate structured responses from LLMs using a Pydantic BaseModel. This approach enables automatic validation of the results, which is useful for entity extraction, classification, and agent flows.
Slides: aka.ms/pythonia/diapositivas/salidas
Code: python-openai-demos and entity-extraction-demos
Repository walkthrough: video
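As a quick illustration of the structured-outputs idea, here is a minimal sketch using a Pydantic model with the OpenAI Python SDK's parse helper; the model name and client configuration are placeholders (with GitHub Models you would point the client at its endpoint and token), and the helper assumes a recent version of the openai package:

```python
from pydantic import BaseModel
from openai import OpenAI

class Guest(BaseModel):
    name: str
    company: str
    topics: list[str]

client = OpenAI()  # configure base_url/api_key for your provider, e.g. GitHub Models

# Ask the model for output that conforms to the Guest schema.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Extract the guest mentioned in: 'Ana from Contoso will talk about RAG and agents.'",
    }],
    response_format=Guest,
)

guest = completion.choices[0].message.parsed  # a validated Guest instance
print(guest)
```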
Python + AI: Quality and Safety
📺 Watch the recording
We look at how to use AI safely and how to evaluate the quality of responses. We show how to configure Azure AI Content Safety and use the Azure AI Evaluation SDK to measure model outputs.
Slides: aka.ms/pythonia/diapositivas/calidad
Code: ai-quality-safety-demos
Repository walkthrough: video

Python + AI: Tool Calling
📺 Watch the recording
We explore tool calling, the foundation for building AI agents. We define tools with JSON schemas and Python functions, and handle parallel calls and iterative flows.
Slides: aka.ms/pythonia/diapositivas/herramientas
Code: python-openai-demos
Repository walkthrough: video

Python + AI: AI Agents
📺 Watch the recording
We build AI agents with frameworks such as Microsoft's agent-framework and LangGraph, showing architectures with multiple tools, supervisors, and human-in-the-loop flows.
Slides: aka.ms/pythonia/diapositivas/agentes
Code: python-ai-agents-demos
Repository walkthrough: video

Python + AI: Model Context Protocol (MCP)
📺 Watch the recording
We close the series with MCP (Model Context Protocol), the most innovative technology of 2025. We show how to use the FastMCP SDK in Python to create a local MCP server, connect it to GitHub Copilot, build an MCP client, and connect frameworks such as LangGraph and agent-framework. We also discuss the associated security risks.
Slides: aka.ms/pythonia/diapositivas/mcp
Code: python-ai-mcp-demos
Repository walkthrough: video

In addition
If you have questions, please ask them in the #Espanol channel on our Discord: https://aka.ms/pythonia/discord
I hold office hours every Thursday: https://aka.ms/pythonia/horas
Find more tutorials on Python + AI, 100% in Spanish, at https://youtube.com/@lagps

Introducing langchain-azure-storage: Azure Storage integrations for LangChain
We're excited to introduce langchain-azure-storage, the first official Azure Storage integration package built by Microsoft for LangChain 1.0. As part of its launch, we've built a new Azure Blob Storage document loader (currently in public preview) that improves upon prior LangChain community implementations. This new loader unifies both blob and container level access, simplifying loader integration. More importantly, it offers enhanced security through default OAuth 2.0 authentication, supports reliably loading millions to billions of documents through efficient memory utilization, and allows pluggable parsing, so you can leverage other document loaders to parse specific file formats.

What are LangChain document loaders?
A typical Retrieval-Augmented Generation (RAG) pipeline follows these main steps:
1. Collect source content (PDFs, DOCX, Markdown, CSVs) — often stored in Azure Blob Storage.
2. Parse into text and associated metadata (i.e., represented as LangChain Document objects).
3. Chunk + embed those documents and store them in a vector store (e.g., Azure AI Search, Postgres pgvector, etc.).
4. At query time, retrieve the most relevant chunks and feed them to an LLM as grounded context.
LangChain document loaders make steps 1–2 turnkey and consistent so the rest of the stack (splitters, vector stores, retrievers) “just works”. See this LangChain RAG tutorial for a full example of these steps when building a RAG application in LangChain.

How can the Azure Blob Storage document loader help?
The langchain-azure-storage package offers the AzureBlobStorageLoader, a document loader that simplifies retrieving documents stored in Azure Blob Storage for use in a LangChain RAG application. Key benefits of the AzureBlobStorageLoader include:
Flexible loading of Azure Storage blobs to LangChain Document objects. You can load blobs as documents from an entire container, a specific prefix within a container, or by blob names. Each document loaded corresponds 1:1 to a blob in the container.
Lazy loading support for improved memory efficiency when dealing with large document sets. Documents can now be loaded one at a time as you iterate over them instead of all at once.
Automatic use of DefaultAzureCredential to enable seamless OAuth 2.0 authentication across environments, from local development to Azure-hosted services. You can also explicitly pass your own credential (e.g., ManagedIdentityCredential, a SAS token).
Pluggable parsing. Easily customize how documents are parsed by providing your own LangChain document loader to parse downloaded blob content.
Using the Azure Blob Storage document loader

Installation
To install the langchain-azure-storage package, run:

```bash
pip install langchain-azure-storage
```

Loading documents from a container
To load all blobs from an Azure Blob Storage container as LangChain Document objects, instantiate the AzureBlobStorageLoader with the Azure Storage account URL and container name:

```python
from langchain_azure_storage.document_loaders import AzureBlobStorageLoader

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>"
)

# lazy_load() yields one Document per blob for all blobs in the container
for doc in loader.lazy_load():
    print(doc.metadata["source"])  # The "source" metadata contains the full URL of the blob
    print(doc.page_content)        # The page_content contains the blob's content decoded as UTF-8 text
```

Loading documents by blob names
To load only specific blobs as LangChain Document objects, you can additionally provide a list of blob names:

```python
from langchain_azure_storage.document_loaders import AzureBlobStorageLoader

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>",
    ["<blob-name-1>", "<blob-name-2>"]
)

# lazy_load() yields one Document per blob for only the specified blobs
for doc in loader.lazy_load():
    print(doc.metadata["source"])  # The "source" metadata contains the full URL of the blob
    print(doc.page_content)        # The page_content contains the blob's content decoded as UTF-8 text
```

Pluggable parsing
By default, loaded Document objects contain the blob's UTF-8 decoded content. To parse non-UTF-8 content (e.g., PDFs, DOCX, etc.) or chunk blob content into smaller documents, provide a LangChain document loader via the loader_factory parameter. When loader_factory is provided, the AzureBlobStorageLoader processes each blob with the following steps:
1. Downloads the blob to a new temporary file
2. Passes the temporary file path to the loader_factory callable to instantiate a document loader
3. Uses that loader to parse the file and yield Document objects
4. Cleans up the temporary file

For example, the following shows parsing PDF documents with the PyPDFLoader from the langchain-community package:

```python
from langchain_azure_storage.document_loaders import AzureBlobStorageLoader
from langchain_community.document_loaders import PyPDFLoader  # Requires langchain-community and pypdf packages

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>",
    prefix="pdfs/",              # Only load blobs that start with "pdfs/"
    loader_factory=PyPDFLoader   # PyPDFLoader will parse each blob as a PDF
)

# Each blob is downloaded to a temporary file and parsed by a PyPDFLoader instance
for doc in loader.lazy_load():
    print(doc.page_content)  # Content parsed by PyPDFLoader (yields one Document per page in the PDF)
```

This file path-based interface allows you to use any LangChain document loader that accepts a local file path as input, giving you access to a wide range of parsers for different file formats.
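To connect this back to the RAG pipeline described above, here is a small sketch (not from the original post) that feeds the lazily loaded documents into a text splitter before embedding; it assumes the langchain-text-splitters package is installed and that the chunk sizes are tuned for your own content:

```python
from langchain_azure_storage.document_loaders import AzureBlobStorageLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = AzureBlobStorageLoader(
    "https://<your-storage-account>.blob.core.windows.net/",
    "<your-container-name>"
)
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

# Split each blob's document into chunks as it is loaded, keeping memory usage low.
chunks = []
for doc in loader.lazy_load():
    chunks.extend(splitter.split_documents([doc]))

# `chunks` is now ready to be embedded and written to your vector store of choice.
print(f"Produced {len(chunks)} chunks")
```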
Migrating from community document loaders to langchain-azure-storage
If you're currently using AzureBlobStorageContainerLoader or AzureBlobStorageFileLoader from the langchain-community package, the new AzureBlobStorageLoader provides an improved alternative. This section provides step-by-step guidance for migrating to the new loader.

Steps to migrate
To migrate to the new Azure Storage document loader, make the following changes:
1. Depend on the langchain-azure-storage package.
2. Update import statements from langchain_community.document_loaders to langchain_azure_storage.document_loaders.
3. Change class names from AzureBlobStorageFileLoader and AzureBlobStorageContainerLoader to AzureBlobStorageLoader.
4. Update document loader constructor calls to use an account URL instead of a connection string, and specify UnstructuredLoader as the loader_factory to continue using Unstructured for parsing documents.
5. Enable Microsoft Entra ID authentication in your environment (e.g., run az login or configure a managed identity) instead of using connection string authentication.

Migration samples
Below are code snippets showing what usage patterns look like before and after migrating from langchain-community to langchain-azure-storage:

Before migration

```python
from langchain_community.document_loaders import AzureBlobStorageContainerLoader, AzureBlobStorageFileLoader

container_loader = AzureBlobStorageContainerLoader(
    "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>;EndpointSuffix=core.windows.net",
    "<container>",
)

file_loader = AzureBlobStorageFileLoader(
    "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>;EndpointSuffix=core.windows.net",
    "<container>",
    "<blob>"
)
```

After migration

```python
from langchain_azure_storage.document_loaders import AzureBlobStorageLoader
from langchain_unstructured import UnstructuredLoader  # Requires langchain-unstructured and unstructured packages

container_loader = AzureBlobStorageLoader(
    "https://<account>.blob.core.windows.net",
    "<container>",
    loader_factory=UnstructuredLoader  # Only needed if continuing to use Unstructured for parsing
)

file_loader = AzureBlobStorageLoader(
    "https://<account>.blob.core.windows.net",
    "<container>",
    "<blob>",
    loader_factory=UnstructuredLoader  # Only needed if continuing to use Unstructured for parsing
)
```

What's next?
We're excited for you to try the new Azure Blob Storage document loader and would love to hear your feedback! Here are some ways you can help shape the future of langchain-azure-storage:
Show support for interface stabilization - The document loader is currently in public preview and the interface may change in future versions based on feedback. If you'd like to see the current interface marked as stable, upvote the proposal PR to show your support.
Report issues or suggest improvements - Found a bug or have an idea to make the document loaders better? File an issue on our GitHub repository.
Propose new LangChain integrations - Interested in other ways to use Azure Storage with LangChain (e.g., checkpointing for agents, persistent memory stores, retriever implementations)? Create a feature request or write to us to let us know.
Your input is invaluable in making langchain-azure-storage better for the entire community!

Resources
langchain-azure GitHub repository
langchain-azure-storage PyPI package
AzureBlobStorageLoader usage guide
AzureBlobStorageLoader documentation reference

Serverless MCP Agent with LangChain.js v1 — Burgers, Tools, and Traces 🍔
AI agents that can actually do stuff (not just chat) are the fun part nowadays, but wiring them cleanly into real APIs, keeping things observable, and shipping them to the cloud can get... messy. So we built a fresh end-to-end sample to show how to do it right with the brand new LangChain.js v1 and the Model Context Protocol (MCP). In case you missed it, MCP is a recent open standard that makes it easy for LLM agents to consume tools and APIs, and LangChain.js, a great framework for building GenAI apps and agents, has first-class support for it. You can quickly get up to speed with the MCP for Beginners course and the AI Agents for Beginners course.

This new sample gives you:
A LangChain.js v1 agent that streams its result, along with reasoning + tool steps
An MCP server exposing real tools (burger menu + ordering) from a business API
A web interface with authentication, session history, and a debug panel (for developers)
A production-ready multi-service architecture
Serverless deployment on Azure in one command (azd up)

Yes, it’s a burger ordering system. Who doesn't like burgers? Grab your favorite beverage ☕, and let’s dive in for a quick tour!

TL;DR key takeaways
New sample: full-stack Node.js AI agent using LangChain.js v1 + MCP tools
Architecture: web app → agent API → MCP server → burger API
Runs locally with a single npm start, deploys with azd up
Uses streaming (NDJSON) with intermediate tool + LLM steps surfaced to the UI
Ready to fork, extend, and plug into your own domain / tools

What will you learn here?
What this sample is about and its high-level architecture
What LangChain.js v1 brings to the table for agents
How to deploy and run the sample
How MCP tools can expose real-world APIs
Reference links for everything we use: GitHub repo, LangChain.js docs, Model Context Protocol, Azure Developer CLI, MCP Inspector

Use case
You want an AI assistant that can take a natural language request like “Order two spicy burgers and show me my pending orders” and:
Understand intent (query menu, then place order)
Call the right MCP tools in sequence, calling in turn the necessary APIs
Stream progress (LLM tokens + tool steps)
Return a clean final answer
Swap “burgers” for “inventory”, “bookings”, “support tickets”, or “IoT devices” and you’ve got a reusable pattern!

Sample overview
Before we play a bit with the sample, let's have a look at the main services implemented here:

| Service | Role | Tech |
|---------|------|------|
| Agent Web App (agent-webapp) | Chat UI + streaming + session history | Azure Static Web Apps, Lit web components |
| Agent API (agent-api) | LangChain.js v1 agent orchestration + auth + history | Azure Functions, Node.js |
| Burger MCP Server (burger-mcp) | Exposes the burger API as tools over MCP (Streamable HTTP + SSE) | Azure Functions, Express, MCP SDK |
| Burger API (burger-api) | Business logic: burgers, toppings, orders lifecycle | Azure Functions, Cosmos DB |

Here's a simplified view of how they interact: [architecture diagram] There are also other supporting components, like databases and storage, not shown here for clarity. For this quickstart we'll only interact with the Agent Web App and the Burger MCP Server, as they are the main stars of the show here.

LangChain.js v1 agent features
The recent release of LangChain.js v1 is a huge milestone for the JavaScript AI community! It marks a significant shift from experimental tools to a production-ready framework. The new version doubles down on what’s needed to build robust AI applications, with a strong focus on agents.
This includes first-class support for streaming not just the final output, but also intermediate steps like tool calls and agent reasoning. This makes building transparent and interactive agent experiences (like the one in this sample) much more straightforward.

Quickstart

Requirements
GitHub account
Azure account (free signup, or if you're a student, get free credits here)
Azure Developer CLI

Deploy and run the sample
We'll use GitHub Codespaces for a quick zero-install setup here, but if you prefer to run it locally, check the README. Click on the following link or open it in a new tab to launch a Codespace: Create Codespace. This will open a VS Code environment in your browser with the repo already cloned and all the tools installed and ready to go.

Provision and deploy to Azure
Open a terminal and run these commands:

```bash
# Install dependencies
npm install

# Login to Azure
azd auth login

# Provision and deploy all resources
azd up
```

Follow the prompts to select your Azure subscription and region. If you're unsure which one to pick, choose East US 2. The deployment will take about 15 minutes the first time, to create all the necessary resources (Functions, Static Web Apps, Cosmos DB, AI Models). If you're curious about what happens under the hood, you can take a look at the main.bicep file in the infra folder, which defines the infrastructure as code for this sample.

Test the MCP server
While the deployment is running, you can run the MCP server and API locally (even in Codespaces) to see how it works. Open another terminal and run:

```bash
npm start
```

This will start all services locally, including the Burger API and the MCP server, which will be available at http://localhost:3000/mcp. This may take a few seconds; wait until you see this message in the terminal:

🚀 All services ready 🚀

When these services are running without Azure resources provisioned, they will use in-memory data instead of Cosmos DB so you can experiment freely with the API and MCP server, though the agent won't be functional as it requires an LLM resource.

MCP tools
The MCP server exposes the following tools, which the agent can use to interact with the burger ordering system:

| Tool Name | Description |
|-----------|-------------|
| get_burgers | Get a list of all burgers in the menu |
| get_burger_by_id | Get a specific burger by its ID |
| get_toppings | Get a list of all toppings in the menu |
| get_topping_by_id | Get a specific topping by its ID |
| get_topping_categories | Get a list of all topping categories |
| get_orders | Get a list of all orders in the system |
| get_order_by_id | Get a specific order by its ID |
| place_order | Place a new order with burgers (requires userId, optional nickname) |
| delete_order_by_id | Cancel an order if it has not yet been started (status must be pending, requires userId) |

You can test these tools using the MCP Inspector. Open another terminal and run:

```bash
npx -y @modelcontextprotocol/inspector
```

Then open the URL printed in the terminal in your browser and connect using these settings:
Transport: Streamable HTTP
URL: http://localhost:3000/mcp
Connection Type: Via Proxy (should be the default)
Click on Connect, then try listing the tools first, and run the get_burgers tool to get the menu info.

Test the Agent Web App
After the deployment is completed, you can run the command npm run env to print the URLs of the deployed services. Open the Agent Web App URL in your browser (it should look like https://<your-web-app>.azurestaticapps.net).
You'll first be greeted by an authentication page; you can sign in with either your GitHub or Microsoft account, and then you should be able to access the chat interface. From there, you can start asking any question or use one of the suggested prompts, for example try asking: Recommend me an extra spicy burger. As the agent processes your request, you'll see the response streaming in real time, along with the intermediate steps and tool calls. Once the response is complete, you can also unfold the debug panel to see the full reasoning chain and the tools that were invoked.

Tip: Our agent service also sends detailed tracing data using OpenTelemetry. You can explore these traces either in Azure Monitor for the deployed service, or locally using an OpenTelemetry collector. We'll cover this in more detail in a future post.

Wrap it up
Congratulations, you just finished spinning up a full-stack serverless AI agent using LangChain.js v1, MCP tools, and Azure’s serverless platform. Now it's your turn to dive into the code and extend it for your own use cases! 😎 And don't forget to run azd down once you're done, to avoid any unwanted costs.

Going further
This was just a quick introduction to this sample, and you can expect more in-depth posts and tutorials soon. Since we're in the era of AI agents, we've also made sure that this sample can be explored and extended easily with code agents like GitHub Copilot. We even built a custom chat mode to help you discover and understand the codebase faster! Check out the Copilot setup guide in the repo to get started. You can quickly get up to speed with the MCP for Beginners course and the AI Agents for Beginners course. If you like this sample, don't forget to star the repo ⭐️! You can also join us in the Azure AI community Discord to chat and ask any questions. Happy coding and burger ordering! 🍔

LangChain v1 is now generally available!
Today LangChain v1 officially launches and marks a new era for the popular AI agent library. The new version ushers in a more streamlined and extensible foundation for building agentic LLM applications. In this post we'll break down what’s new, what changed, and what “general availability” means in practice. Join Microsoft Developer Advocates Marlene Mhangami and Yohan Lasorsa to see live demos of the new API and find out more about what JavaScript and Python developers need to know about v1. Register for this event here.

Why v1? The Motivation Behind the Redesign
The number of abstractions in LangChain had grown over the years to include chains, agents, tools, wrappers, prompt helpers, and more, which, while powerful, introduced complexity and fragmentation. As model APIs evolve (multimodal inputs, richer structured output, tool-calling semantics), LangChain needed a cleaner, more consistent core to ensure production-ready stability. In v1:
All existing chains and agent abstractions in the old LangChain are deprecated; they are replaced by a single high-level agent abstraction built on LangGraph internals.
LangGraph becomes the foundational runtime for durable, stateful, orchestrated execution. LangChain now emphasizes being the “fast path to agents” that doesn’t hide but builds upon LangGraph.
The internal message format has been upgraded to support standard content blocks (e.g. text, reasoning, citations, tool calls) across model providers, decoupling “content” from raw strings.
Namespace cleanup: the langchain package now focuses tightly on core abstractions (agents, models, messages, tools), while legacy patterns are moved into langchain-classic (or equivalents).

What’s New & Noteworthy for Developers
Here are the key changes developers should pay attention to:

1. create_agent becomes the default API
The create_agent function is now the idiomatic way to spin up agents in v1. It replaces older constructs (e.g. create_react_agent) with a clearer, more modular API. You can also now compose middleware around model calls, tool calls, before/after hooks, error handling, etc.
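To give a feel for the new API, here is a minimal Python sketch of create_agent with a single tool. Treat the exact imports, parameter names, and the model identifier as assumptions to verify against the v1 docs for your installed version:

```python
from langchain.agents import create_agent
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a (fake) weather report for a city."""
    return f"It's always sunny in {city}."

# The model identifier is a placeholder; any provider supported by LangChain v1 works here.
agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[get_weather],
    system_prompt="You are a concise assistant.",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Seattle?"}]}
)
print(result["messages"][-1].content)
```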
2. Standard content blocks & a normalized message model
One of LangChain's greatest strengths is its model agnosticism. Content blocks standardize all outputs, so developers know exactly what to expect regardless of the model they are using. Responses from models are no longer opaque strings. Instead, they carry structured content_blocks which classify parts of the output (e.g. “text”, “reasoning”, “citation”, “tool_call”).

3. Multimodal and richer model inputs / outputs
LangChain continues to support more than just text-based interactions, but in a more comprehensive way in v1. Models can accept and return files, images, video, etc., and the message format reflects this flexibility. This upgrade prepares us well for the next generation of models with mixed modalities (vision, audio, etc.).

4. Middleware hooks
Because create_agent is designed as a pluggable pipeline, developers can now inject logic before/after model calls, before tool calls, and more. New middleware such as human-in-the-loop and summarization middleware has been added. This is the feature of the new package that I am most excited about! Even with the simplified agents API, this option provides more room to customize workflows. Developers can try pre-built middleware or make their own.

5. Simplified, leaner namespace
Many formerly top-level modules or helper classes have been removed or relocated to langchain-classic (or similarly stamped “legacy”) to declutter the main API surface. A migration guide is available to help projects transition from v0 to v1. While v1 is now the main line, the older v0 is still documented and maintained for compatibility.

What “General Availability” Means (and Doesn’t)
v1 is production-ready, after testing through the alpha releases. The stable v0 release line remains supported for those unwilling or unable to migrate immediately. Breaking changes in public APIs will be accompanied by version bumps (i.e. minor version increments) and deprecation notices. The roadmap anticipates minor versions every 2–3 months (with patch releases more frequently). Because the field of LLM applications is evolving rapidly, the team expects continued iteration in v1—even in GA mode—with users encouraged to surface feedback, file issues, and adopt the migration path. (This is in line with the philosophy stated in the docs.)

Developer Callouts & Suggested Steps
Some things we recommend developers do to get started with v1:

Try the new API now! LangChain Azure AI and Azure OpenAI have migrated to LangChain v1 and are ready to test! Learn more about using LangChain and Azure AI:
Python: https://docs.langchain.com/oss/python/integrations/providers/azure_ai
JavaScript: https://docs.langchain.com/oss/javascript/integrations/providers/microsoft

Join us for a livestream on Wednesday 22 October 2025
Join Microsoft Developer Advocates Marlene Mhangami and Yohan Lasorsa for a livestream this Wednesday to see live demos and find out more about what JavaScript and Python developers need to know about v1. Register for this event here.