# How to Integrate Playwright MCP for AI-Driven Test Automation
Test automation has come a long way: from scripted flows to self-healing tests, and now to AI-driven testing. With the introduction of the Model Context Protocol (MCP), Playwright can now interact with AI models and external tools to make smarter testing decisions. This guide walks you through integrating MCP with Playwright in VS Code, starting from the basics, so you can build smarter, adaptive tests today.

## What Is Playwright MCP?

- **Playwright**: An open-source framework for web testing and automation. It supports multiple browsers (Chromium, Firefox, and WebKit) and offers robust features like auto-wait and screenshot capture, along with great tooling such as Codegen and Trace Viewer.
- **MCP (Model Context Protocol)**: A protocol that enables external tools to communicate with AI models or services in a structured, secure way.

By combining Playwright with MCP, you unlock:

- AI-assisted test generation.
- Dynamic test data.
- Smarter debugging and adaptive workflows.

## Why Integrate MCP with Playwright?

- **AI-powered test generation**: Reduce manual scripting.
- **Dynamic context awareness**: Tests adapt to real-time data.
- **Improved debugging**: AI can suggest fixes for failing tests.
- **Smarter locator selection**: AI helps pick stable, reliable selectors to reduce flaky tests.
- **Natural language instructions**: Write or trigger tests using plain English prompts.

## Getting Started in VS Code

### Prerequisites

- **Node.js**
  - Download: nodejs.org
  - Minimum version: v18.0.0 or higher (recommended: latest LTS)
  - Check version: `node --version`
- **Playwright**
  - Install Playwright: `npm install @playwright/test`

### Step 1: Create Project Folder

```bash
mkdir playwrightMCP-demo
cd playwrightMCP-demo
```

### Step 2: Initialize Project

```bash
npm init playwright@latest
```

### Step 3: Install MCP Server for VS Code

Navigate to GitHub - microsoft/playwright-mcp: Playwright MCP server and click "Install server" for VS Code. Then search for "MCP: Open user configuration" (type `>mcp` in the search box). You will see that a file named mcp.json has been created in your user AppData folder; it contains the server details:

```json
{
  "servers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "type": "stdio"
    }
  },
  "inputs": []
}
```

Alternatively, install an MCP server directly from the GitHub MCP server registry using the Extensions view in VS Code.

Verify installation: Open Copilot Chat → select Agent Mode → click Configure Tools → confirm microsoft/playwright-mcp appears in the list.

### Step 4: Create a Simple Test Using MCP

Once your project and MCP setup are ready in VS Code, you can create a simple test that demonstrates MCP's capabilities. MCP can help in multiple scenarios; below is an example of test generation using AI.

**Scenario: AI-Assisted Test Generation** — Use natural language prompts to generate Playwright tests automatically.

**Test scenario**: Validate that a user can switch the Playwright documentation language dropdown to Python, search for "Frames," and navigate to the Frames documentation page. Confirm that the page heading correctly displays "Frames."

Sample prompt to use in VS Code (Copilot Agent Mode):

> Create a Playwright automated test in JavaScript that verifies navigation to the 'Frames' documentation page following the steps below, and be specific about locators to avoid strict mode violation errors.
>
> 1. Navigate to the Playwright documentation.
> 2. Select "Python" from the language dropdown (labelled "Node.js" by default).
> 3. Type the keyword "Frames" into the search box.
> 4. Click the search result for the Frames documentation page.
> 5. Verify that the page header reads "Frames".
> 6. Log success or provide a failure message with details.

Copilot will generate the test automatically in your tests folder.

### Step 5: Run Test

```bash
npx playwright test
```

## Conclusion

Integrating Playwright with MCP in VS Code helps you build smarter, adaptive tests without adding complexity. Start small, follow best practices, and scale as you grow.

Note: Installation steps may vary depending on your environment. Refer to the MCP Registry on GitHub for the latest instructions.
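For reference, a test generated from a prompt like the one above might look roughly like the following. This is a hedged sketch, not the exact output Copilot will produce: the documentation URL and every locator here are assumptions based on the prompt, and the real page structure (and therefore the selectors Copilot picks) will vary.

```typescript
import { test, expect } from '@playwright/test';

test('navigates to the Frames documentation page', async ({ page }) => {
  // The docs URL and all locators below are illustrative assumptions.
  await page.goto('https://playwright.dev/docs/intro');

  // Switch the language dropdown (labelled "Node.js" by default) to Python.
  await page.getByRole('button', { name: 'Node.js' }).hover();
  await page.getByRole('link', { name: 'Python', exact: true }).click();

  // Search for "Frames" and open the matching result.
  await page.getByRole('button', { name: 'Search' }).click();
  await page.getByPlaceholder('Search docs').fill('Frames');
  await page.getByRole('link', { name: 'Frames', exact: true }).first().click();

  // Verify the page heading reads "Frames".
  await expect(page.getByRole('heading', { name: 'Frames', level: 1 })).toBeVisible();
});
```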
# Real-Time AI Streaming with Azure OpenAI and SignalR

**TL;DR**: We'll build a real-time AI app where Azure OpenAI streams responses and SignalR broadcasts them live to an Angular client. Users see answers appear incrementally, just like ChatGPT, while Azure SignalR Service handles scale. You'll learn the architecture, streaming code, Angular integration, and optional enhancements like typing indicators and multi-agent scenarios.

## Why This Matters

Modern users expect instant feedback. Waiting for a full AI response feels slow and breaks engagement. Streaming responses:

- **Reduce perceived latency**: Users see content as it's generated.
- **Improve UX**: Mimics ChatGPT's typing effect.
- **Keep users engaged**: Especially for long-form answers.
- **Scale for enterprise**: Azure SignalR Service handles thousands of concurrent connections.

## What You'll Build

- A SignalR Hub that calls Azure OpenAI with streaming enabled and forwards partial output to clients as it arrives.
- An Angular client that connects over WebSockets/SSE to the hub and renders partial content with a typing indicator.
- An optional Azure SignalR Service layer for scalable connection management (thousands to millions of long-lived connections).

References: SignalR hosting & scale; Azure SignalR Service concepts.

## Architecture

The hub calls Azure OpenAI with streaming enabled (`await foreach` over updates) and broadcasts partials to clients. Azure SignalR Service (optional) offloads connection scale and removes sticky-session complexity in multi-node deployments.

References: Streaming code pattern; scale/ARR affinity; Azure SignalR integration.

## Prerequisites

- Azure OpenAI resource with a deployed model (e.g., gpt-4o or gpt-4o-mini)
- .NET 8 API + ASP.NET Core SignalR backend
- Angular 16+ frontend (using @microsoft/signalr)

## Step-by-Step Implementation

### 1) Backend: ASP.NET Core + SignalR

Install packages:

```bash
dotnet add package Microsoft.AspNetCore.SignalR
dotnet add package Azure.AI.OpenAI --prerelease
dotnet add package Azure.Identity
dotnet add package Microsoft.Extensions.AI
dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease
# Optional (managed scale): Azure SignalR Service
dotnet add package Microsoft.Azure.SignalR
```

Using DefaultAzureCredential (Entra ID) avoids storing raw keys in code and is the recommended auth model for Azure services.
**Program.cs**

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSignalR();
// To offload connection management to Azure SignalR Service, uncomment:
// builder.Services.AddSignalR().AddAzureSignalR();

builder.Services.AddSingleton<AiStreamingService>();

var app = builder.Build();
app.MapHub<ChatHub>("/chat");
app.Run();
```

**AiStreamingService.cs** - streams content from Azure OpenAI:

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Chat;

public class AiStreamingService
{
    private readonly ChatClient _chatClient;

    public AiStreamingService(IConfiguration config)
    {
        var endpoint = new Uri(config["AZURE_OPENAI_ENDPOINT"]!);
        var deployment = config["AZURE_OPENAI_DEPLOYMENT"]!; // e.g., "gpt-4o-mini"

        var azureClient = new AzureOpenAIClient(endpoint, new DefaultAzureCredential());
        _chatClient = azureClient.GetChatClient(deployment);
    }

    public async IAsyncEnumerable<string> StreamReplyAsync(string userMessage)
    {
        var messages = new List<ChatMessage>
        {
            ChatMessage.CreateSystemMessage("You are a helpful assistant."),
            ChatMessage.CreateUserMessage(userMessage)
        };

        await foreach (var update in _chatClient.CompleteChatStreamingAsync(messages))
        {
            // Only text parts; ignore tool calls/annotations
            var chunk = string.Join("", update.ContentUpdate
                .Where(p => p.Kind == ChatMessageContentPartKind.Text)
                .Select(p => p.Text));

            if (!string.IsNullOrEmpty(chunk))
                yield return chunk;
        }
    }
}
```

The Azure OpenAI .NET client exposes streaming via CompleteChatStreamingAsync, and the Microsoft.Extensions.AI packages provide a unified IChatClient abstraction over the same pattern.

**ChatHub.cs** - pushes partials to the caller:

```csharp
using Microsoft.AspNetCore.SignalR;

public class ChatHub : Hub
{
    private readonly AiStreamingService _ai;
    public ChatHub(AiStreamingService ai) => _ai = ai;

    // Client calls: connection.invoke("AskAi", prompt)
    public async Task AskAi(string prompt)
    {
        var messageId = Guid.NewGuid().ToString("N");
        await Clients.Caller.SendAsync("typing", messageId, true);

        await foreach (var partial in _ai.StreamReplyAsync(prompt))
        {
            await Clients.Caller.SendAsync("partial", messageId, partial);
        }

        await Clients.Caller.SendAsync("typing", messageId, false);
        await Clients.Caller.SendAsync("completed", messageId);
    }
}
```

### 2) Frontend: Angular Client with @microsoft/signalr

Install the SignalR client:

```bash
npm i @microsoft/signalr
```

Create a SignalR service (Angular):

```typescript
// src/app/services/ai-stream.service.ts
import { Injectable } from '@angular/core';
import * as signalR from '@microsoft/signalr';
import { BehaviorSubject, Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class AiStreamService {
  private connection?: signalR.HubConnection;
  private typing$ = new BehaviorSubject<boolean>(false);
  private partial$ = new BehaviorSubject<string>('');
  private completed$ = new BehaviorSubject<boolean>(false);

  get typing(): Observable<boolean> { return this.typing$.asObservable(); }
  get partial(): Observable<string> { return this.partial$.asObservable(); }
  get completed(): Observable<boolean> { return this.completed$.asObservable(); }

  async start(): Promise<void> {
    this.connection = new signalR.HubConnectionBuilder()
      .withUrl('/chat') // same origin; use absolute URL if CORS
      .withAutomaticReconnect()
      .configureLogging(signalR.LogLevel.Information)
      .build();

    this.connection.on('typing', (_id: string, on: boolean) => this.typing$.next(on));
    this.connection.on('partial', (_id: string, text: string) => {
      // Append incremental content
      this.partial$.next((this.partial$.value || '') + text);
    });
    this.connection.on('completed', (_id: string) => this.completed$.next(true));

    await this.connection.start();
  }

  async ask(prompt: string): Promise<void> {
    // Reset state per request
    this.partial$.next('');
    this.completed$.next(false);
    await this.connection?.invoke('AskAi', prompt);
  }
}
```
Angular component:

```typescript
// src/app/components/ai-chat/ai-chat.component.ts
import { Component, OnInit } from '@angular/core';
import { AiStreamService } from '../../services/ai-stream.service';

@Component({
  selector: 'app-ai-chat',
  templateUrl: './ai-chat.component.html',
  styleUrls: ['./ai-chat.component.css']
})
export class AiChatComponent implements OnInit {
  prompt = '';
  output = '';
  typing = false;
  done = false;

  constructor(private ai: AiStreamService) {}

  async ngOnInit() {
    await this.ai.start();
    this.ai.typing.subscribe(on => this.typing = on);
    this.ai.partial.subscribe(text => this.output = text);
    this.ai.completed.subscribe(done => this.done = done);
  }

  async send() {
    this.output = '';
    this.done = false;
    await this.ai.ask(this.prompt);
  }
}
```

HTML template:

```html
<!-- src/app/components/ai-chat/ai-chat.component.html -->
<div class="chat">
  <div class="prompt">
    <input [(ngModel)]="prompt" placeholder="Ask me anything…" />
    <button (click)="send()">Send</button>
  </div>
  <div class="response">
    <pre>{{ output }}</pre>
    <div class="typing" *ngIf="typing">Assistant is typing…</div>
    <div class="done" *ngIf="done">✓ Completed</div>
  </div>
</div>
```

## Streaming Modes, Content Filters, and UX

Azure OpenAI streaming interacts with content filtering in two ways:

- **Default streaming**: The service buffers output into content chunks and runs content filters before each chunk is emitted; you still stream, but not necessarily token-by-token.
- **Asynchronous Filter (optional)**: The service returns token-level updates immediately and runs filters asynchronously. You get ultra-smooth streaming but must handle delayed moderation signals (e.g., redaction or halting the stream).

Best practices:

- Append partials in small batches client-side to avoid DOM thrash; finalize formatting on "completed". (A sketch of this appears at the end of this post.)
- Log full messages server-side only after completion to keep histories consistent (mirrors agent frameworks).

## Security & Compliance

- **Auth**: Prefer Microsoft Entra ID (DefaultAzureCredential) to avoid key sprawl; use RBAC and Managed Identities where possible.
- **Secrets**: Store Azure SignalR connection strings in Key Vault and rotate periodically; never hardcode.
- **CORS & cross-domain**: When hosting the frontend and hub on different origins, configure CORS and use absolute URLs in withUrl(...).

## Connection Management & Scaling Tips

- **Persistent connection load**: SignalR consumes TCP resources; separate heavy real-time workloads or use Azure SignalR to protect other apps.
- **Sticky sessions (self-hosted)**: Required in most multi-server scenarios unless WebSockets-only + SkipNegotiation applies; Azure SignalR removes this requirement.

## Learn More

- AI-Powered Group Chat sample (ASP.NET Core)
- Azure OpenAI .NET client (auth & streaming)
- SignalR JavaScript Client
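To make the batching best practice above concrete, here is a minimal sketch of coalescing partials with requestAnimationFrame before touching the DOM. It is framework-agnostic and assumes you pass in whatever render callback your UI uses (for the Angular service above, that would append to the component's output string); the buffering strategy shown is one reasonable choice, not the only one.

```typescript
// Buffer incoming partials and flush at most once per animation frame,
// so rapid token updates don't trigger excessive change detection / DOM writes.
let pending = '';
let flushScheduled = false;

function onPartial(text: string, render: (chunk: string) => void): void {
  pending += text;
  if (!flushScheduled) {
    flushScheduled = true;
    requestAnimationFrame(() => {
      render(pending); // hand the accumulated chunk to the UI in one update
      pending = '';
      flushScheduled = false;
    });
  }
}
```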
# Serverless MCP Agent with LangChain.js v1 — Burgers, Tools, and Traces 🍔

AI agents that can actually do stuff (not just chat) are the fun part nowadays, but wiring them cleanly into real APIs, keeping things observable, and shipping them to the cloud can get... messy. So we built a fresh end-to-end sample to show how to do it right with the brand new LangChain.js v1 and the Model Context Protocol (MCP).

In case you missed it, MCP is a recent open standard that makes it easy for LLM agents to consume tools and APIs, and LangChain.js, a great framework for building GenAI apps and agents, has first-class support for it. You can quickly get up to speed with the MCP for Beginners course and the AI Agents for Beginners course.

This new sample gives you:

- A LangChain.js v1 agent that streams its result, along with reasoning + tool steps
- An MCP server exposing real tools (burger menu + ordering) from a business API
- A web interface with authentication, session history, and a debug panel (for developers)
- A production-ready multi-service architecture
- Serverless deployment on Azure in one command (`azd up`)

Yes, it's a burger ordering system. Who doesn't like burgers? Grab your favorite beverage ☕, and let's dive in for a quick tour!

## TL;DR Key Takeaways

- New sample: full-stack Node.js AI agent using LangChain.js v1 + MCP tools
- Architecture: web app → agent API → MCP server → burger API
- Runs locally with a single `npm start`, deploys with `azd up`
- Uses streaming (NDJSON) with intermediate tool + LLM steps surfaced to the UI
- Ready to fork, extend, and plug into your own domain / tools

## What Will You Learn Here?

- What this sample is about and its high-level architecture
- What LangChain.js v1 brings to the table for agents
- How to deploy and run the sample
- How MCP tools can expose real-world APIs

Reference links for everything we use: GitHub repo, LangChain.js docs, Model Context Protocol, Azure Developer CLI, MCP Inspector.

## Use Case

You want an AI assistant that can take a natural language request like "Order two spicy burgers and show me my pending orders" and:

1. Understand intent (query menu, then place order)
2. Call the right MCP tools in sequence, calling in turn the necessary APIs
3. Stream progress (LLM tokens + tool steps)
4. Return a clean final answer

Swap "burgers" for "inventory", "bookings", "support tickets", or "IoT devices" and you've got a reusable pattern!

## Sample Overview

Before we play a bit with the sample, let's have a look at the main services implemented here:

| Service | Role | Tech |
| --- | --- | --- |
| Agent Web App (`agent-webapp`) | Chat UI + streaming + session history | Azure Static Web Apps, Lit web components |
| Agent API (`agent-api`) | LangChain.js v1 agent orchestration + auth + history | Azure Functions, Node.js |
| Burger MCP Server (`burger-mcp`) | Exposes burger API as tools over MCP (Streamable HTTP + SSE) | Azure Functions, Express, MCP SDK |
| Burger API (`burger-api`) | Business logic: burgers, toppings, orders lifecycle | Azure Functions, Cosmos DB |

A simplified view of how they interact: web app → agent API → MCP server → burger API. There are also other supporting components like databases and storage, not shown here for clarity. For this quickstart we'll only interact with the Agent Web App and the Burger MCP Server, as they are the main stars of the show here.

## LangChain.js v1 Agent Features

The recent release of LangChain.js v1 is a huge milestone for the JavaScript AI community! It marks a significant shift from experimental tools to a production-ready framework. The new version doubles down on what's needed to build robust AI applications, with a strong focus on agents. This includes first-class support for streaming not just the final output, but also intermediate steps like tool calls and agent reasoning, which makes building transparent and interactive agent experiences (like the one in this sample) much more straightforward. A rough sketch of this pattern follows below.
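As an illustration of that pattern (not code taken from the sample itself), a minimal v1-style agent with streamed intermediate steps might look roughly like this. The createAgent/stream API shape, the model id string, and the stand-in tool are assumptions based on the LangChain.js v1 docs; check the repo for the sample's actual implementation:

```typescript
import { createAgent } from "langchain";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// A stand-in tool; in the sample, tools come from the burger MCP server instead.
const getBurgers = tool(
  async () => JSON.stringify(["Classic", "Spicy", "Veggie"]),
  {
    name: "get_burgers",
    description: "Get a list of all burgers in the menu",
    schema: z.object({}),
  },
);

const agent = createAgent({
  model: "openai:gpt-4o-mini", // model id is an assumption; any supported chat model works
  tools: [getBurgers],
});

// Stream intermediate steps (tool calls, reasoning) as well as the final answer.
const stream = await agent.stream(
  { messages: [{ role: "user", content: "Recommend me an extra spicy burger" }] },
  { streamMode: "updates" },
);

for await (const update of stream) {
  console.log(update); // each update surfaces an agent or tool step as it happens
}
```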
## Quickstart

### Requirements

- GitHub account
- Azure account (free signup, or if you're a student, get free credits here)
- Azure Developer CLI

### Deploy and Run the Sample

We'll use GitHub Codespaces for a quick zero-install setup here, but if you prefer to run it locally, check the README. Click on the following link or open it in a new tab to launch a Codespace: Create Codespace. This will open a VS Code environment in your browser with the repo already cloned and all the tools installed and ready to go.

### Provision and Deploy to Azure

Open a terminal and run these commands:

```bash
# Install dependencies
npm install

# Login to Azure
azd auth login

# Provision and deploy all resources
azd up
```

Follow the prompts to select your Azure subscription and region. If you're unsure which one to pick, choose East US 2. The deployment will take about 15 minutes the first time to create all the necessary resources (Functions, Static Web Apps, Cosmos DB, AI models). If you're curious about what happens under the hood, take a look at the main.bicep file in the infra folder, which defines the infrastructure as code for this sample.

### Test the MCP Server

While the deployment is running, you can run the MCP server and API locally (even in Codespaces) to see how it works. Open another terminal and run:

```bash
npm start
```

This will start all services locally, including the Burger API and the MCP server, which will be available at http://localhost:3000/mcp. This may take a few seconds; wait until you see this message in the terminal:

```
🚀 All services ready 🚀
```

When these services are running without Azure resources provisioned, they will use in-memory data instead of Cosmos DB so you can experiment freely with the API and MCP server, though the agent won't be functional as it requires an LLM resource.

### MCP Tools

The MCP server exposes the following tools, which the agent can use to interact with the burger ordering system:

| Tool Name | Description |
| --- | --- |
| get_burgers | Get a list of all burgers in the menu |
| get_burger_by_id | Get a specific burger by its ID |
| get_toppings | Get a list of all toppings in the menu |
| get_topping_by_id | Get a specific topping by its ID |
| get_topping_categories | Get a list of all topping categories |
| get_orders | Get a list of all orders in the system |
| get_order_by_id | Get a specific order by its ID |
| place_order | Place a new order with burgers (requires userId, optional nickname) |
| delete_order_by_id | Cancel an order if it has not yet been started (status must be pending, requires userId) |

You can test these tools using the MCP Inspector. Open another terminal and run:

```bash
npx -y @modelcontextprotocol/inspector
```

Then open the URL printed in the terminal in your browser and connect using these settings:

- Transport: Streamable HTTP
- URL: http://localhost:3000/mcp
- Connection Type: Via Proxy (should be the default)

Click on Connect, then try listing the tools first, and run the get_burgers tool to get the menu info. You can also exercise the server programmatically; see the sketch below.
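For example, a small script using the MCP TypeScript SDK can connect to the local server and call a tool directly. This is a hedged sketch: the import paths and call shapes follow the @modelcontextprotocol/sdk client API as documented, and only the tool names from the table above come from the sample itself:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  // Connect to the locally running burger MCP server over Streamable HTTP.
  const transport = new StreamableHTTPClientTransport(new URL("http://localhost:3000/mcp"));
  const client = new Client({ name: "burger-demo-client", version: "1.0.0" });
  await client.connect(transport);

  // List the available tools, then fetch the burger menu.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  const menu = await client.callTool({ name: "get_burgers", arguments: {} });
  console.log(menu.content);

  await client.close();
}

main().catch(console.error);
```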
### Test the Agent Web App

After the deployment is completed, you can run the command `npm run env` to print the URLs of the deployed services. Open the Agent Web App URL in your browser (it should look like https://<your-web-app>.azurestaticapps.net). You'll first be greeted by an authentication page; you can sign in with either your GitHub or Microsoft account, and then you should be able to access the chat interface.

From there, you can start asking any question or use one of the suggested prompts; for example, try asking: "Recommend me an extra spicy burger". As the agent processes your request, you'll see the response streaming in real time, along with the intermediate steps and tool calls. Once the response is complete, you can also unfold the debug panel to see the full reasoning chain and the tools that were invoked.

Tip: Our agent service also sends detailed tracing data using OpenTelemetry. You can explore these traces either in Azure Monitor for the deployed service, or locally using an OpenTelemetry collector. We'll cover this in more detail in a future post.

## Wrap It Up

Congratulations, you just finished spinning up a full-stack serverless AI agent using LangChain.js v1, MCP tools, and Azure's serverless platform. Now it's your turn to dive into the code and extend it for your use cases! 😎 And don't forget to run `azd down` once you're done to avoid any unwanted costs.

## Going Further

This was just a quick introduction to this sample, and you can expect more in-depth posts and tutorials soon. Since we're in the era of AI agents, we've also made sure that this sample can be explored and extended easily with code agents like GitHub Copilot. We even built a custom chat mode to help you discover and understand the codebase faster! Check out the Copilot setup guide in the repo to get started.

If you like this sample, don't forget to star the repo ⭐️! You can also join us in the Azure AI community Discord to chat and ask any questions. Happy coding and burger ordering! 🍔
# AI Career Navigator — Empowering Job Seekers with Azure OpenAI

AI Career Navigator is more than just a project — it's a mission to make career growth accessible, intelligent, and human. Powered by Azure OpenAI, it transforms uncertainty into direction and effort into achievement.

Author: Aryan Jaiswal — Gold Microsoft Learn Student Ambassador
Reviewer: Julia Muiruri (Microsoft)
# Intl.DateTimeFormat missing Welsh (cy / cy-GB) support

Hi! I'm a developer at a company currently working with the Welsh government, and I have come across an issue with Microsoft Edge. When using the Intl.DateTimeFormat JavaScript API in Microsoft Edge, there is no support for cy / cy-GB:

```javascript
Intl.DateTimeFormat.supportedLocalesOf("cy-GB"); // Outputs []
Intl.DateTimeFormat.supportedLocalesOf("cy");    // Outputs []
```

If a developer tries to do any date formatting with the locale set to cy or cy-GB, only English will be returned. Out of curiosity I tried this with and without the Windows language being set to Welsh / Cymraeg. The result was the same (no support). Mozilla Firefox and Apple Safari have full support. Interestingly, Google Chrome does not (perhaps a shared issue / limitation of Chromium).

Unfortunately this causes a significant challenge. Any website / web app developed for the Welsh government or related bodies is legally required to be available in Welsh. Due to the popularity of Microsoft Edge, particularly with governments and institutions, developers cannot reliably use the Intl-based APIs. The only known workarounds are:

- Use a formatting library instead of Intl
  - Avoids the ECMAScript APIs designed to solve these problems for developers
  - Additional client-side and developer maintenance costs
- Include very bulky polyfills
  - With a popular polyfill, locale and timezone information exceeds 2MB of data for the client to load per website / web app

I'm unsure if this is a bug or a locale which is yet to be supported. Adding support for Welsh would be a huge step forward and greatly appreciated by developers working with government bodies. Even better would be adding support for the same languages Windows supports!
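For anyone hitting the same limitation, a runtime feature check at least makes the gap detectable, so an app can fall back to a library or hand-rolled Welsh formatting only where needed. A minimal sketch (the fallback covers just one date style and is illustrative, not a complete localization):

```typescript
// Detect whether the runtime can actually format dates in Welsh.
const welshSupported = Intl.DateTimeFormat.supportedLocalesOf(["cy", "cy-GB"]).length > 0;

// Illustrative fallback: Welsh month names for a simple "day month year" format.
const welshMonths = [
  "Ionawr", "Chwefror", "Mawrth", "Ebrill", "Mai", "Mehefin",
  "Gorffennaf", "Awst", "Medi", "Hydref", "Tachwedd", "Rhagfyr",
];

function formatWelshDate(date: Date): string {
  if (welshSupported) {
    return new Intl.DateTimeFormat("cy-GB", { dateStyle: "long" }).format(date);
  }
  // Hand-rolled fallback for engines (like Edge/Chromium today) without cy locale data.
  return `${date.getDate()} ${welshMonths[date.getMonth()]} ${date.getFullYear()}`;
}

console.log(formatWelshDate(new Date(2024, 2, 1))); // "1 Mawrth 2024" either way
```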
# Web Development for Beginners: Learners Matchmaking!

Hey Everyone,

Thanks for joining the Web Development for Beginners course! Learning and building together is the goal of this course. Post the information below and I will make sure to connect you with similar students so you can build together during these 8 weeks!

- Name
- Experience Level
- Location / Timezone
- Where can we find you? LinkedIn / Twitter / Telegram / etc.

Example:

- Korey Stegared-Pace
- I love JavaScript
- Europe/Sweden
- Twitter: koreyspace / LinkedIn: Korey Stegared-Pace
# What is GitHub Codespaces and how can Students access it for free?

GitHub Codespaces is a new service, free for anyone, for developing with powerful environments using Visual Studio Code. In this post, we'll cover how you can make use of this new technology and take advantage of its most powerful features.
# JS AI Build-a-thon: Wrapping Up an Epic June 2025!

After weeks of building, testing, and learning, we're officially wrapping up the first-ever JS AI Build-a-thon 🎉. This wasn't your average coding challenge. This was a hands-on journey where JavaScript and TypeScript developers dove deep into real-world AI concepts — from local GenAI prototyping to building intelligent agents and deploying production-ready apps. Whether you joined from the start or hopped on midway, you built something that matters — and that's worth celebrating.

## Replay the Journey

No worries if you joined late or want to revisit any part of the journey. The JS AI Build-a-thon was designed to let you learn at your own pace, so whether you're starting now or polishing up your final project, here's your complete quest map:

- Build-a-thon setup guide: https://aka.ms/JSAIBuildathonSetup
- Quest 1: 🔧 Build your first GenAI app locally with GitHub Models 👉🏽 https://aka.ms/JSAIBuildathonQuest1
- Quest 2: ☁️ Move your AI prototype to Azure AI Foundry 👉🏽 https://aka.ms/JSAIBuildathonQuest
- Quest 3: 🎨 Add a chat UI using Vite + Lit 👉🏽 https://aka.ms/JSAIBuildathonQuest3
- Quest 4: 📄 Enhance your app with RAG (Chat with Your Data) 👉🏽 https://aka.ms/JSAIBuildathonQuest4
- Quest 5: 🧠 Add memory and context to your AI app 👉🏽 https://aka.ms/JSAIBuildathonQuest5
- Quest 6: ⚙️ Build your first AI Agent using AI Foundry 👉🏽 https://aka.ms/JSAIBuildathonQuest6
- Quest 7: 🧩 Equip your agent with tools from an MCP server 👉🏽 https://aka.ms/JSAIBuildathonQuest7
- Quest 8: 💬 Ground your agent with real-time search using Bing 👉🏽 https://aka.ms/JSAIBuildathonQuest8
- Quest 9: 🚀 Build a real-world AI project with full-stack templates 👉🏽 https://aka.ms/JSAIBuildathonQuest9

Link to our space in the AI Discord Community: https://aka.ms/JSAIonDiscord

## Project Submission Guidelines

📌 Quest 9 is where it all comes together. Participants chose a problem, picked a template, customized it, submitted it, and rallied their community for support!

## 🏅 Claim Your Badge!

Whether you completed select quests or went all the way, we celebrate your learning. If you participated in the June 2025 JS AI Build-a-thon, make sure to submit the participation form to receive your participation badge recognizing your commitment to upskilling in AI with JavaScript / TypeScript.

## What's Next?

We're not done. In fact, we're just getting started. We're already cooking up JS AI Build-a-thon v2, which will introduce:

- Running everything locally with Foundry Local
- Real-world RAG with vector databases
- Advanced agent patterns with remote MCPs
- And much more, based on your feedback

Want to shape what comes next? Drop your ideas in the participation form and in our Discord. In the meantime, add these resources to your JavaScript + AI dev pack:

- 🔗 Microsoft for JavaScript developers
- 📚 Generative AI for Beginners with JavaScript

## Wrap-Up

This build-a-thon showed what's possible when developers are empowered to learn by doing. You didn't just follow tutorials — you shipped features, connected services, and created working AI experiences. We can't wait to see what you build next.

👉 Bookmark the repo
👉 Join the community on the Azure AI Foundry Discord Server!
👉 Stay building

Until next time — keep coding, keep shipping!
# Quest 5 - I want to add conversation memory to my app

In this quest, you'll explore how to build GenAI apps using a modern JavaScript AI framework, LangChain.js. LangChain.js helps you orchestrate prompts, manage memory, and build multi-step AI workflows, all while staying in your favorite language. Using LangChain.js you will make your GenAI chat app feel truly personal by teaching it to remember. You'll upgrade your AI prototype with conversation memory, allowing it to recall previous interactions and making the conversation flow more naturally and human-like.

👉 Want to catch up on the full program or grab more quests? https://aka.ms/JSAIBuildathon
💬 Got questions or want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel.

## 🔧 What You'll Build

A smarter, context-aware chat backend that:

- Remembers user conversations across multiple exchanges (e.g., knowing "Terry" after you introduced yourself as Terry)
- Maintains session-specific memory so each chat thread feels consistent and coherent
- Uses LangChain.js abstractions to streamline state management

## 🚀 What You'll Need

- ✅ A GitHub account
- ✅ Visual Studio Code
- ✅ Node.js
- ✅ A working chat app from previous quests (UI + Azure-based chat endpoint)

## 🛠️ Concepts You'll Explore

**Integrating LangChain.js**: Learn how LangChain.js simplifies building AI-powered web applications by providing a standard interface to connect your backend with Azure's language models. You'll see how using this framework decouples your code and unlocks advanced features.

**Adding conversation memory**: Understand why memory matters in chatbots. Explore how conversation memory lets your app remember previous user messages within each session, enabling more context-aware and coherent conversations.

**Session-based message history**: Implement session-specific chat histories using LangChain's memory modules (ChatMessageHistory and BufferMemory). Each user or session gets its own history, so previous questions and answers inform future responses without manual log management. (A small sketch follows the resource list below.)

**Seamless state management**: Experience how LangChain handles chat logs and memory behind the scenes, freeing you from manually stitching together chat history or juggling context with every prompt.

## 📖 Bonus Resources to Go Deeper

- Exploring Generative AI in App Development: LangChain.js and Azure — a video introduction to LangChain.js and how you can build a project with LangChain.js and Azure.
- 🦜️🔗 LangChain — the official LangChain.js documentation.
- GitHub - Azure-Samples/serverless-chat-langchainjs — a sample that helps you build your own serverless AI chat with Retrieval-Augmented Generation using LangChain.js, TypeScript, and Azure.
- GitHub - Azure-Samples/langchainjs-quickstart-demo — a sample that helps you build a generative AI application using LangChain.js, from local to Azure.
- Microsoft | 🦜️🔗 LangChain — official LangChain documentation on all functionality related to Microsoft and Microsoft Azure.
- Quest 4 - I want to connect my AI prototype to external data using RAG | Microsoft Community Hub — the previous quest's instructions.
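To make the session-based history idea concrete, here is a minimal sketch using the ChatMessageHistory and BufferMemory modules named above. The import paths follow the classic LangChain.js package layout and may differ in your installed version, and the in-memory Map is purely illustrative (a real app would persist histories somewhere durable):

```typescript
import { BufferMemory } from "langchain/memory";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

// One message history per session, so each chat thread stays coherent.
const histories = new Map<string, ChatMessageHistory>();

function getMemoryForSession(sessionId: string): BufferMemory {
  let history = histories.get(sessionId);
  if (!history) {
    history = new ChatMessageHistory();
    histories.set(sessionId, history);
  }
  // BufferMemory wraps the raw history so chains receive prior turns automatically.
  return new BufferMemory({
    chatHistory: history,
    returnMessages: true,
    memoryKey: "chat_history",
  });
}

// Usage: pass getMemoryForSession(sessionId) as the `memory` option of a chain,
// and previous exchanges (e.g., "my name is Terry") flow into later prompts.
```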