JS AI Build-a-thon Setup in 5 Easy Steps
TL;DR: You're 5 Steps Away from an AI Adventure. Set up your project repo, follow the quests, build cool stuff, and level up. Everything's automated, community-backed, and designed to help you actually learn AI using the skills you already have. Let's build the future. One quest at a time. Join the Build-a-thon | Chat on Discord

Use AI for Free with GitHub Models and TypeScript!
Learn how to use AI for free with GitHub Models! Test models like GPT-4o without paying for APIs or setting up infrastructure. This step-by-step guide shows how to integrate GitHub Models with TypeScript in the Microblog AI Remix project. Start exploring AI for free today!
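To give a flavor of what that looks like, here is a minimal sketch (not taken from the Microblog AI Remix project) that calls a GitHub Models chat model from TypeScript using the official openai package. The endpoint URL and model id are assumptions based on the GitHub Models documentation and may need adjusting; GITHUB_TOKEN must be a GitHub personal access token.

import OpenAI from "openai";

// Assumptions: GitHub Models exposes an OpenAI-compatible endpoint and accepts a
// GitHub personal access token in place of an OpenAI API key. Check the GitHub
// Models docs for the current endpoint and the exact model identifiers.
const client = new OpenAI({
  baseURL: "https://models.inference.ai.azure.com", // assumed GitHub Models endpoint
  apiKey: process.env.GITHUB_TOKEN ?? "",           // a GitHub PAT, not an OpenAI key
});

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // pick any model listed in the GitHub Models catalog
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Suggest a title for a microblog post about TypeScript." },
    ],
  });
  console.log(response.choices[0]?.message?.content);
}

main().catch(console.error);

Run it with npx tsx after npm install openai; moving to a paid Azure OpenAI deployment later typically only requires changing the baseURL, key, and model name.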
Real-Time AI Streaming with Azure OpenAI and SignalR

TL;DR: We'll build a real-time AI app where Azure OpenAI streams responses and SignalR broadcasts them live to an Angular client. Users see answers appear incrementally, just like ChatGPT, while Azure SignalR Service handles scale. You'll learn the architecture, streaming code, Angular integration, and optional enhancements like typing indicators and multi-agent scenarios.

Why This Matters
Modern users expect instant feedback. Waiting for a full AI response feels slow and breaks engagement. Streaming responses:
- Reduces perceived latency: users see content as it's generated.
- Improves UX: mimics ChatGPT's typing effect.
- Keeps users engaged: especially for long-form answers.
- Scales for enterprise: Azure SignalR Service handles thousands of concurrent connections.

What you'll build
- A SignalR hub that calls Azure OpenAI with streaming enabled and forwards partial output to clients as it arrives.
- An Angular client that connects over WebSockets/SSE to the hub and renders partial content with a typing indicator.
- An optional Azure SignalR Service layer for scalable connection management (thousands to millions of long-lived connections).
References: SignalR hosting & scale; Azure SignalR Service concepts.

Architecture
The hub calls Azure OpenAI with streaming enabled (await foreach over the updates) and broadcasts partials to clients. Azure SignalR Service (optional) offloads connection scale and removes sticky-session complexity in multi-node deployments.
References: Streaming code pattern; scale/ARR affinity; Azure SignalR integration.

Prerequisites
- Azure OpenAI resource with a deployed model (e.g., gpt-4o or gpt-4o-mini)
- .NET 8 API + ASP.NET Core SignalR backend
- Angular 16+ frontend (using @microsoft/signalr)

Step-by-Step Implementation

1) Backend: ASP.NET Core + SignalR

Install packages:

dotnet add package Microsoft.AspNetCore.SignalR
dotnet add package Azure.AI.OpenAI --prerelease
dotnet add package Azure.Identity
dotnet add package Microsoft.Extensions.AI
dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease
# Optional (managed scale): Azure SignalR Service
dotnet add package Microsoft.Azure.SignalR

Using DefaultAzureCredential (Entra ID) avoids storing raw keys in code and is the recommended auth model for Azure services.
Program.cs

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSignalR();
// To offload connection management to Azure SignalR Service, uncomment:
// builder.Services.AddSignalR().AddAzureSignalR();
builder.Services.AddSingleton<AiStreamingService>();

var app = builder.Build();
app.MapHub<ChatHub>("/chat");
app.Run();

AiStreamingService.cs - streams content from Azure OpenAI

using Microsoft.Extensions.AI;
using Azure.AI.OpenAI;
using Azure.Identity;

public class AiStreamingService
{
    private readonly IChatClient _chatClient;

    public AiStreamingService(IConfiguration config)
    {
        var endpoint = new Uri(config["AZURE_OPENAI_ENDPOINT"]!);
        var deployment = config["AZURE_OPENAI_DEPLOYMENT"]!; // e.g., "gpt-4o-mini"
        var azureClient = new AzureOpenAIClient(endpoint, new DefaultAzureCredential());
        _chatClient = azureClient.GetChatClient(deployment).AsIChatClient();
    }

    public async IAsyncEnumerable<string> StreamReplyAsync(string userMessage)
    {
        var messages = new List<ChatMessage>
        {
            ChatMessage.CreateSystemMessage("You are a helpful assistant."),
            ChatMessage.CreateUserMessage(userMessage)
        };

        await foreach (var update in _chatClient.CompleteChatStreamingAsync(messages))
        {
            // Only text parts; ignore tool calls/annotations
            var chunk = string.Join("", update.Content
                .Where(p => p.Kind == ChatMessageContentPartKind.Text)
                .Select(p => ((TextContent)p).Text));
            if (!string.IsNullOrEmpty(chunk)) yield return chunk;
        }
    }
}

Modern .NET AI extensions (Microsoft.Extensions.AI) expose a unified streaming pattern via CompleteChatStreamingAsync.

ChatHub.cs - pushes partials to the caller

using Microsoft.AspNetCore.SignalR;

public class ChatHub : Hub
{
    private readonly AiStreamingService _ai;
    public ChatHub(AiStreamingService ai) => _ai = ai;

    // Client calls: connection.invoke("AskAi", prompt)
    public async Task AskAi(string prompt)
    {
        var messageId = Guid.NewGuid().ToString("N");
        await Clients.Caller.SendAsync("typing", messageId, true);

        await foreach (var partial in _ai.StreamReplyAsync(prompt))
        {
            await Clients.Caller.SendAsync("partial", messageId, partial);
        }

        await Clients.Caller.SendAsync("typing", messageId, false);
        await Clients.Caller.SendAsync("completed", messageId);
    }
}

2) Frontend: Angular client with @microsoft/signalr

Install the SignalR client:

npm i @microsoft/signalr

Create a SignalR service (Angular):

// src/app/services/ai-stream.service.ts
import { Injectable } from '@angular/core';
import * as signalR from '@microsoft/signalr';
import { BehaviorSubject, Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class AiStreamService {
  private connection?: signalR.HubConnection;
  private typing$ = new BehaviorSubject<boolean>(false);
  private partial$ = new BehaviorSubject<string>('');
  private completed$ = new BehaviorSubject<boolean>(false);

  get typing(): Observable<boolean> { return this.typing$.asObservable(); }
  get partial(): Observable<string> { return this.partial$.asObservable(); }
  get completed(): Observable<boolean> { return this.completed$.asObservable(); }

  async start(): Promise<void> {
    this.connection = new signalR.HubConnectionBuilder()
      .withUrl('/chat') // same origin; use absolute URL if CORS
      .withAutomaticReconnect()
      .configureLogging(signalR.LogLevel.Information)
      .build();

    this.connection.on('typing', (_id: string, on: boolean) => this.typing$.next(on));
    this.connection.on('partial', (_id: string, text: string) => {
      // Append incremental content
      this.partial$.next((this.partial$.value || '') + text);
    });
    this.connection.on('completed', (_id: string) => this.completed$.next(true));

    await this.connection.start();
  }

  async ask(prompt: string): Promise<void> {
    // Reset state per request
    this.partial$.next('');
    this.completed$.next(false);
    await this.connection?.invoke('AskAi', prompt);
  }
}

Angular component:

// src/app/components/ai-chat/ai-chat.component.ts
import { Component, OnInit } from '@angular/core';
import { AiStreamService } from '../../services/ai-stream.service';

@Component({
  selector: 'app-ai-chat',
  templateUrl: './ai-chat.component.html',
  styleUrls: ['./ai-chat.component.css']
})
export class AiChatComponent implements OnInit {
  prompt = '';
  output = '';
  typing = false;
  done = false;

  constructor(private ai: AiStreamService) {}

  async ngOnInit() {
    await this.ai.start();
    this.ai.typing.subscribe(on => this.typing = on);
    this.ai.partial.subscribe(text => this.output = text);
    this.ai.completed.subscribe(done => this.done = done);
  }

  async send() {
    this.output = '';
    this.done = false;
    await this.ai.ask(this.prompt);
  }
}

HTML template:

<!-- src/app/components/ai-chat/ai-chat.component.html -->
<div class="chat">
  <div class="prompt">
    <input [(ngModel)]="prompt" placeholder="Ask me anything…" />
    <button (click)="send()">Send</button>
  </div>
  <div class="response">
    <pre>{{ output }}</pre>
    <div class="typing" *ngIf="typing">Assistant is typing…</div>
    <div class="done" *ngIf="done">✓ Completed</div>
  </div>
</div>

Streaming modes, content filters, and UX
Azure OpenAI streaming interacts with content filtering in two ways:
- Default streaming: the service buffers output into content chunks and runs content filters before each chunk is emitted; you still stream, but not necessarily token-by-token.
- Asynchronous Filter (optional): the service returns token-level updates immediately and runs filters asynchronously. You get ultra-smooth streaming but must handle delayed moderation signals (e.g., redaction or halting the stream).

Best practices
- Append partials in small batches client-side to avoid DOM thrash; finalize formatting on "completed". (A sketch of this appears at the end of this article.)
- Log full messages server-side only after completion to keep histories consistent (mirrors agent frameworks).

Security & compliance
- Auth: prefer Microsoft Entra ID (DefaultAzureCredential) to avoid key sprawl; use RBAC and managed identities where possible.
- Secrets: store Azure SignalR connection strings in Key Vault and rotate periodically; never hardcode.
- CORS & cross-domain: when hosting the frontend and hub on different origins, configure CORS and use absolute URLs in withUrl(...).

Connection management & scaling tips
- Persistent connection load: SignalR consumes TCP resources; separate heavy real-time workloads or use Azure SignalR Service to protect other apps.
- Sticky sessions (self-hosted): required in most multi-server scenarios unless WebSockets-only + SkipNegotiation applies; Azure SignalR Service removes this requirement.

Learn more
- AI-Powered Group Chat sample (ASP.NET Core)
- Azure OpenAI .NET client (auth & streaming)
- SignalR JavaScript Client
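As promised in the best-practices list above, here is a minimal client-side sketch (not from the original article) of batching partials before touching the DOM. It assumes the same "partial" event shape used by the hub and flushes at most once per animation frame.

// Sketch only: buffer streamed chunks and flush them at most once per animation
// frame, so the UI re-renders roughly 60 times per second instead of once per token.
export function createPartialBuffer(sink: (combined: string) => void) {
  let pending = '';
  let scheduled = false;
  return (chunk: string) => {
    pending += chunk;
    if (scheduled) return;
    scheduled = true;
    requestAnimationFrame(() => {
      sink(pending); // hand one combined chunk to the renderer
      pending = '';
      scheduled = false;
    });
  };
}

// Possible usage inside AiStreamService.start(), replacing the direct append:
//   const push = createPartialBuffer(text => this.partial$.next(this.partial$.value + text));
//   this.connection.on('partial', (_id: string, text: string) => push(text));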
AI Repo of the Week: Generative AI for Beginners with JavaScript

Introduction
Ready to explore the fascinating world of Generative AI using your JavaScript skills? This week's featured repository, Generative AI for Beginners with JavaScript, is your launchpad into the future of application development. Whether you're just starting out or looking to expand your AI toolbox, this open-source GitHub resource offers a rich, hands-on journey. It includes interactive lessons, quizzes, and even time-travel storytelling featuring historical legends like Leonardo da Vinci and Ada Lovelace. Each chapter combines narrative-driven learning with practical exercises, helping you understand foundational AI concepts and apply them directly in code. It's immersive, educational, and genuinely fun.

What You'll Learn
1. Foundations of Generative AI and LLMs - Start with the basics: what is generative AI? How do large language models (LLMs) work? This chapter lays the groundwork for how these technologies are transforming JavaScript development.
2. Build Your First AI-Powered App - Walk through setting up your environment and creating your first AI app. Learn how to configure prompts and unlock the potential of AI in your own projects.
3. Prompt Engineering Essentials - Get hands-on with prompt engineering techniques that shape how AI models respond. Explore strategies for crafting prompts that are clear, targeted, and effective.
4. Structured Output with JSON - Learn how to guide the model to return structured data formats like JSON - critical for integrating AI into real-world applications.
5. Retrieval-Augmented Generation (RAG) - Go beyond static prompts by combining LLMs with external data sources. Discover how RAG lets your app pull in live, contextual information for more intelligent results.
6. Function Calling and Tool Use - Give your LLM new powers! Learn how to connect your own functions and tools to your app, enabling more dynamic and actionable AI interactions.
7. Model Context Protocol (MCP) - Dive into MCP, a new standard for organizing prompts, tools, and resources. Learn how it simplifies AI app development and fosters consistency across projects.
8. Enhancing MCP Clients with LLMs - Build on what you've learned by integrating LLMs directly into your MCP clients. See how to make them smarter, faster, and more helpful.
More chapters coming soon - watch the repo for updates!

Companion App: Interact with History
Experience the power of generative AI in action through the companion web app, where you can chat with historical figures and witness how JavaScript brings AI to life in real time.

Conclusion
Generative AI for Beginners with JavaScript is more than a course - it's an adventure into how storytelling, coding, and AI can come together to create something fun and educational. Whether you're here to upskill, experiment, or build the next big thing, this repository is your all-in-one resource to get started with confidence. Jump into the future of development - check out the repo and start building with AI today!
Add speech input & output to your app with the free browser APIs

One of the amazing benefits of modern machine learning is that computers can reliably turn text into speech, or transcribe speech into text, across multiple languages and accents. We can then use those capabilities to make our web apps more accessible for anyone who has a situational, temporary, or chronic issue that makes typing difficult. That describes so many people - for example, a parent holding a squirmy toddler in their hands, an athlete with a broken arm, or an individual with Parkinson's disease.

There are two approaches we can use to add speech capabilities to our apps:
- Use the built-in browser APIs: the SpeechRecognition API and SpeechSynthesis API.
- Use a cloud-based service, like the Azure Speech API.

Which one to use? The great thing about the browser APIs is that they're free and available in most modern browsers and operating systems. The drawback of the APIs is that they're often not as powerful and flexible as cloud-based services, and the speech output often sounds more robotic. There are also a few niche browser/OS combos where the built-in APIs don't work. That's why we decided to add both options to our most popular RAG chat solution, to give developers the option to decide for themselves. However, in this post, I'm going to show you how to add speech capabilities using the free built-in browser APIs, since free APIs are often easier to get started with and it's important to do what we can to improve the accessibility of our apps.

The GIF below shows the end result, a chat app with both speech input and output buttons. All of the code described in this post is part of openai-chat-vision-quickstart, so you can grab the full code yourself after seeing how it works.

Speech input with SpeechRecognition API

To make it easier to add a speech input button to any app, I'm wrapping the functionality inside a custom HTML element, SpeechInputButton. First I construct the speech input button element with an instance of the SpeechRecognition API, making sure to use the browser's preferred language if any are set:

class SpeechInputButton extends HTMLElement {
  constructor() {
    super();
    this.isRecording = false;
    const SpeechRecognition =
      window.SpeechRecognition || window.webkitSpeechRecognition;
    if (!SpeechRecognition) {
      this.dispatchEvent(
        new CustomEvent("speecherror", {
          detail: { error: "SpeechRecognition not supported" },
        })
      );
      return;
    }
    this.speechRecognition = new SpeechRecognition();
    this.speechRecognition.lang = navigator.language || navigator.userLanguage;
    this.speechRecognition.interimResults = false;
    this.speechRecognition.continuous = true;
    this.speechRecognition.maxAlternatives = 1;
  }

Then I define the connectedCallback() method that will be called whenever this custom element has been added to the DOM. When that happens, I define the inner HTML to render a button and attach event listeners for both mouse and keyboard events. Since we want this to be fully accessible, keyboard support is important.
connectedCallback() {
  this.innerHTML = `
    <button class="btn btn-outline-secondary" type="button" title="Start recording (Shift + Space)">
      <i class="bi bi-mic"></i>
    </button>`;
  this.recordButton = this.querySelector('button');
  this.recordButton.addEventListener('click', () => this.toggleRecording());
  document.addEventListener('keydown', this.handleKeydown.bind(this));
}

handleKeydown(event) {
  if (event.key === 'Escape') {
    this.abortRecording();
  } else if (event.key === ' ' && event.shiftKey) { // Shift + Space
    event.preventDefault();
    this.toggleRecording();
  }
}

toggleRecording() {
  if (this.isRecording) {
    this.stopRecording();
  } else {
    this.startRecording();
  }
}

The majority of the code is in the startRecording function. It sets up a listener for the "result" event from the SpeechRecognition instance, which contains the transcribed text. It also sets up a listener for the "end" event, which is triggered either automatically after a few seconds of silence (in some browsers) or when the user ends the recording by clicking the button. Finally, it sets up a listener for any "error" events. Once all listeners are ready, it calls start() on the SpeechRecognition instance and styles the button to be in an active state.

startRecording() {
  if (this.speechRecognition == null) {
    this.dispatchEvent(
      new CustomEvent("speech-input-error", {
        detail: { error: "SpeechRecognition not supported" },
      })
    );
  }

  this.speechRecognition.onresult = (event) => {
    let input = "";
    for (const result of event.results) {
      input += result[0].transcript;
    }
    this.dispatchEvent(
      new CustomEvent("speech-input-result", {
        detail: { transcript: input },
      })
    );
  };

  this.speechRecognition.onend = () => {
    this.isRecording = false;
    this.renderButtonOff();
    this.dispatchEvent(new Event("speech-input-end"));
  };

  this.speechRecognition.onerror = (event) => {
    if (this.speechRecognition) {
      this.speechRecognition.stop();
      if (event.error == "no-speech") {
        this.dispatchEvent(
          new CustomEvent("speech-input-error", {
            detail: { error: "No speech was detected. Please check your system audio settings and try again." },
          }));
      } else if (event.error == "language-not-supported") {
        this.dispatchEvent(
          new CustomEvent("speech-input-error", {
            detail: { error: "The selected language is not supported. Please try a different language." },
          }));
      } else if (event.error != "aborted") {
        this.dispatchEvent(
          new CustomEvent("speech-input-error", {
            detail: { error: "An error occurred while recording. Please try again: " + event.error },
          }));
      }
    }
  };

  this.speechRecognition.start();
  this.isRecording = true;
  this.renderButtonOn();
}

If the user stops the recording using the keyboard shortcut or button click, we call stop() on the SpeechRecognition instance. At that point, anything the user had said will be transcribed and become available via the "result" event.

stopRecording() {
  if (this.speechRecognition) {
    this.speechRecognition.stop();
  }
}
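The code above also calls renderButtonOn() and renderButtonOff(), which aren't shown in this post (the real helpers live in speech-input.js in the repo). A minimal sketch of what they might look like, assuming the Bootstrap icon markup from connectedCallback() and a hypothetical CSS class for the pulsing animation:

// Sketch only - the actual helpers are in speech-input.js.
renderButtonOn() {
  this.recordButton.classList.add('speech-input-active'); // hypothetical class for the pulsing style
  this.recordButton.querySelector('i').classList.replace('bi-mic', 'bi-mic-fill');
}

renderButtonOff() {
  this.recordButton.classList.remove('speech-input-active');
  this.recordButton.querySelector('i').classList.replace('bi-mic-fill', 'bi-mic');
}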
Alternatively, if the user presses the Escape keyboard shortcut, we instead call abort() on the SpeechRecognition instance, which stops the recording and does not send any previously untranscribed speech over.

abortRecording() {
  if (this.speechRecognition) {
    this.speechRecognition.abort();
  }
}

Once the custom HTML element is fully defined, we register it with the desired tag name, speech-input-button:

customElements.define("speech-input-button", SpeechInputButton);

To use the custom speech-input-button element in a chat application, we add it to the HTML for the chat form:

<speech-input-button></speech-input-button>
<input id="message" name="message" type="text" rows="1"></input>

Then we attach an event listener for the custom events dispatched by the element, and we update the input text field with the transcribed text:

const speechInputButton = document.querySelector("speech-input-button");
speechInputButton.addEventListener("speech-input-result", (event) => {
  messageInput.value += " " + event.detail.transcript.trim();
  messageInput.focus();
});

You can see the full custom HTML element code in speech-input.js and the usage in index.html. There's also a fun pulsing animation for the button's active state in styles.css.

Speech output with SpeechSynthesis API

Once again, to make it easier to add a speech output button to any app, I'm wrapping the functionality inside a custom HTML element, SpeechOutputButton. When defining the custom element, we specify an observed attribute named "text", to store whatever text should be turned into speech when the button is clicked.

class SpeechOutputButton extends HTMLElement {
  static observedAttributes = ["text"];

In the constructor, we check to make sure the SpeechSynthesis API is supported, and remember the browser's preferred language for later use.

constructor() {
  super();
  this.isPlaying = false;
  const SpeechSynthesis = window.speechSynthesis || window.webkitSpeechSynthesis;
  if (!SpeechSynthesis) {
    this.dispatchEvent(
      new CustomEvent("speech-output-error", {
        detail: { error: "SpeechSynthesis not supported" },
      }));
    return;
  }
  this.synth = SpeechSynthesis;
  this.lngCode = navigator.language || navigator.userLanguage;
}

When the custom element is added to the DOM, I define the inner HTML to render a button and attach mouse and keyboard event listeners:

connectedCallback() {
  this.innerHTML = `
    <button class="btn btn-outline-secondary" type="button">
      <i class="bi bi-volume-up"></i>
    </button>`;
  this.speechButton = this.querySelector("button");
  this.speechButton.addEventListener("click", () =>
    this.toggleSpeechOutput()
  );
  document.addEventListener('keydown', this.handleKeydown.bind(this));
}

The majority of the code is in the toggleSpeechOutput function. If the speech is not yet playing, it creates a new SpeechSynthesisUtterance instance, passes it the "text" attribute, and sets the language and audio properties. It attempts to use a voice that's optimal for the desired language, but falls back to "en-US" if none is found. It attaches event listeners for the start and end events, which will change the button's style to look either active or inactive. Finally, it tells the SpeechSynthesis API to speak the utterance.

toggleSpeechOutput() {
  if (!this.isConnected) {
    return;
  }
  const text = this.getAttribute("text");
  if (this.synth != null) {
    if (this.isPlaying || text === "") {
      this.stopSpeech();
      return;
    }

    // Create a new utterance and play it.
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.lang = this.lngCode;
    utterance.volume = 1;
    utterance.rate = 1;
    utterance.pitch = 1;

    let voice = this.synth
      .getVoices()
      .filter((voice) => voice.lang === this.lngCode)[0];
    if (!voice) {
      voice = this.synth
        .getVoices()
        .filter((voice) => voice.lang === "en-US")[0];
    }
    utterance.voice = voice;

    if (!utterance) {
      return;
    }

    utterance.onstart = () => {
      this.isPlaying = true;
      this.renderButtonOn();
    };
    utterance.onend = () => {
      this.isPlaying = false;
      this.renderButtonOff();
    };

    this.synth.speak(utterance);
  }
}

When the user no longer wants to hear the speech output, indicated either via another press of the button or by pressing the Escape key, we call cancel() from the SpeechSynthesis API.

stopSpeech() {
  if (this.synth) {
    this.synth.cancel();
    this.isPlaying = false;
    this.renderButtonOff();
  }
}

Once the custom HTML element is fully defined, we register it with the desired tag name, speech-output-button:

customElements.define("speech-output-button", SpeechOutputButton);

To use this custom speech-output-button element in a chat application, we construct it dynamically each time that we've received a full response from an LLM, and call setAttribute to pass in the text to be spoken:

const speechOutput = document.createElement("speech-output-button");
speechOutput.setAttribute("text", answer);
messageDiv.appendChild(speechOutput);

You can see the full custom HTML element code in speech-output.js and the usage in index.html. This button also uses the same pulsing animation for the active state, defined in styles.css.

Acknowledgments
I want to give a huge shout-out to John Aziz for his amazing work adding speech input and output to the azure-search-openai-demo, as that was the basis for the code I shared in this blog post.
How to Integrate Playwright MCP for AI-Driven Test Automation

Test automation has come a long way, from scripted flows to self-healing and now AI-driven testing. With the introduction of the Model Context Protocol (MCP), Playwright can now interact with AI models and external tools to make smarter testing decisions. This guide walks you through integrating MCP with Playwright in VS Code, starting from the basics, enabling you to build smarter, adaptive tests today.

What Is Playwright MCP?
Playwright: an open-source framework for web testing and automation. It supports multiple browsers (Chromium, Firefox, and WebKit) and offers robust features like auto-wait and screenshot capture, along with great tooling such as Codegen and the Trace Viewer.
MCP (Model Context Protocol): a protocol that enables external tools to communicate with AI models or services in a structured, secure way.
By combining Playwright with MCP, you unlock AI-assisted test generation, dynamic test data, and smarter debugging and adaptive workflows.

Why Integrate MCP with Playwright?
- AI-powered test generation: reduce manual scripting.
- Dynamic context awareness: tests adapt to real-time data.
- Improved debugging: AI can suggest fixes for failing tests.
- Smarter locator selection: AI helps pick stable, reliable selectors to reduce flaky tests.
- Natural language instructions: write or trigger tests using plain English prompts.

Getting Started in VS Code

Prerequisites
- Node.js: download from nodejs.org; minimum version v18.0.0 or higher (recommended: latest LTS). Check your version with: node --version
- Playwright: install with: npm install @playwright/test

Step 1: Create Project Folder

mkdir playwrightMCP-demo
cd playwrightMCP-demo

Step 2: Initialize Project

npm init playwright@latest

Step 3: Install MCP Server for VS Code
Navigate to GitHub - microsoft/playwright-mcp: Playwright MCP server and click Install Server for VS Code. Search for "MCP: Open user configuration" (type ">mcp" in the search box). You will see that a file named mcp.json has been created in your user app data folder, containing the server details:

{
  "servers": {
    "playwright": {
      "command": "npx",
      "args": [ "@playwright/mcp@latest" ],
      "type": "stdio"
    }
  },
  "inputs": []
}

Alternatively, install an MCP server directly from the GitHub MCP server registry using the Extensions view in VS Code.

Verify installation: open Copilot Chat → select Agent Mode → click Configure Tools → confirm microsoft/playwright-mcp appears in the list.

Step 4: Create a Simple Test Using MCP
Once your project and MCP setup are ready in VS Code, you can create a simple test that demonstrates MCP's capabilities. MCP can help in multiple scenarios; below is an example of test generation using AI.

Scenario: AI-assisted test generation - use natural language prompts to generate Playwright tests automatically.

Test scenario: validate that a user can switch the Playwright documentation language dropdown to Python, search for "Frames," and navigate to the Frames documentation page. Confirm that the page heading correctly displays "Frames."

Sample prompt to use in VS Code (Copilot Agent Mode):

Create a Playwright automated test in JavaScript that verifies navigation to the 'Frames' documentation page following the steps below, and be specific about locators to avoid strict mode violation errors:
1. Navigate to the Playwright documentation.
2. Select "Python" from the dropdown options, currently labelled "Node.js".
3. Type the keyword "Frames" into the search box.
4. Click the search result for the Frames documentation page.
5. Verify that the page header reads "Frames".
6. Log success or provide a failure message with details.

Copilot will generate the test automatically in your tests folder.
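For illustration only, here is a sketch of the kind of test Copilot might produce for the prompt above. The exact locators, waits, and wording will differ in the actual generated file; the selectors below are assumptions about the playwright.dev docs site.

// tests/frames-docs.spec.ts - sketch of a possible generated test (not actual Copilot output)
import { test, expect } from '@playwright/test';

test('navigates to the Frames documentation page', async ({ page }) => {
  // 1. Open the Playwright documentation
  await page.goto('https://playwright.dev/docs/intro');

  // 2. Switch the language dropdown from Node.js to Python (assumed menu structure)
  await page.getByRole('button', { name: 'Node.js' }).click();
  await page.getByRole('link', { name: 'Python', exact: true }).click();

  // 3. Search for "Frames" (assumed search control labels)
  await page.getByRole('button', { name: 'Search' }).click();
  await page.getByRole('searchbox').fill('Frames');

  // 4. Open the Frames documentation result
  await page.getByRole('link', { name: /Frames/ }).first().click();

  // 5. Verify the page heading
  await expect(page.getByRole('heading', { level: 1, name: 'Frames' })).toBeVisible();

  // 6. Log success (Playwright reports failures with details automatically)
  console.log('Frames documentation page verified successfully');
});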
Step 5: Run Test

npx playwright test

Conclusion
Integrating Playwright with MCP in VS Code helps you build smarter, adaptive tests without adding complexity. Start small, follow best practices, and scale as you grow.

Note: installation steps may vary depending on your environment. Refer to MCP Registry · GitHub for the latest instructions.
Introducing the JS AI Build-a-thon

We're entering a future where AI-first and agentic developer experiences will shape how we build - and you don't want to be left behind. This isn't your average hackathon. It's a hands-on, quest-driven learning experience designed for developers, packed with:
- Interactive quests that guide you step by step - from your first prototype to production-ready apps
- Community-powered support via our dedicated Discord and local, community-led study jams
- Showcase moments to share your journey, get inspired, and celebrate what you build
Whether you're just starting your AI journey or looking to sharpen your skills with frameworks like LangChain.js, tools like the Azure AI Foundry and AI Toolkit extensions, or diving deeper into agentic app design - this is your moment to start building.
Microsoft AI Agents Hack April 8-30th 2025

Build, Innovate, and #Hacktogether

Learn from 20+ expert-led sessions streamed live on YouTube, covering top frameworks like Semantic Kernel, AutoGen, the new Azure AI Agents SDK, and the Microsoft 365 Agents SDK. Get hands-on experience, unleash your creativity, and build powerful AI agents - then submit your hack for a chance to win amazing prizes!

Key Dates
- Expert sessions: April 8th 2025 - April 30th 2025
- Hack submission deadline: April 30th 2025, 11:59 PM PST
Don't miss out - join us and start building the future of AI!

Registration
Register now! That form will register you for the hackathon. Afterwards, browse through the live stream schedule below and register for the sessions you're interested in. Once you're registered, introduce yourself and look for teammates!

Project Submission
Once your hack is ready, follow the submission process.

Prizes and Categories
Projects will be evaluated by a panel of judges, including Microsoft engineers, product managers, and developer advocates. Judging criteria will include innovation, impact, technical usability, and alignment with the corresponding hackathon category. Each winning team in the categories below will receive a prize.
- Best Overall Agent - $20,000
- Best Agent in Python - $5,000
- Best Agent in C# - $5,000
- Best Agent in Java - $5,000
- Best Agent in JavaScript/TypeScript - $5,000
- Best Copilot Agent (using Microsoft Copilot Studio or Microsoft 365 Agents SDK) - $5,000
- Best Azure AI Agent Service Usage - $5,000
Each team can only win in one category. All participants who submit a project will receive a digital badge.

Stream Schedule
The series starts with a kick-off for all developers, and then dives into specific tracks for Python, Java, C#, and JavaScript developers. The Copilots track will focus on building intelligent copilots with Microsoft 365 and Copilot Studio.
English

Week 1: April 8th-11th
Day/Time | Topic | Track
4/8 09:00 AM PT | AI Agents Hackathon Kickoff | All
4/9 09:00 AM PT | Build your code-first app with Azure AI Agent Service | Python
4/9 12:00 PM PT | AI Agents for Java using Azure AI Foundry | Java
4/9 03:00 PM PT | Build your code-first app with Azure AI Agent Service | Python
4/10 04:00 AM PT | Building Secure and Intelligent Copilots with Microsoft 365 | Copilots
4/10 09:00 AM PT | Overview of Microsoft 365 Copilot Extensibility | Copilots
4/10 12:00 PM PT | Transforming business processes with multi-agent AI using Semantic Kernel | Python
4/10 03:00 PM PT | Build your code-first app with Azure AI Agent Service (.NET) | C#

Week 2: April 14th-18th
Day/Time | Topic | Track
4/15 07:00 AM PT | Your first AI Agent in JS with Azure AI Agent Service | JS
4/15 09:00 AM PT | Building Agentic Applications with AutoGen v0.4 | Python
4/15 12:00 PM PT | AI Agents + .NET Aspire | C#
4/15 03:00 PM PT | Prototyping AI Agents with GitHub Models | Python
4/16 04:00 AM PT | Multi-agent AI apps with Semantic Kernel and Azure Cosmos DB | C#
4/16 06:00 AM PT | Building declarative agents with Microsoft Copilot Studio & Teams Toolkit | Copilots
4/16 09:00 AM PT | Building agents with an army of models from the Azure AI model catalog | Python
4/16 12:00 PM PT | Multi-Agent API with LangGraph and Azure Cosmos DB | Python
4/16 03:00 PM PT | Mastering Agentic RAG | Python
4/17 06:00 AM PT | Build your own agent with OpenAI, .NET, and Copilot Studio | C#
4/17 09:00 AM PT | Building smarter Python AI agents with code interpreters | Python
4/17 12:00 PM PT | Building Java AI Agents using LangChain4j and Dynamic Sessions | Java
4/17 03:00 PM PT | Agentic Voice Mode Unplugged | Python

Week 3: April 21st-25th
Day/Time | Topic | Track
4/21 12:00 PM PT | Knowledge-augmented agents with LlamaIndex.TS | JS
4/22 06:00 AM PT | Building a AI Agent with Prompty and Azure AI Foundry | Python
4/22 09:00 AM PT | Real-time Multi-Agent LLM solutions with SignalR, gRPC, and HTTP based on Semantic Kernel | Python
4/22 10:30 AM PT | Learn Live: Fundamentals of AI agents on Azure | -
4/22 12:00 PM PT | Demystifying Agents: Building an AI Agent from Scratch on Your Own Data using Azure SQL | C#
4/22 03:00 PM PT | VoiceRAG: talk to your data | Python
4/14 06:00 AM PT | Prompting is the New Scripting: Meet GenAIScript | JS
4/23 09:00 AM PT | Building Multi-Agent Apps on top of Azure PostgreSQL | Python
4/23 12:00 PM PT | Agentic RAG with reflection | Python
4/23 03:00 PM PT | Multi-source data patterns for modern RAG apps | C#
4/24 09:00 AM PT | Extending AI Agents with Azure Functions | Python, C#
4/24 12:00 PM PT | Build real time voice agents with Azure Communication Services | Python
4/24 03:00 PM PT | Bringing robots to life: Real-time interactive experiences with Azure OpenAI GPT-4o | Python

Week 4: April 28th-30th
Day/Time | Topic | Track
4/29, 01:00 PM UTC / 06:00 AM PT | Irresponsible AI Agents | Java
4/29, 04:00 PM UTC / 09:00 AM PT | Securing AI agents on Azure | Python

Spanish / Español
See all our Spanish sessions on the Spanish landing page. Consulta todas nuestras sesiones en español en la página de inicio en español.

Portuguese / Português
See our Portuguese sessions on the Portuguese landing page. Veja nossas sessões em português na página de entrada em português.

Chinese / 简体中文
See our Chinese sessions on the Chinese landing page. 请在中文登录页面查看我们的中文课程。

Office Hours
For additional help with your hacks, you can drop by Office Hours in our AI Discord channel.
Here are the Office Hours scheduled so far:
Day/Time | Topic/Hosts
Every Thursday, 12:30 PM PT | Python + AI (English)
Every Monday, 03:00 PM PT | Python + AI (Spanish)

Learning Resources
Access resources here! Join TheSource EHub to explore top picks including trainings, livestreams, repositories, technical guides, blogs, downloads, certifications, and more, all updated monthly. The AI Agent section offers essential resources for creating AI Agents, while other sections provide insights into AI, development tools, and programming languages. You can also post questions in our discussions forum, or chat with attendees in the Discord channel.