Get started

Announcing Public Preview: AI Toolkit for GitHub Copilot Prompt-First Agent Development
This week at GitHub Universe, we're announcing the Public Preview of GitHub Copilot prompt-first agent development in the AI Toolkit for Visual Studio Code. With this release, building powerful AI agents is now simpler and faster: no need to wrestle with complex frameworks or orchestrators. Just start with natural language prompts and let GitHub Copilot guide you from concept to working agent code.

Accelerate Agent Development in VS Code

The AI Toolkit embeds agent development workflows directly into Visual Studio Code and GitHub Copilot, enabling you to transform ideas into production-ready agents within minutes. This unified experience empowers developers and product teams to:

- Select the best model for your agent scenario
- Build and orchestrate agents using the Microsoft Agent Framework
- Trace agent behaviors
- Evaluate agent response quality

Select the best model for your scenario

Models are the foundation for building powerful agents. Using the AI Toolkit, you can already explore and experiment with a wide range of local and remote models. Copilot now recommends models tailored to your agent's needs, helping you make informed choices quickly.

Build and orchestrate agents

Whether you're creating a single agent or designing a multi-agent workflow, Copilot leverages the latest Microsoft Agent Framework to generate robust agent code. You can initiate agent creation with simple prompts and visualize workflows for greater clarity and control:

- Create a single agent using Copilot
- Create a multi-agent workflow using Copilot and visualize workflow execution

Trace agent behaviors

As agents become more sophisticated, understanding their actions is crucial. The AI Toolkit enables tracing via Copilot, collecting local traces and displaying detailed agent calls, all within VS Code.

Evaluate agent response quality

Copilot guides you through structured evaluation, recommending metrics and generating test datasets. Integrate evaluations into your CI/CD pipeline for continuous quality assurance and confident deployments.

Get started and share feedback

This release marks a significant step toward making AI agent development easier and more accessible in Visual Studio Code. Try out the AI Toolkit for Visual Studio Code, share your thoughts, and file issues and suggest features on our GitHub repo. Thank you for being a part of this journey with us!
Introducing the Microsoft Agent Framework: A Unified Foundation for AI Agents and Workflows

The landscape of AI development is evolving rapidly, and Microsoft is at the forefront with the release of the Microsoft Agent Framework, an open-source SDK designed to empower developers to build intelligent, multi-agent systems with ease and precision. Whether you're working in .NET or Python, this framework offers a unified, extensible foundation that merges the best of Semantic Kernel and AutoGen while introducing powerful new capabilities for agent orchestration and workflow design.

- Introducing Microsoft Agent Framework: The Open-Source Engine for Agentic AI Apps | Azure AI Foundry Blog
- Introducing Microsoft Agent Framework | Microsoft Azure Blog

Why Another Agent Framework?

Both Semantic Kernel and AutoGen have pioneered agentic development: Semantic Kernel with its enterprise-grade features, and AutoGen with its research-driven abstractions. The Microsoft Agent Framework is the next generation of both, built by the same teams to unify their strengths:

- AutoGen's simplicity in multi-agent orchestration
- Semantic Kernel's robustness in thread-based state management, telemetry, and type safety
- New capabilities like graph-based workflows, checkpointing, and human-in-the-loop support

This convergence means developers no longer have to choose between experimentation and production. The Agent Framework is designed to scale from single-agent prototypes to complex, enterprise-ready systems.

Core Capabilities

AI Agents

AI agents are autonomous entities powered by LLMs that can process user inputs, make decisions, call tools and MCP servers, and generate responses. They support providers like Azure OpenAI, OpenAI, and Azure AI, and can be enhanced with:

- Agent threads for state management
- Context providers for memory
- Middleware for action interception
- MCP clients for tool integration

Use cases include customer support, education, code generation, research assistance, and more, especially where tasks are dynamic and underspecified.

Workflows

Workflows are graph-based orchestrations that connect multiple agents and functions to perform complex, multi-step tasks. They support:

- Type-based routing
- Conditional logic
- Checkpointing
- Human-in-the-loop interactions
- Multi-agent orchestration patterns (sequential, concurrent, hand-off, Magentic)

Workflows are ideal for structured, long-running processes that require reliability and modularity.

Developer Experience

The Agent Framework is designed to be intuitive and powerful:

- Installation: Python: pip install agent-framework | .NET: dotnet add package Microsoft.Agents.AI (a minimal sketch follows at the end of this post)
- Integration: works with the Foundry SDK, MCP SDK, A2A SDK, and M365 Copilot Agents
- Samples and manifests: explore declarative agent manifests and code samples
- Learning resources: Microsoft Learn modules, AI Agents for Beginners, AI Show demos, and the Azure AI Foundry Discord community

Migration and Compatibility

If you're currently using Semantic Kernel or AutoGen, migration guides are available to help you transition smoothly. The framework is designed to be backward-compatible where possible, and future updates will continue to support community contributions via the GitHub repository.

Important Considerations

The Agent Framework is in public preview. Feedback and issues are welcome on the GitHub repository. When integrating with third-party servers or agents, review data sharing practices and compliance boundaries carefully.
The Microsoft Agent Framework marks a pivotal moment in AI development, bringing together research innovation and enterprise readiness into a single, open-source foundation. Whether you're building your first agent or orchestrating a fleet of them, this framework gives you the tools to do it safely, scalably, and intelligently. Ready to get started? Download the SDK, explore the documentation, and join the community shaping the future of AI agents.
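To make the developer experience above concrete, here is a minimal sketch of a single chat agent using the preview Python package. Treat it as illustrative rather than canonical: the `AzureOpenAIChatClient` class name, its reliance on environment variables for endpoint and credentials, and the `create_agent`/`run` calls reflect the preview samples and may shift as the SDK evolves, so verify against the current documentation.

```python
# pip install agent-framework  (public preview; the API surface may change)
import asyncio

# Assumed import path from the preview samples; verify against the docs.
from agent_framework.azure import AzureOpenAIChatClient

async def main() -> None:
    # The client is assumed to read the Azure OpenAI endpoint, deployment,
    # and credentials from environment variables.
    agent = AzureOpenAIChatClient().create_agent(
        name="DocsHelper",
        instructions="You answer questions concisely and state your assumptions.",
    )
    reply = await agent.run("What is an agent thread used for?")
    print(reply)

asyncio.run(main())
```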
From Cloud to Chip: Building Smarter AI at the Edge with Windows AI PCs

As AI engineers, we've spent years optimizing models for the cloud: scaling inference, wrangling latency, and chasing compute across clusters. But the frontier is shifting. With the rise of Windows AI PCs and powerful local accelerators, the edge is no longer a constraint; it's now a canvas. Whether you're deploying vision models to industrial cameras, optimizing speech interfaces for offline assistants, or building privacy-preserving apps for healthcare, Edge AI is where real-world intelligence meets real-time performance.

Why Edge AI, Why Now?

Edge AI isn't just about running models locally; it's about rethinking the entire lifecycle:

- Latency: decisions in milliseconds, not round-trips to the cloud.
- Privacy: sensitive data stays on-device, enabling HIPAA/GDPR compliance.
- Resilience: offline-first apps that don't break when the network does.
- Cost: reduced cloud compute and bandwidth overhead.

With Windows AI PCs powered by Intel and Qualcomm NPUs, and tools like ONNX Runtime, DirectML, and Olive, developers can now optimize and deploy models with unprecedented efficiency.

What You'll Learn in Edge AI for Beginners

The Edge AI for Beginners curriculum is a hands-on, open-source guide designed for engineers ready to move from theory to deployment.

Multi-Language Support

This content is available in over 48 languages, so you can read and study in your native language.

What You'll Master

This course takes you from fundamental concepts to production-ready implementations, covering:

- Small Language Models (SLMs) optimized for edge deployment
- Hardware-aware optimization across diverse platforms
- Real-time inference with privacy-preserving capabilities
- Production deployment strategies for enterprise applications

Why EdgeAI Matters

Edge AI represents a paradigm shift that addresses critical modern challenges:

- Privacy & Security: process sensitive data locally without cloud exposure
- Real-time Performance: eliminate network latency for time-critical applications
- Cost Efficiency: reduce bandwidth and cloud computing expenses
- Resilient Operations: maintain functionality during network outages
- Regulatory Compliance: meet data sovereignty requirements

Edge AI

Edge AI refers to running AI algorithms and language models locally on hardware, close to where data is generated, without relying on cloud resources for inference. It reduces latency, enhances privacy, and enables real-time decision-making.

Core principles:

- On-device inference: AI models run on edge devices (phones, routers, microcontrollers, industrial PCs)
- Offline capability: functions without persistent internet connectivity
- Low latency: immediate responses suited for real-time systems
- Data sovereignty: keeps sensitive data local, improving security and compliance

Small Language Models (SLMs)

SLMs like Phi-4, Mistral-7B, Qwen, and Gemma are optimized versions of larger LLMs, trained or distilled for:

- Reduced memory footprint: efficient use of limited edge device memory
- Lower compute demand: optimized for CPU and edge GPU performance
- Faster startup times: quick initialization for responsive applications

They unlock powerful NLP capabilities while meeting the constraints of:

- Embedded systems: IoT devices and industrial controllers
- Mobile devices: smartphones and tablets with offline capabilities
- IoT devices: sensors and smart devices with limited resources
- Edge servers: local processing units with limited GPU resources
- Personal computers: desktop and laptop deployment scenarios

Course Modules & Navigation
Course duration: 10 hours of content.

| Module | Topic | Focus Area | Key Content | Level | Duration |
|---|---|---|---|---|---|
| 📖 00 | Introduction to EdgeAI | Foundation & Context | EdgeAI Overview • Industry Applications • SLM Introduction • Learning Objectives | Beginner | 1-2 hrs |
| 📚 01 | EdgeAI Fundamentals | Cloud vs Edge AI comparison | EdgeAI Fundamentals • Real World Case Studies • Implementation Guide • Edge Deployment | Beginner | 3-4 hrs |
| 🧠 02 | SLM Model Foundations | Model families & architecture | Phi Family • Qwen Family • Gemma Family • BitNET • μModel • Phi-Silica | Beginner | 4-5 hrs |
| 🚀 03 | SLM Deployment Practice | Local & cloud deployment | Advanced Learning • Local Environment • Cloud Deployment | Intermediate | 4-5 hrs |
| ⚙️ 04 | Model Optimization Toolkit | Cross-platform optimization | Introduction • Llama.cpp • Microsoft Olive • OpenVINO • Apple MLX • Workflow Synthesis | Intermediate | 5-6 hrs |
| 🔧 05 | SLMOps Production | Production operations | SLMOps Introduction • Model Distillation • Fine-tuning • Production Deployment | Advanced | 5-6 hrs |
| 🤖 06 | AI Agents & Function Calling | Agent frameworks & MCP | Agent Introduction • Function Calling • Model Context Protocol | Advanced | 4-5 hrs |
| 💻 07 | Platform Implementation | Cross-platform samples | AI Toolkit • Foundry Local • Windows Development | Advanced | 3-4 hrs |
| 🏭 08 | Foundry Local Toolkit | Production-ready samples | Sample applications (see details below) | Expert | 8-10 hrs |

Each module includes Jupyter notebooks, code samples, and deployment walkthroughs, perfect for engineers who learn by doing.

Developer Highlights

- 🔧 Olive: Microsoft's optimization toolchain for quantization, pruning, and acceleration.
- 🧩 ONNX Runtime: cross-platform inference engine with support for CPU, GPU, and NPU (a minimal usage sketch appears at the end of this post).
- 🎮 DirectML: GPU-accelerated ML API for Windows, ideal for gaming and real-time apps.
- 🖥️ Windows AI PCs: devices with built-in NPUs for low-power, high-performance inference.

Local AI: Beyond the Edge

Local AI isn't just about inference; it's about autonomy. Imagine agents that:

- Learn from local context
- Adapt to user behavior
- Respect privacy by design

With tools like Agent Framework, Azure AI Foundry, Windows Copilot Studio, and Foundry Local, developers can orchestrate local agents that blend LLMs, sensors, and user preferences, all without cloud dependency.

Try It Yourself

Ready to get started? Clone the Edge AI for Beginners GitHub repo, run the notebooks, and deploy your first model to a Windows AI PC or IoT device. Whether you're building smart kiosks, offline assistants, or industrial monitors, this curriculum gives you the scaffolding to go from prototype to production.
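As promised in the highlights above, here is a minimal sketch of local inference with ONNX Runtime in Python. The model path and input shape are placeholders for whatever model you export; on a Windows AI PC with the onnxruntime-directml package installed, you can swap the execution provider to run through DirectML instead of the CPU.

```python
# pip install onnxruntime numpy
import numpy as np
import onnxruntime as ort

# "model.onnx" is a placeholder for any exported model (e.g. one produced by Olive).
# With onnxruntime-directml installed, pass providers=["DmlExecutionProvider"]
# to target the GPU/NPU via DirectML on a Windows AI PC.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's declared input so we can feed correctly shaped data.
inp = session.get_inputs()[0]
print(inp.name, inp.shape, inp.type)

# Dummy image-like input; replace the shape with your model's real one.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {inp.name: x})
print(outputs[0].shape)
```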
Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour

Unlock the power of Microsoft AI with redeliverable technical presentations, hands-on workshops, and open-source curriculum from the Microsoft AI Tour! Whether you're a Microsoft MVP, Developer, or IT Professional, these expertly crafted resources empower you to teach, train, and lead AI adoption in your community.

Explore top breakout sessions covering GitHub Copilot, Azure AI, Generative AI, and security best practices, designed to simplify AI integration and accelerate digital transformation. Dive into interactive workshops that provide real-world applications of AI technologies. Take it a step further with Microsoft's Open-Source AI Curriculum, offering beginner-friendly courses on AI, Machine Learning, Data Science, Cybersecurity, and GitHub Copilot, perfect for upskilling teams and fostering innovation.

Don't just learn, lead. Access these resources, host impactful training sessions, and drive AI adoption in your organization. Start sharing today! Explore now: Microsoft AI Tour Resources.
How to Master GitHub Copilot: Build, Prompt, Deploy Smarter

Mastering GitHub Copilot: Build, Prompt, Deploy Smarter is a free, hands-on workshop designed to help developers go beyond autocomplete and unlock the true power of AI-assisted coding. Instead of toy examples, this course walks you through real-world software engineering challenges: messy codebases, multi-language projects, cloud deployments, and legacy system upgrades. You'll learn practical skills like prompt engineering, advanced Copilot features, and AI pair programming techniques that make you faster, sharper, and more creative.

Whether you're a junior developer or a seasoned architect, mastering GitHub Copilot will help you:

- Reduce cognitive load and focus on system design
- Accelerate onboarding for new engineers
- Write cleaner, more consistent code
- Automate repetitive tasks to free up time for innovation

AI coding tools like GitHub Copilot are no longer optional—they're essential. This workshop gives you the skills to collaborate with Copilot effectively and stay competitive in the age of AI-powered development.
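To give a flavor of the prompt engineering the workshop teaches, here is a small illustrative sketch (ours, not the course's) of how a specific comment gives Copilot far more to work with than a vague one:

```python
# Vague prompt -- Copilot has to guess intent:
# sort the list

# Specific prompt -- names the inputs, output, ordering, and side effects:
# Given a list of (timestamp, value) tuples, return a NEW list sorted by
# timestamp ascending, breaking ties by value descending; do not mutate input.
def sort_readings(readings: list[tuple[float, float]]) -> list[tuple[float, float]]:
    return sorted(readings, key=lambda r: (r[0], -r[1]))

print(sort_readings([(2.0, 1.0), (1.0, 3.0), (1.0, 5.0)]))
# [(1.0, 5.0), (1.0, 3.0), (2.0, 1.0)]
```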
Join Us for a Technical Deep Dive and Q&A on Foundry Local - LLMs on device

Join us for an Ask Me Anything with the Foundry Local team on October 14th, 2025! Discover how Foundry Local is redefining edge AI with powerful features like on-device inference, enabling you to run models directly on your hardware, cutting costs and keeping your data secure. Whether you're customizing models to fit unique use cases or integrating seamlessly via SDKs, APIs, or the CLI, Foundry Local offers scalable pathways to Azure AI Foundry as your needs evolve. It's the perfect solution for environments with limited connectivity, sensitive data requirements, low-latency demands, or early-stage experimentation before cloud deployment. If you're building smarter, leaner, and more private AI workflows, this AMA is your chance to dive deep with the team behind it all.

What is Foundry Local?

Foundry Local is a set of development tools designed to help you build and evaluate LLM applications on your local machine. It provides a curated collection of production-quality tools, including evaluation and prompt engineering capabilities, that are fully compatible with Azure AI. This allows for a seamless transition of your work from your local environment to the cloud. Don't miss this opportunity to connect with our experts and enhance your understanding of local LLM development.

Foundry Local is also an on-device AI inference solution offering performance, privacy, customization, and cost advantages. It integrates seamlessly into your existing workflows and applications through an intuitive CLI, SDK, and REST API.

Key features

- On-Device Inference: run models locally on your own hardware, reducing your costs while keeping all your data on your device.
- Model Customization: select from preset models or use your own to meet specific requirements and use cases.
- Cost Efficiency: eliminate recurring cloud service costs by using your existing hardware, making AI more accessible.
- Seamless Integration: connect with your applications through an SDK, API endpoints, or the CLI, with easy scaling to Azure AI Foundry as your needs grow.

How to join

Register to join the Azure AI Foundry Discord Community Event, 14th Oct 2025, 9am Pacific Time (UTC−08:00).

Unlock Accelerated Local LLM Development

Discover how Foundry Local can enhance your development process and explore the possibilities for building robust LLM applications. Whether you're a seasoned AI developer or just getting started, this session is your chance to get hands-on insights into the innovative world of Azure AI Foundry.

Event highlights:

- An in-depth overview of the Foundry Local CLI and SDK
- Interactive demo with step-by-step examples
- Best practices for local AI inference and models
- Transitioning your local development to cloud solutions, or vice versa

Why attend?

- Gain expert insights into Foundry Local, and ask questions about using it
- Network with fellow AI professionals and developers in the Azure AI Foundry community
- Enhance your AI development skills with practical examples
- Stay at the forefront of LLM application development

Speaker

Maanav Dalal, Product Manager, Foundry Local, Microsoft. Maanav is a PM on the AI Frameworks team. He's super inquisitive about the ways you use AI in daily life, so be encouraged to strike up a conversation with him about that. LinkedIn Profile
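If you want a head start before the AMA, here is a hedged sketch of calling a locally served model through Foundry Local's OpenAI-compatible REST endpoint, using the standard openai Python client. The port and model alias below are assumptions for illustration; check your local Foundry Local service for the actual endpoint and model names before running.

```python
# pip install openai
from openai import OpenAI

# Assumptions: Foundry Local is running, exposes an OpenAI-compatible
# endpoint on this port, and a model with this alias has been downloaded.
# Verify both against your local Foundry Local service.
client = OpenAI(base_url="http://localhost:5273/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="phi-3.5-mini",  # illustrative alias; list your local models first
    messages=[{"role": "user", "content": "In one sentence, why run LLMs on-device?"}],
)
print(response.choices[0].message.content)
```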
Announcing MSGraph Provider Public Preview and the Microsoft Terraform VSCode Extension

We are thrilled to announce two exciting developments in the Microsoft ecosystem for Terraform infrastructure-as-code (IaC) practitioners: the public preview of the Terraform Microsoft Graph (MSGraph) provider and the release of the Microsoft Terraform Visual Studio Code (VSCode) extension. These innovations are designed to streamline your workflow, empower your automation, and make managing Microsoft cloud resources easier than ever.

Public Preview: Terraform Microsoft Graph (MSGraph) Provider

The Terraform MSGraph provider empowers you to manage Entra APIs like privileged identity management, as well as M365 Graph APIs like SharePoint sites, from day 0 by leveraging the power and flexibility of HashiCorp Configuration Language (HCL) in Terraform.

```hcl
resource "msgraph_resource" "application" {
  url = "applications"
  body = {
    displayName = "My Application"
  }
  response_export_values = {
    all    = "@"
    app_id = "appId"
  }
}

output "app_id" {
  value = msgraph_resource.application.output.app_id
}

output "all" {
  // it will output the whole response
  value = msgraph_resource.application.output.all
}
```

Historically, Terraform users could utilize the `azuread` provider to manage Entra features like users, groups, service principals, and applications. The new `msgraph` provider also supports these features and extends functionality to all beta and v1 Microsoft Graph endpoints.

Querying role assignments for a service principal

The example below shows how to use the `msgraph` provider to grant app permissions to a service principal:

```hcl
locals {
  MicrosoftGraphAppId = "00000003-0000-0000-c000-000000000000"

  # AppRoleAssignment
  userReadAllAppRoleId = one([for role in data.msgraph_resource.servicePrincipal_msgraph.output.all.value[0].appRoles : role.id if role.value == "User.Read.All"])
  userReadWriteRoleId  = one([for role in data.msgraph_resource.servicePrincipal_msgraph.output.all.value[0].oauth2PermissionScopes : role.id if role.value == "User.ReadWrite"])

  # ServicePrincipal
  MSGraphServicePrincipalId         = data.msgraph_resource.servicePrincipal_msgraph.output.all.value[0].id
  TestApplicationServicePrincipalId = msgraph_resource.servicePrincipal_application.output.all.id
}

data "msgraph_resource" "servicePrincipal_msgraph" {
  url = "servicePrincipals"
  query_parameters = {
    "$filter" = ["appId eq '${local.MicrosoftGraphAppId}'"]
  }
  response_export_values = {
    all = "@"
  }
}

resource "msgraph_resource" "application" {
  url = "applications"
  body = {
    displayName = "My Application"
    requiredResourceAccess = [
      {
        resourceAppId = local.MicrosoftGraphAppId
        resourceAccess = [
          {
            id   = local.userReadAllAppRoleId
            type = "Scope"
          },
          {
            id   = local.userReadWriteRoleId
            type = "Scope"
          }
        ]
      }
    ]
  }
  response_export_values = {
    appId = "appId"
  }
}

resource "msgraph_resource" "servicePrincipal_application" {
  url = "servicePrincipals"
  body = {
    appId = msgraph_resource.application.output.appId
  }
  response_export_values = {
    all = "@"
  }
}

resource "msgraph_resource" "appRoleAssignment" {
  url = "servicePrincipals/${local.MSGraphServicePrincipalId}/appRoleAssignments"
  body = {
    appRoleId   = local.userReadAllAppRoleId
    principalId = local.TestApplicationServicePrincipalId
    resourceId  = local.MSGraphServicePrincipalId
  }
}
```

SharePoint & Outlook Notifications

With your service principals properly configured, you can set up M365 endpoint workflows such as an Outlook notification template list, as shown below.
The actual service principal setup has been omitted from this code sample for the sake of brevity, but you will need the Sites.Manage.All, Sites.ReadWrite.All, User.Read, and User.Read.All permissions for this example to work:

```hcl
data "msgraph_resource" "sharepoint_site_by_path" {
  url = "sites/microsoft.sharepoint.com:/sites/msgraphtest:"
  response_export_values = {
    full_response = "@"
    site_id       = "id || ''"
  }
}

resource "msgraph_resource" "notification_templates_list" {
  url = "sites/${data.msgraph_resource.sharepoint_site_by_path.output.site_id}/lists"
  body = {
    displayName = "DevOps Notification Templates"
    description = "Centrally managed email templates for DevOps automation"
    template    = "genericList"
    columns = [
      {
        name = "TemplateName"
        text = {
          allowMultipleLines          = false
          appendChangesToExistingText = false
          linesForEditing             = 1
          maxLength                   = 255
        }
      },
      {
        name = "Subject"
        text = {
          allowMultipleLines          = false
          appendChangesToExistingText = false
          linesForEditing             = 1
          maxLength                   = 500
        }
      },
      {
        name = "HtmlBody"
        text = {
          allowMultipleLines          = true
          appendChangesToExistingText = false
          linesForEditing             = 10
          maxLength                   = 10000
        }
      },
      {
        name = "Recipients"
        text = {
          allowMultipleLines          = true
          appendChangesToExistingText = false
          linesForEditing             = 3
          maxLength                   = 1000
        }
      },
      {
        name = "TriggerConditions"
        text = {
          allowMultipleLines          = true
          appendChangesToExistingText = false
          linesForEditing             = 5
          maxLength                   = 2000
        }
      }
    ]
  }
  response_export_values = {
    list_id   = "id"
    list_name = "displayName"
    web_url   = "webUrl"
  }
}
```

The MSGraph provider is to AzureAD as the AzAPI provider is to AzureRM. Since support for resource types is automatic, you can access the latest features and functionality as soon as they're released via the provider. AzureAD will continue to serve as the convenience-layer implementation of a subset of Entra APIs. We invite you to try the new provider today:

- Deploy your first msgraph resources
- Check out the registry page
- Visit the provider GitHub

Introducing the Microsoft Terraform VSCode Extension

The new official Microsoft Terraform extension for Visual Studio Code consolidates AzureRM, AzAPI, and MSGraph VSCode support into a single powerful extension. The extension supports exporting Azure resources as Terraform code, as well as IntelliSense, syntax highlighting, and code sample generation. It replaces the Azure Terraform and AzAPI VSCode extensions and adds some new features.

Installation & Migration

New users can install the extension by searching "Microsoft Terraform" within the Visual Studio Marketplace or their "Extensions" tab. Users can also click this link to the Visual Studio Marketplace. Users of the "Azure Terraform" extension can navigate to the "Extensions" tab, select the old extension, and select the "Migrate" button to move to the new extension. Users of the "Terraform AzAPI Provider" extension will be directed to the new extension.

New Features

Export Azure Resources as Terraform

This feature allows you to export existing Azure resources as Terraform configuration blocks using Azure Export for Terraform, helping you migrate existing Azure resources to Terraform-managed infrastructure:

1. Open the Command Palette (Command+Shift+P on macOS, Ctrl+Shift+P on Windows/Linux).
2. Search for and select the command Microsoft Terraform: Export Azure Resource as Terraform.
3. Follow the prompts to select the Azure subscription and resource group containing the resources you want to export.
4. Select the azurerm provider or the azapi provider to export the resources.
The extension will generate the Terraform configuration blocks for the selected resources and display them in a new editor tab.

Support for MSGraph

The new extension comes fully equipped with IntelliSense, code completion, and code samples, just like the AzAPI provider. See the next section for recorded examples of these features within the AzureRM & AzAPI providers.

Preexisting Features

- Intelligent Code Completion: benefit from context-aware suggestions, like property names or resource types.
- Code Samples: quickly insert code samples for your resources.
- Paste as AzAPI: copy your existing resource JSON or ARM templates into VSCode with the Microsoft Terraform extension, and it will automatically convert your code into AzAPI. For example, a resource JSON taken from the Azure Portal can be pasted into VSCode as AzAPI.
- Migrate AzureRM to AzAPI: move existing AzureRM code to the AzAPI provider whenever you wish. Read more in the Guide to migrate AzureRM resources to AzAPI.

Feedback

We value your feedback! You can share your experience with the Microsoft Terraform extension by running the command Microsoft Terraform: Show Survey from the Command Palette. Your input helps us improve the extension and better serve your needs.

Conclusion

Whether you are managing traditional Azure resources, modern Microsoft Graph environments, or a combination of both, the new MSGraph provider and Microsoft Terraform VSCode extension are designed to help you deliver robust, reliable infrastructure—faster and with greater confidence. Stay tuned for further updates, workshops, and community events as we continue to evolve these offerings. Your feedback and participation are invaluable as we build the next generation of infrastructure automation together.
Strategic Solutions for Seamless Integration of Third-Party SaaS

Modern systems must be modular and interoperable by design. Integration is no longer a feature; it's a requirement. Developers are expected to build architectures that connect easily with third-party platforms, but too often, core systems are designed in isolation. This disconnect creates friction for downstream teams and slows delivery.

At Microsoft, SaaS platforms like SAP SuccessFactors and Eightfold support Talent Acquisition by handling functions such as requisition tracking, application workflows, and interview coordination. These tools help reduce costs and free up engineering focus for high-priority areas like Azure and AI. The real challenge is integrating them with internal systems such as Demand Planning, Offer Management, and Employee Central.

This blog post outlines a strategy centered on two foundational components: an Integration and Orchestration Layer, and a Messaging Platform. Together, these enable real-time communication, consistent data models, and scalable integration (a sketch of such an event contract follows below). While Talent Acquisition is the use case here, the architectural patterns apply broadly across domains. Whether you're embedding AI pipelines, managing edge deployments, or building platform services, thoughtful integration needs to be built into the foundation, not bolted on later.
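To make the messaging idea concrete, here is a small, hypothetical sketch of what a versioned event contract between a SaaS connector and internal consumers might look like. The event name, fields, and systems are illustrative only; this is not Microsoft's actual schema.

```python
import json
import uuid
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class CandidateStatusChanged:
    """Hypothetical event published when a SaaS system updates a candidate."""
    candidate_id: str
    requisition_id: str
    new_status: str            # e.g. "interview_scheduled"
    source_system: str         # e.g. "SuccessFactors"
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    schema_version: str = "1.0"  # consumers validate this before processing

def to_message(event: CandidateStatusChanged) -> str:
    # Serialize to the wire format the messaging platform carries.
    return json.dumps(asdict(event))

msg = to_message(
    CandidateStatusChanged("c-123", "r-456", "interview_scheduled", "SuccessFactors")
)
print(msg)
```

A shared, versioned contract like this is what lets the orchestration layer fan events out to systems like Demand Planning or Offer Management without each consumer coupling to a vendor's API shape.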
JS AI Build‑a‑thon: Wrapping Up an Epic June 2025!

After weeks of building, testing, and learning, we're officially wrapping up the first-ever JS AI Build-a-thon 🎉. This wasn't your average coding challenge. This was a hands-on journey where JavaScript and TypeScript developers dove deep into real-world AI concepts, from local GenAI prototyping to building intelligent agents and deploying production-ready apps. Whether you joined from the start or hopped on midway, you built something that matters, and that's worth celebrating.

Replay the Journey

No worries if you joined late or want to revisit any part of the journey. The JS AI Build-a-thon was designed to let you learn at your own pace, so whether you're starting now or polishing up your final project, here's your complete quest map:

- Build-a-thon setup guide: https://aka.ms/JSAIBuildathonSetup
- Quest 1: 🔧 Build your first GenAI app locally with GitHub Models 👉🏽 https://aka.ms/JSAIBuildathonQuest1
- Quest 2: ☁️ Move your AI prototype to Azure AI Foundry 👉🏽 https://aka.ms/JSAIBuildathonQuest
- Quest 3: 🎨 Add a chat UI using Vite + Lit 👉🏽 https://aka.ms/JSAIBuildathonQuest3
- Quest 4: 📄 Enhance your app with RAG (Chat with Your Data) 👉🏽 https://aka.ms/JSAIBuildathonQuest4
- Quest 5: 🧠 Add memory and context to your AI app 👉🏽 https://aka.ms/JSAIBuildathonQuest5
- Quest 6: ⚙️ Build your first AI Agent using AI Foundry 👉🏽 https://aka.ms/JSAIBuildathonQuest6
- Quest 7: 🧩 Equip your agent with tools from an MCP server 👉🏽 https://aka.ms/JSAIBuildathonQuest7
- Quest 8: 💬 Ground your agent with real-time search using Bing 👉🏽 https://aka.ms/JSAIBuildathonQuest8
- Quest 9: 🚀 Build a real-world AI project with full-stack templates 👉🏽 https://aka.ms/JSAIBuildathonQuest9
- Link to our space in the AI Discord community: https://aka.ms/JSAIonDiscord

Project Submission Guidelines

📌 Quest 9 is where it all comes together. Participants chose a problem, picked a template, customized it, submitted it, and rallied their community for support!

🏅 Claim Your Badge!

Whether you completed select quests or went all the way, we celebrate your learning. If you participated in the June 2025 JS AI Build-a-thon, make sure to submit the Participation Form to receive your participation badge recognizing your commitment to upskilling in AI with JavaScript/TypeScript.

What's Next?

We're not done. In fact, we're just getting started. We're already cooking up JS AI Build-a-thon v2, which will introduce:

- Running everything locally with Foundry Local
- Real-world RAG with vector databases
- Advanced agent patterns with remote MCPs
- And much more, based on your feedback

Want to shape what comes next? Drop your ideas in the participation form and in our Discord. In the meantime, add these resources to your JavaScript + AI dev pack:

- 🔗 Microsoft for JavaScript developers
- 📚 Generative AI for Beginners with JavaScript

Wrap-Up

This build-a-thon showed what's possible when developers are empowered to learn by doing. You didn't just follow tutorials — you shipped features, connected services, and created working AI experiences. We can't wait to see what you build next.

👉 Bookmark the repo
👉 Join the community on the Azure AI Foundry Discord Server!
👉 Stay building

Until next time — keep coding, keep shipping!
Quest 4 - I want to connect my AI prototype to external data using RAG

In Quest 4 of the JS AI Build-a-thon, you'll integrate Retrieval-Augmented Generation (RAG) to give your AI apps access to external data like PDFs. You'll explore embeddings, vector stores, and how to use the pdf-parse library in JavaScript to build more context-aware apps, with challenges to push you even further. A concept sketch of the retrieval step follows below.
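Since the quest's own code is in JavaScript, here is a deliberately tiny, language-agnostic illustration (written in Python, with made-up vectors) of the core retrieval idea: embed your document chunks, embed the question, pick the nearest chunk, and hand it to the model as context.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "vector store": in the quest, chunks come from pdf-parse and the
# vectors from a real embedding model; these numbers are made up.
store = {
    "PDF p.1: refunds are accepted within 30 days": [0.9, 0.1, 0.0],
    "PDF p.2: shipping takes 3-5 business days":    [0.1, 0.8, 0.2],
}
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "How long are refunds?"

best_chunk = max(store, key=lambda chunk: cosine(store[chunk], query_vec))
print("Context to prepend to the prompt:", best_chunk)
```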