<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>rss.livelink.threads-in-node</title>
    <link>https://techcommunity.microsoft.com/t5/education-sector/ct-p/EducationSector</link>
    <description>rss.livelink.threads-in-node</description>
    <pubDate>Thu, 30 Apr 2026 11:10:33 GMT</pubDate>
    <dc:creator>EducationSector</dc:creator>
    <dc:date>2026-04-30T11:10:33Z</dc:date>
    <item>
      <title>Hands-on Session: From idea to interactive lesson with Microsoft Learning Zone</title>
      <link>https://techcommunity.microsoft.com/t5/education-blog/hands-on-session-from-idea-to-interactive-lesson-with-microsoft/ba-p/4515667</link>
      <description>&lt;P&gt;Join us on &lt;STRONG&gt;Tuesday, May 12th at 8:00 AM Pacific&lt;/STRONG&gt; for a hands-on professional development session introducing &lt;STRONG&gt;Learning Zone&lt;/STRONG&gt; - a new app that helps you create interactive, classroom-ready lessons in minutes. In this&amp;nbsp;45-minute webinar, the Product Management team will guide you through core capabilities and the latest updates. You can follow along using your own Microsoft 365 Education account.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What we will cover:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;✅ &lt;STRONG&gt;Getting started with Learning Zone:&lt;/STRONG&gt; Access Learning Zone and get set up&lt;/P&gt;
&lt;P&gt;✅ &lt;STRONG&gt;Experience as a student:&lt;/STRONG&gt; Join a session and see how it works from the student perspective&lt;/P&gt;
&lt;P&gt;✅ &lt;STRONG&gt;Building your first interactive lesson: &lt;/STRONG&gt;Create your first interactive lesson (in minutes!)&amp;nbsp;&lt;/P&gt;
&lt;P&gt;✅ &lt;STRONG&gt;Assigning to your class:&lt;/STRONG&gt; Send lessons via link, short code, Teams Assignments, or your LMS&lt;/P&gt;
&lt;P&gt;✅ &lt;STRONG&gt;Exploring the ready-to-learn library: &lt;/STRONG&gt;Bring immediate value to your students through a variety of lessons from trusted partners.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Important note: &lt;/STRONG&gt;Lesson generation is currently available &lt;STRONG&gt;only on Copilot+ PCs&lt;/STRONG&gt; with any Microsoft 365 Education license (supported in English and Spanish). &lt;STRONG&gt;No Copilot+ PC?&lt;/STRONG&gt; No problem. You’ll still get to try out the student experience, learn how to use the lesson library, assign interactive lessons, review insights, and integrate Learning Zone into your existing workflows.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;📅 &lt;STRONG&gt;Date:&lt;/STRONG&gt; Tuesday, May 12th&lt;BR /&gt;⏰ &lt;STRONG&gt;Time:&lt;/STRONG&gt; 8:00 AM Pacific&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Register:&amp;nbsp;&lt;A class="lia-external-url" href="https://aka.ms/LZwebinarMay26" target="_blank" rel="noopener"&gt;https://aka.ms/LZwebinarMay26&lt;/A&gt;&lt;/STRONG&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We look forward to seeing you at the event!&lt;/P&gt;</description>
      <pubDate>Wed, 29 Apr 2026 15:15:15 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/education-blog/hands-on-session-from-idea-to-interactive-lesson-with-microsoft/ba-p/4515667</guid>
      <dc:creator>MikeTholfsen</dc:creator>
      <dc:date>2026-04-29T15:15:15Z</dc:date>
    </item>
    <item>
      <title>Microsoft Excel</title>
      <link>https://techcommunity.microsoft.com/t5/education/microsoft-execel/m-p/4515875#M906</link>
      <description>&lt;P&gt;&lt;EM&gt;Hello members, I am new here. Can anyone help me with Microsoft Word and Excel?&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 29 Apr 2026 12:42:19 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/education/microsoft-execel/m-p/4515875#M906</guid>
      <dc:creator>stuartabaho</dc:creator>
      <dc:date>2026-04-29T12:42:19Z</dc:date>
    </item>
    <item>
      <title>Build AI RAG Apps with LangChain, Azure DocumentDB and Microsoft Foundry: Step-by-Step Guide</title>
      <link>https://techcommunity.microsoft.com/t5/educator-developer-blog/build-ai-rag-apps-with-langchain-azure-documentdb-and-microsoft/ba-p/4513775</link>
      <description>&lt;H3&gt;Scenario&lt;/H3&gt;
&lt;P data-start="0" data-end="694" data-is-last-node="" data-is-only-node=""&gt;Imagine you are building your company’s RAG chat application using &lt;STRONG data-start="67" data-end="91"&gt;Microsoft Foundry - Azure OpenAI&lt;/STRONG&gt;&amp;nbsp;and orchestrating the flow with &lt;STRONG data-start="124" data-end="165"&gt;LangChain&lt;/STRONG&gt;. The chat experience works, but now it needs to be grounded in your company’s data. You generate embeddings and want to store and query them without adding another database or complex sync pipeline. Instead of stitching services together, you use &lt;STRONG data-start="413" data-end="462"&gt;Azure DocumentDB (with MongoDB compatibility)&lt;/STRONG&gt; with built-in vector search to store your JSON data and embeddings in one place. You deploy the app to &lt;STRONG data-start="566" data-end="587"&gt;Azure App Service&lt;/STRONG&gt; and quickly compare vector search alone versus a full RAG pipeline, sharing it with your team for testing.&lt;/P&gt;
&lt;H3 id="what-will-you-learn"&gt;What will you learn?&lt;/H3&gt;
&lt;P&gt;In this blog, you'll learn to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create an Azure DocumentDB (with MongoDB compatibility) resource.&lt;/LI&gt;
&lt;LI&gt;Create an embeddings deployment and a chat deployment in the Microsoft Foundry Azure OpenAI portal.&lt;/LI&gt;
&lt;LI&gt;Create an Azure App Service website with continuous deployment from GitHub.&lt;/LI&gt;
&lt;LI&gt;Configure Azure App Service application settings to enable communication between Azure resources.&lt;/LI&gt;
&lt;LI&gt;Configure the GitHub workflow so the deployment succeeds.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;What is the main objective?&lt;/H3&gt;
&lt;P&gt;Build an AI-powered RAG application using LangChain, Microsoft Foundry Azure OpenAI, and Azure DocumentDB (with MongoDB compatibility): Step-by-Step Guide&lt;/P&gt;
&lt;img /&gt;
&lt;H3&gt;Prerequisites&lt;/H3&gt;
&lt;UL&gt;
&lt;LI class="graf graf--p"&gt;An Azure subscription.
&lt;UL&gt;
&lt;LI&gt;If you don’t already have one, you can sign up for an&amp;nbsp;&lt;A class="markup--anchor markup--li-anchor" title="Sign up for an Azure free account" href="https://azure.microsoft.com/?wt.mc_id=studentamb_71460" target="_blank" rel="noopener noreferrer" data-href="https://azure.microsoft.com/"&gt;Azure free account&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;For students, you can use the free&amp;nbsp;&lt;A class="markup--anchor markup--li-anchor" href="https://aka.ms/Azure4StudentsActivate" target="_blank" rel="noopener noreferrer" data-href="https://aka.ms/Azure4StudentsActivate"&gt;Azure for Students offer&lt;/A&gt;,&amp;nbsp;which doesn’t require a credit card, only your school email.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;A GitHub account.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Summary of the steps:&lt;/H3&gt;
&lt;UL&gt;
&lt;LI data-unlink="true"&gt;Step 1: Create an Azure DocumentDB (with MongoDB compatibility) resource&lt;/LI&gt;
&lt;LI data-unlink="true"&gt;Step 2: Create a Microsoft Foundry - Azure OpenAI resource and Deploy chat and embedding Models&lt;/LI&gt;
&lt;LI data-unlink="true"&gt;Step 3: Create an Azure App Service and Deploy the RAG Chat Application&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 id="h_3686287661702581784350"&gt;Step 1: Create an Azure DocumentDB (with MongoDB compatibility) resource&lt;/H2&gt;
&lt;P&gt;In this step, you'll:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Open the&amp;nbsp;Azure Portal.&lt;/LI&gt;
&lt;LI&gt;Create an Azure DocumentDB (with MongoDB compatibility) resource.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 id="toc-hId-1515534585"&gt;Open the Azure Portal&lt;/H3&gt;
&lt;P class="graf graf--p"&gt;1. Visit the Azure Portal&amp;nbsp;&lt;A class="markup--anchor markup--p-anchor" href="https://portal.azure.com/?wt.mc_id=studentamb_71460" target="_blank" rel="noopener nofollow noreferrer" data-href="https://portal.azure.com"&gt;https://portal.azure.com&lt;/A&gt;&amp;nbsp;in your browser&amp;nbsp;and&amp;nbsp;sign in.&lt;/P&gt;
&lt;FIGURE class="graf graf--figure"&gt;&lt;img /&gt;&lt;/FIGURE&gt;
&lt;P class="graf graf--p"&gt;Now you are inside the&amp;nbsp;&lt;STRONG class="markup--strong markup--p-strong"&gt;Azure portal&lt;/STRONG&gt;!&lt;/P&gt;
&lt;FIGURE class="graf graf--figure"&gt;&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 id="toc-hId--291919878"&gt;Create a new Azure DocumentDB (with MongoDB compatibility) resource&lt;/H3&gt;
&lt;P&gt;In this step, you create an Azure DocumentDB (with MongoDB compatibility) resource to store your data and vector embeddings, and to perform vector search.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;1.&amp;nbsp;Type&amp;nbsp;&lt;EM&gt;documentdb&lt;/EM&gt;&amp;nbsp;in the&amp;nbsp;&lt;STRONG&gt;search bar&lt;/STRONG&gt;&amp;nbsp;at the top of the portal page and select&amp;nbsp;&lt;STRONG&gt;Azure DocumentDB (with MongoDB compatibility)&amp;nbsp;&lt;/STRONG&gt;from the available options.&lt;/P&gt;
&lt;/FIGURE&gt;
&lt;img /&gt;
&lt;FIGURE class="graf graf--figure"&gt;
&lt;P&gt;2. Select&amp;nbsp;&lt;STRONG&gt;Create&amp;nbsp;&lt;/STRONG&gt;from the toolbar to start provisioning your new cluster.&lt;/P&gt;
&lt;/FIGURE&gt;
&lt;img /&gt;
&lt;FIGURE class="graf graf--figure"&gt;
&lt;P&gt;3. Add the following information to create a resource:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; height: 258px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;&lt;STRONG&gt;What&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 35px;"&gt;&lt;STRONG&gt;Value&lt;/STRONG&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 59px;"&gt;&lt;td style="height: 59px;"&gt;Subscription&lt;/td&gt;&lt;td style="height: 59px;"&gt;Use your preferred subscription. It's advised to use the same subscription across all the resources that communicate with each other on Azure.&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;Resource group&lt;/td&gt;&lt;td style="height: 35px;"&gt;Select &lt;STRONG&gt;Create new&amp;nbsp;&lt;/STRONG&gt;to create a new resource group. Enter a unique name for the resource group.&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;Cluster name&lt;/td&gt;&lt;td style="height: 35px;"&gt;Enter a globally unique name.&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;Location&lt;/td&gt;&lt;td style="height: 35px;"&gt;Select a region close to you for the best response time. For example, Select&amp;nbsp;&lt;STRONG&gt;UK South&lt;/STRONG&gt;.&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 59px;"&gt;&lt;td style="height: 59px;"&gt;MongoDB version&lt;/td&gt;&lt;td style="height: 59px;"&gt;Select the latest available version of MongoDB&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/FIGURE&gt;
&lt;img /&gt;
&lt;FIGURE class="graf graf--figure"&gt;
&lt;P&gt;4. Select&amp;nbsp;&lt;STRONG&gt;Configure&lt;/STRONG&gt; to configure your cluster tier.&lt;/P&gt;
&lt;P&gt;5.&amp;nbsp;Add the following information to configure the cluster tier. You can scale it up later:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;What&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Value&lt;/STRONG&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Cluster tier&lt;/td&gt;&lt;td&gt;Select &lt;STRONG&gt;M25 &lt;/STRONG&gt;tier, 2 (Burstable) vCores.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Storage&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Select &lt;STRONG&gt;32 GiB&lt;/STRONG&gt;.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;6. Select &lt;STRONG&gt;Save&lt;/STRONG&gt;.&lt;/P&gt;
&lt;/FIGURE&gt;
&lt;img /&gt;
&lt;FIGURE class="graf graf--figure"&gt;
&lt;P&gt;7. Enter the cluster &lt;STRONG&gt;Admin&lt;/STRONG&gt;&amp;nbsp;&lt;STRONG&gt;Username&lt;/STRONG&gt; and &lt;STRONG&gt;Password&lt;/STRONG&gt; and store them in a secure location.&lt;/P&gt;
&lt;P&gt;8. Select &lt;STRONG&gt;Next&lt;/STRONG&gt; to configure the networking settings.&lt;/P&gt;
&lt;/FIGURE&gt;
&lt;img /&gt;
&lt;FIGURE class="graf graf--figure"&gt;
&lt;P&gt;9. Select &lt;STRONG&gt;Allow Public Access from Azure&lt;/STRONG&gt; services and resources within Azure to this cluster.&lt;/P&gt;
&lt;P&gt;10. Select &lt;STRONG&gt;Add current IP address&lt;/STRONG&gt; to the firewall rules to allow local access to the cluster.&lt;/P&gt;
&lt;P&gt;11. Select&lt;STRONG&gt; Review + create&lt;/STRONG&gt;.&lt;/P&gt;
&lt;/FIGURE&gt;
&lt;img /&gt;
&lt;FIGURE class="graf graf--figure"&gt;
&lt;P&gt;12.&amp;nbsp;Confirm your configuration settings and select&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt; to start provisioning the resource.&lt;/P&gt;
&lt;P&gt;Note: Cluster creation can take up to 10 minutes. You can continue with the remaining steps and return to this resource later.&lt;/P&gt;
&lt;/FIGURE&gt;
&lt;H2 id="h_282699227321702581811866"&gt;Step 2: Create a Microsoft Foundry - Azure OpenAI resource and Deploy chat and embedding Models&lt;/H2&gt;
&lt;P&gt;In this step, you'll:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create a Microsoft Foundry Azure OpenAI resource.&lt;/LI&gt;
&lt;LI&gt;Create chat and embedding model deployments.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Create an Azure OpenAI resource&lt;/H3&gt;
&lt;P&gt;In this step, you create an Azure OpenAI Service resource that enables you to interact with different large language models (LLMs).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;1.&amp;nbsp;Type&amp;nbsp;&lt;EM&gt;openai&lt;/EM&gt;&amp;nbsp;in the&amp;nbsp;&lt;STRONG&gt;search bar&lt;/STRONG&gt;&amp;nbsp;at the top of the portal page and select&amp;nbsp;&lt;STRONG&gt;Azure OpenAI&amp;nbsp;&lt;/STRONG&gt;from the available options.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;2. Select &lt;STRONG&gt;Create&lt;/STRONG&gt; from the toolbar then select Azure OpenAI to provision a new Azure OpenAI resource.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;3. Add the following information to create a resource:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;What&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Value&lt;/STRONG&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Subscription&lt;/td&gt;&lt;td&gt;Use the same subscription you used to apply for Azure OpenAI access.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Resource group&lt;/td&gt;&lt;td&gt;Use the resource group you created in the previous step.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Region&lt;/td&gt;&lt;td&gt;Select a region close to you for the best response time. For example, Select&amp;nbsp;&lt;STRONG&gt;UK South&lt;/STRONG&gt;.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Name&lt;/td&gt;&lt;td&gt;Enter a globally unique name.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Pricing tier&lt;/td&gt;&lt;td&gt;
&lt;DIV class="has-inner-focus"&gt;&amp;nbsp;Select &lt;STRONG&gt;S0&lt;/STRONG&gt;. Currently, this is the only available pricing tier.&lt;/DIV&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;img /&gt;
&lt;P&gt;4. Now that the basic information is added, select&amp;nbsp;&lt;STRONG&gt;Next&lt;/STRONG&gt;&amp;nbsp;to confirm your details and proceed to the next page.&lt;/P&gt;
&lt;P&gt;5. Select&amp;nbsp;&lt;STRONG&gt;Next&lt;/STRONG&gt;&amp;nbsp;to confirm your network details.&lt;/P&gt;
&lt;P&gt;6. Select&amp;nbsp;&lt;STRONG&gt;Next&lt;/STRONG&gt; to confirm your tag details.&lt;/P&gt;
&lt;P&gt;7. Confirm your configuration settings and select&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt; to start provisioning the resource. Wait for the deployment to finish.&lt;/P&gt;
&lt;P&gt;8. After the deployment finishes, select&amp;nbsp;&lt;STRONG&gt;Go to resource&lt;/STRONG&gt; to inspect your created resource. Here, you can manage your resource and find important information like the endpoint URL and API keys.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Create chat and embedding model deployments&lt;/H3&gt;
&lt;P&gt;In this step, you create an Azure OpenAI embedding model deployment and a chat model deployment.&amp;nbsp;Creating a deployment on your previously provisioned resource allows you to generate text embeddings (i.e., numerical representations of text) and have a natural language conversation with your data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;1. Select&amp;nbsp;&lt;STRONG&gt;Go to Foundry portal&amp;nbsp;&lt;/STRONG&gt;from the toolbar to open the Microsoft Foundry portal.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;2. Select&amp;nbsp;&lt;STRONG&gt;Deployments&amp;nbsp;&lt;/STRONG&gt;from the &lt;STRONG&gt;Shared resources&lt;/STRONG&gt; left side menu to go to the deployments tab.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;3. Select&amp;nbsp;&lt;STRONG&gt;+ Deploy model &lt;/STRONG&gt;from the toolbar then select&lt;STRONG&gt; Deploy base model&lt;/STRONG&gt; from the options. A&lt;STRONG&gt;&amp;nbsp;Deploy model&lt;/STRONG&gt; window opens.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;4. Type&amp;nbsp;&lt;EM&gt;gpt-4o-mini&amp;nbsp;&lt;/EM&gt;to search for the model then select it then select&lt;STRONG&gt; Use model&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;5. Select &lt;STRONG&gt;Continue with existing setup&lt;/STRONG&gt; to proceed to the next step.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;6. Refresh the page, repeat the previous steps to select the model, and then select &lt;STRONG&gt;Confirm&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;7. Review selected options then select &lt;STRONG&gt;Deploy&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;8. Select&amp;nbsp;&lt;STRONG&gt;+ Deploy model &lt;/STRONG&gt;from the toolbar then select&lt;STRONG&gt; Deploy base model&lt;/STRONG&gt; from the options. A&lt;STRONG&gt;&amp;nbsp;Deploy model&lt;/STRONG&gt; window opens.&lt;/P&gt;
&lt;P&gt;9. Type&amp;nbsp;&lt;EM&gt;text-embedding-3-small&lt;/EM&gt; to search for the model, select it, and then select&lt;STRONG&gt; Confirm&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;10. Review selected options then select &lt;STRONG&gt;Deploy&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
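Once both deployments exist, the application calls the embeddings deployment through the resource's REST endpoint. A hedged sketch of how that request is shaped; the endpoint and key below are placeholders, and the request is only built, not sent, so nothing reaches the network when this runs:

```python
import json
import urllib.request

def embeddings_request(endpoint, deployment, api_key, text, api_version="2024-10-21"):
    """Build (but do not send) an Azure OpenAI embeddings REST request."""
    url = f"{endpoint}/openai/deployments/{deployment}/embeddings?api-version={api_version}"
    body = json.dumps({"input": text}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,  # attaching a body makes this a POST request
        headers={"Content-Type": "application/json", "api-key": api_key},
    )

req = embeddings_request(
    "https://my-openai-resource.openai.azure.com",  # placeholder endpoint
    "text-embedding-3-small",                       # the deployment created above
    "placeholder-key",                              # read this from configuration, never hard-code it
    "Falafel is a deep-fried chickpea patty.",
)
# Sending it would be: urllib.request.urlopen(req), which returns JSON
# whose data[0].embedding field holds the vector.
```

In practice the sample app delegates this call to LangChain's Azure OpenAI integration; the sketch just shows which endpoint, deployment name, and api-version the settings in Step 3 must agree on.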
&lt;H2 id="h_637567739611702581961458"&gt;Step 3: Create an Azure App Service and&amp;nbsp;Deploy the RAG Chat Application&lt;/H2&gt;
&lt;DIV&gt;
&lt;P&gt;In this step, you'll:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Fork the sample repository on GitHub.&lt;/LI&gt;
&lt;LI&gt;Create an Azure App Service resource with a deployment from GitHub.&lt;/LI&gt;
&lt;LI&gt;Modify Azure App Service Application settings in the Azure portal.&lt;/LI&gt;
&lt;LI&gt;Configure the workflow to deploy your application from GitHub.&lt;/LI&gt;
&lt;LI&gt;Test the website before and after adding the data.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Fork the Sample Repository on GitHub&lt;/H3&gt;
&lt;P&gt;In this step, you create a copy of the source code under your GitHub account so you can edit it and use it later.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;1. Visit the sample&amp;nbsp;&lt;A class="lia-external-url" href="https://github.com/Azure-Samples/Cosmic-Food-RAG-app?wt.mc_id=studentamb_71460" target="_blank" rel="noopener noreferrer"&gt;github.com/Azure-Samples/Cosmic-Food-RAG-app&lt;/A&gt; in your browser and sign in.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;2. Select&amp;nbsp;&lt;STRONG&gt;Fork&amp;nbsp;&lt;/STRONG&gt;from the top of the sample page.&lt;/P&gt;
&lt;P&gt;3. Select an owner for the fork, then select&amp;nbsp;&lt;STRONG&gt;Create fork&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;H3&gt;Create an Azure App Service resource with a deployment from GitHub&lt;/H3&gt;
&lt;P&gt;In this step, you create an Azure App Service resource and connect it with your GitHub account to deploy a Python application.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;1.&amp;nbsp;Type &lt;EM&gt;app service&amp;nbsp;&lt;/EM&gt;in the&amp;nbsp;&lt;STRONG&gt;search bar&lt;/STRONG&gt;&amp;nbsp;at the top of the portal page and select&amp;nbsp;&lt;STRONG&gt;App Services&amp;nbsp;&lt;/STRONG&gt;from the available options.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;2. Select &lt;STRONG&gt;Create Web App&lt;/STRONG&gt; from the toolbar to start provisioning a new web application.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;3.&amp;nbsp;&amp;nbsp;Add the following information to fill in the basic configuration of the application:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;What&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Value&lt;/STRONG&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Subscription&lt;/td&gt;&lt;td&gt;Use the same subscription you used to apply for Azure OpenAI access.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Resource group&lt;/td&gt;&lt;td&gt;Use the same resource group you created before.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Name&lt;/td&gt;&lt;td&gt;Enter a unique name for your website. For example,&amp;nbsp;&lt;STRONG&gt;cosmic-food-rag&lt;/STRONG&gt;.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Publish?&lt;/td&gt;&lt;td&gt;Select&amp;nbsp;&lt;STRONG&gt;Code&lt;/STRONG&gt;. This option specifies whether your deployment consists of code or a container.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Runtime stack&lt;/td&gt;&lt;td&gt;Select &lt;STRONG&gt;Python 3.12&lt;/STRONG&gt;.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Operating System&lt;/td&gt;&lt;td&gt;Select&amp;nbsp;&lt;STRONG&gt;Linux&lt;/STRONG&gt;.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Region&lt;/td&gt;&lt;td&gt;Select&amp;nbsp;&lt;STRONG&gt;UK South&lt;/STRONG&gt;. This is the region where the rest of the resources you created reside.&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;img /&gt;&lt;/DIV&gt;
&lt;P&gt;4. Add the following information to create the app service plan. You can scale it up later:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 62.87037%; height: 133px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;&lt;STRONG&gt;What&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 35px;"&gt;&lt;STRONG&gt;Value&lt;/STRONG&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 59px;"&gt;&lt;td style="height: 59px;"&gt;Linux Plan&lt;/td&gt;&lt;td style="height: 59px;"&gt;Select a pre-existing plan or create a new plan.&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 39px;"&gt;&lt;td style="height: 39px;"&gt;Pricing Plan&lt;/td&gt;&lt;td style="height: 39px;"&gt;
&lt;P&gt;&amp;nbsp;Select &lt;STRONG&gt;Basic B1&lt;/STRONG&gt;.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 49.926362%" /&gt;&lt;col style="width: 49.926362%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;img /&gt;&lt;/DIV&gt;
&lt;P&gt;5. Select &lt;STRONG&gt;Deployment&lt;/STRONG&gt; from the toolbar to move to the deployment configuration tab.&lt;/P&gt;
&lt;P&gt;6. Add the following information to enable continuous deployment from GitHub:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; height: 234px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;What&lt;/td&gt;&lt;td style="height: 35px;"&gt;Value&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;Continuous deployment&lt;/td&gt;&lt;td style="height: 35px;"&gt;Select&lt;STRONG&gt;&amp;nbsp;Enable&lt;/STRONG&gt;.&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;GitHub account&lt;/td&gt;&lt;td style="height: 35px;"&gt;Select your GitHub account.&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 59px;"&gt;&lt;td style="height: 59px;"&gt;Organization&lt;/td&gt;&lt;td style="height: 59px;"&gt;Select your organization. If you are using your personal account then select it.&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;Repository&lt;/td&gt;&lt;td style="height: 35px;"&gt;Select&lt;STRONG&gt; Cosmic-Food-RAG-app&lt;/STRONG&gt;.&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;Branch&lt;/td&gt;&lt;td style="height: 35px;"&gt;Select&amp;nbsp;&lt;STRONG&gt;main&lt;/STRONG&gt;.&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;img /&gt;&lt;/DIV&gt;
&lt;P&gt;7. Select &lt;STRONG&gt;Review + create&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;8. Confirm your configuration settings and select&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt; to start provisioning the resource. Wait for the deployment to finish.&lt;/P&gt;
&lt;P&gt;9. After the deployment finishes, select&amp;nbsp;&lt;STRONG&gt;Go to resource&lt;/STRONG&gt; to inspect your created resource. Here, you can manage your resource and find important information like the application settings and logs.&lt;/P&gt;
&lt;img /&gt;
&lt;H3 id="toc-hId--503538492"&gt;Modify Azure App service Application settings in the Azure portal&lt;/H3&gt;
In this step, you configure the application settings so the website can communicate with the other cloud resources.&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;1. In the Web App resource, select&amp;nbsp;&lt;STRONG&gt;Environment variables&lt;/STRONG&gt;&amp;nbsp;from the left side menu.&lt;/DIV&gt;
&lt;DIV&gt;&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV&gt;2. Select&amp;nbsp;&lt;STRONG&gt;+ Add&lt;/STRONG&gt;&amp;nbsp;to add new environment variables to the web app configuration.&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;3. Add the following names and values one by one and select&amp;nbsp;&lt;STRONG&gt;OK&lt;/STRONG&gt;. Make sure to use your own values.&lt;/DIV&gt;
&lt;DIV&gt;These application settings are for the Azure OpenAI resources that you created:&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 77.12963%; height: 210px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;&lt;STRONG&gt;What&lt;/STRONG&gt;&lt;/td&gt;&lt;td style="height: 35px;"&gt;&lt;STRONG&gt;Value&lt;/STRONG&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;OPENAI_API_VERSION&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;2024-10-21&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;AZURE_OPENAI_CHAT_DEPLOYMENT_NAME&lt;/td&gt;&lt;td style="height: 35px;"&gt;
&lt;P&gt;gpt-4o-mini&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AZURE_OPENAI_CHAT_MODEL_NAME&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;gpt-4o-mini&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME&lt;/td&gt;&lt;td style="height: 35px;"&gt;
&lt;P&gt;text-embedding-3-small&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AZURE_OPENAI_EMBEDDINGS_MODEL_NAME&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;text-embedding-3-small&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AZURE_OPENAI_EMBEDDINGS_DIMENSIONS&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1536&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;AZURE_OPENAI_DEPLOYMENT_NAME&lt;/td&gt;&lt;td style="height: 35px;"&gt;&amp;lt;azureOpenAiResourceName&amp;gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;AZURE_OPENAI_ENDPOINT&lt;/td&gt;&lt;td style="height: 35px;"&gt;https://&amp;lt;azureOpenAiResourceName&amp;gt;.openai.azure.com/&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 35px;"&gt;&lt;td style="height: 35px;"&gt;AZURE_OPENAI_API_KEY&lt;/td&gt;&lt;td style="height: 35px;"&gt;&amp;lt;azureOpenAiResourceKey&amp;gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 49.939976%" /&gt;&lt;col style="width: 49.939976%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;You can get the Azure OpenAI key from the Azure OpenAI resource page.&lt;/DIV&gt;
&lt;DIV&gt;Select &lt;STRONG&gt;Keys and Endpoint&lt;/STRONG&gt;&amp;nbsp;from the &lt;STRONG&gt;Resource Management&lt;/STRONG&gt; section and copy any of the available keys.&lt;/DIV&gt;
&lt;DIV&gt;&lt;img /&gt;&lt;/DIV&gt;
&lt;DIV&gt;These application settings are for Azure DocumentDB (with MongoDB compatibility):&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AZURE_COSMOS_USERNAME&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;&amp;lt;documentUsername&amp;gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AZURE_COSMOS_PASSWORD&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;&amp;lt;documentPassword&amp;gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AZURE_COSMOS_CONNECTION_STRING&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;mongodb+srv://&amp;lt;user&amp;gt;:&amp;lt;password&amp;gt;@&amp;lt;clusterName&amp;gt;.global.mongocluster.cosmos.azure.com/?tls=true&amp;amp;authMechanism=SCRAM-SHA-256&amp;amp;retrywrites=false&amp;amp;maxIdleTimeMS=120000&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 30%" /&gt;&lt;col style="width: 70%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;You can get the DocumentDB connection string from the Azure DocumentDB (with MongoDB compatibility) resource page.&lt;/DIV&gt;
&lt;DIV&gt;Select &lt;STRONG&gt;Connection strings&amp;nbsp;&lt;/STRONG&gt;and copy the connection string. Make sure to replace the user and password with the ones you created.&lt;/DIV&gt;
&lt;DIV&gt;&lt;img /&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
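One common pitfall when replacing the user and password in that string: if the admin password contains characters such as @ or :, it must be percent-encoded or the URI fails to parse. A small sketch of assembling the connection string safely; the cluster name and credentials below are placeholders:

```python
from urllib.parse import quote_plus, urlencode

def build_connection_string(user, password, cluster):
    """Assemble the DocumentDB (MongoDB vCore) URI with percent-encoded credentials."""
    # The query parameters shown on the portal's Connection strings page.
    params = urlencode({
        "tls": "true",
        "authMechanism": "SCRAM-SHA-256",
        "retrywrites": "false",
        "maxIdleTimeMS": "120000",
    })
    host = f"{cluster}.global.mongocluster.cosmos.azure.com"
    # quote_plus turns characters like @ and : into %40 and %3A so the
    # URI parser does not mistake them for structural delimiters.
    return f"mongodb+srv://{quote_plus(user)}:{quote_plus(password)}@{host}/?{params}"

uri = build_connection_string("admin_user", "p@ss:word", "my-cluster")  # placeholders
# Pass `uri` to pymongo.MongoClient, or use it as the value of
# the AZURE_COSMOS_CONNECTION_STRING setting above.
```

The same encoding rule applies whether the string is pasted into the portal setting or built in code.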
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;These application settings are &lt;STRONG&gt;new&lt;/STRONG&gt; and are used for resources that are created when the application starts. You can use any value for them:&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AZURE_COSMOS_DATABASE_NAME&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;&amp;lt;documentDatabaseName&amp;gt; ex.&amp;nbsp;
&lt;P&gt;CosmicDB&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AZURE_COSMOS_COLLECTION_NAME&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;&amp;lt;documentContainerName&amp;gt; ex.&amp;nbsp;
&lt;P&gt;CosmicFoodCollection&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AZURE_COSMOS_INDEX_NAME&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;&amp;lt;documentIndexName&amp;gt; ex.&amp;nbsp;
&lt;P&gt;CosmicIndex&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;4. Select&amp;nbsp;&lt;STRONG&gt;Apply&amp;nbsp;&lt;/STRONG&gt;to save your newly added environment variables.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;5. Select &lt;STRONG&gt;Configuration &lt;/STRONG&gt;then &lt;STRONG&gt;Stack settings&lt;/STRONG&gt; to edit the application startup command.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;6. Type&amp;nbsp;&lt;EM&gt;entrypoint.sh&lt;/EM&gt; in the startup command field then select &lt;STRONG&gt;Apply&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;H3&gt;Configure the Workflow to deploy your application from GitHub&lt;/H3&gt;
In this step, you modify the GitHub deployment workflow to point to the folder that contains the application.&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;1. Visit your forked repository on GitHub and notice the failing workflow.&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;2. Open the workflow file&amp;nbsp;&lt;EM&gt;.github/workflows/main_cosmic-food-rag.yml&lt;/EM&gt;.&lt;/DIV&gt;
&lt;DIV&gt;&lt;img /&gt;
&lt;P&gt;3.&amp;nbsp;&amp;nbsp;Open the file and select the pen icon to edit it.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;4. Modify line 41 from &lt;EM&gt;.&lt;/EM&gt; to &lt;EM&gt;src/.&lt;/EM&gt;&lt;/P&gt;
&lt;img /&gt;&lt;/DIV&gt;
&lt;P&gt;5. Remove the optional&amp;nbsp;&lt;STRONG&gt;Local Build Section&amp;nbsp;&lt;/STRONG&gt;since the application already has tests that cover this part.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;6. Add this section to install Node 22 and build the static frontend.&lt;/P&gt;
&lt;img /&gt;
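&lt;P&gt;The section in question installs Node 22 and builds the frontend; it is the same block that appears in the complete workflow file shown later in this step:&lt;/P&gt;

```yaml
- name: Set up Node 22
  uses: actions/setup-node@v6
  with:
    node-version: 22

- name: Install Node Packages & Build Static Site
  run: cd frontend && npm install && npm run build
```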
&lt;DIV&gt;
&lt;P&gt;7. Select &lt;STRONG&gt;Commit changes&lt;/STRONG&gt;, and review your commit message and description. Select&amp;nbsp;&lt;STRONG&gt;Commit changes&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;The final workflow file should look like this:&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;# Docs for the Azure Web Apps Deploy action: https://github.com/Azure/webapps-deploy
# More GitHub Actions for Azure: https://github.com/Azure/actions
# More info on Python, GitHub Actions, and Azure App Service: https://aka.ms/python-webapps-actions

name: Build and deploy Python app to Azure Web App - cosmic-food-rag

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read #This is required for actions/checkout

    steps:
      - uses: actions/checkout@v4

      - name: Set up Node 22
        uses: actions/setup-node@v6
        with:
          node-version: 22

      - name: Install Node Packages &amp;amp; Build Static Site
        run: cd frontend &amp;amp;&amp;amp; npm install &amp;amp;&amp;amp; npm run build

      # By default, when you enable GitHub CI/CD integration through the Azure portal, the platform automatically sets the SCM_DO_BUILD_DURING_DEPLOYMENT application setting to true. This triggers the use of Oryx, a build engine that handles application compilation and dependency installation (e.g., pip install) directly on the platform during deployment. Hence, we exclude the antenv virtual environment directory from the deployment artifact to reduce the payload size. 
      - name: Upload artifact for deployment jobs
        uses: actions/upload-artifact@v4
        with:
          name: python-app
          path: |
            src/
            !antenv/

      # 🚫 Opting Out of Oryx Build
      # If you prefer to disable the Oryx build process during deployment, follow these steps:
      # 1. Remove the SCM_DO_BUILD_DURING_DEPLOYMENT app setting from your Azure App Service Environment variables.
      # 2. Refer to sample workflows for alternative deployment strategies: https://github.com/Azure/actions-workflow-samples/tree/master/AppService
      

  deploy:
    runs-on: ubuntu-latest
    needs: build
    permissions:
      id-token: write #This is required for requesting the JWT
      contents: read #This is required for actions/checkout

    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v4
        with:
          name: python-app
      
      - name: Login to Azure
        uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZUREAPPSERVICE_CLIENTID_5672547ED09F46D59DD431ACF5A29F28 }}
          tenant-id: ${{ secrets.AZUREAPPSERVICE_TENANTID_0059913572C8467882D3999D0E0DD5B8 }}
          subscription-id: ${{ secrets.AZUREAPPSERVICE_SUBSCRIPTIONID_7C42E3352C5D47F084CB0CD14F549D27 }}

      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v3
        id: deploy-to-webapp
        with:
          app-name: 'cosmic-food-rag'
          slot-name: 'Production'
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
8. Select &lt;STRONG&gt;Actions&lt;/STRONG&gt; to review the workflow run status.&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;img /&gt;
&lt;H3&gt;Test the website before and after adding the data&lt;/H3&gt;
In this step, you test the application before adding the data, add the data, and test again.&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;1. Select the workflow name to open it and get the website URL.&lt;/DIV&gt;
&lt;DIV&gt;&lt;img /&gt;
&lt;P&gt;2. Select any of the suggested messages or type your own; the app should respond with &lt;EM&gt;No results found&lt;/EM&gt; because no data has been added yet.&lt;/P&gt;
&lt;img /&gt;3. Navigate to your Azure App Service resource page and select &lt;STRONG&gt;SSH&lt;/STRONG&gt;, then select &lt;STRONG&gt;Go&lt;/STRONG&gt; to open a new SSH session.&lt;/DIV&gt;
&lt;DIV&gt;&lt;img /&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;P&gt;4. In the SSH terminal, run these commands:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;CODE&gt;uv sync --active&lt;/CODE&gt;&lt;/P&gt;
&lt;P&gt;&lt;CODE&gt;uv run --active ./scripts/add_data.py --file="./data/food_items.json"&lt;/CODE&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 100.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;img /&gt;
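&lt;P&gt;As a rough illustration of what a loading script of this shape does (a hypothetical sketch, not the repository's actual &lt;EM&gt;add_data.py&lt;/EM&gt;; the item fields are invented):&lt;/P&gt;

```python
import json
import os
import tempfile

def load_items(path: str) -> list[dict]:
    """Read the JSON file of food items before they are inserted into the database."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# Throwaway file standing in for data/food_items.json (fields are invented):
demo = [
    {"name": "Falafel Wrap", "vegan": True},
    {"name": "Chicken Shawarma", "vegan": False},
]
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump(demo, tmp)
    path = tmp.name

items = load_items(path)
vegan = [item["name"] for item in items if item["vegan"]]
print(vegan)  # ['Falafel Wrap']
os.unlink(path)
```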
&lt;P&gt;5. Navigate back to the live website and type the chat message&amp;nbsp;&lt;EM&gt;Do you have any vegan food dishes?&lt;/EM&gt; This time, it should respond with a correct, grounded answer.&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Congratulations! You have successfully built the full application.&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV&gt;
&lt;H2 id="toc-hId-1503395954"&gt;Clean Up&lt;/H2&gt;
&lt;P&gt;Once you finish experimenting on&amp;nbsp;Microsoft Azure, you may want to delete the resources so they no longer incur charges against your subscription.&lt;/P&gt;
&lt;P&gt;You can delete the resource group, which removes everything inside it, or delete the resources one by one; that's entirely up to you.&lt;/P&gt;
&lt;/DIV&gt;
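&lt;P&gt;For example, you can delete the whole resource group with the Azure CLI (the group name below is a placeholder for your own):&lt;/P&gt;

```shell
# Deletes the resource group and every resource inside it.
# --no-wait returns immediately instead of blocking until deletion finishes.
az group delete --name my-rag-resource-group --yes --no-wait
```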
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Congratulations! You've learned how to create an Azure DocumentDB (with MongoDB compatibility) cluster, how to create a Microsoft Foundry - Azure OpenAI resource, how to deploy an embedding model and a chat model from the Foundry portal, how to create an Azure App Service and configure continuous deployment with GitHub, and how to modify application settings to enable communication across Azure resources. With these technologies, you can build a RAG chat application that performs vector search over your own data and returns grounded, relevant responses.&lt;/P&gt;
&lt;H2&gt;Next steps&lt;/H2&gt;
&lt;H3&gt;Documentation&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/foundry/foundry-models/concepts/models-sold-directly-by-azure?wt.mc_id=studentamb_71460&amp;amp;tabs=global-standard-aoai%2Cglobal-standard&amp;amp;pivots=azure-openai#azure-openai-in-microsoft-foundry-models" target="_blank" rel="noopener"&gt;Azure OpenAI in Microsoft Foundry models&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/ai-services/openai/concepts/understand-embeddings?wt.mc_id=studentamb_71460" target="_blank" rel="noopener"&gt;Understand embeddings in Azure OpenAI in Microsoft Foundry Models (classic)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/documentdb/overview?wt.mc_id=studentamb_71460" target="_blank" rel="noopener"&gt;Azure DocumentDB (with MongoDB compatibility) documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-gb/azure/documentdb/vector-search?tabs=diskann?wt.mc_id=studentamb_71460" target="_blank" rel="noopener"&gt;Integrated vector store in Azure DocumentDB&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://docs.langchain.com/oss/python/langchain/overview/?wt.mc_id=studentamb_71460" target="_blank" rel="noopener"&gt;LangChain Python documentation&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Training Content&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/training/paths/develop-generative-ai-apps/?wt.mc_id=studentamb_71460" target="_blank" rel="noopener"&gt;Develop generative AI apps in Azure&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="graf graf--p"&gt;Found this useful? Share it with others and follow me to get updates on:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Twitter (&lt;A href="https://twitter.com/john00isaac?wt.mc_id=studentamb_71460" target="_blank" rel="nofollow noopener noreferrer"&gt;twitter.com/john00isaac&lt;/A&gt;)&lt;/LI&gt;
&lt;LI&gt;LinkedIn (&lt;A href="https://www.linkedin.com/in/john0isaac/?wt.mc_id=studentamb_71460" target="_blank" rel="nofollow noopener noreferrer"&gt;linkedin.com/in/john0isaac&lt;/A&gt;)&lt;/LI&gt;
&lt;/UL&gt;
&lt;BLOCKQUOTE class="graf graf--pullquote"&gt;Feel free to share your comments and/or inquiries in the comment section below.
&lt;P class="1702586402308"&gt;See you in future&amp;nbsp;demos!&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;</description>
      <pubDate>Mon, 27 Apr 2026 08:41:21 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/educator-developer-blog/build-ai-rag-apps-with-langchain-azure-documentdb-and-microsoft/ba-p/4513775</guid>
      <dc:creator>JohnAziz</dc:creator>
      <dc:date>2026-04-27T08:41:21Z</dc:date>
    </item>
    <item>
      <title>From Demo to Production: Building Microsoft Foundry Hosted Agents with .NET</title>
      <link>https://techcommunity.microsoft.com/t5/educator-developer-blog/from-demo-to-production-building-microsoft-foundry-hosted-agents/ba-p/4513718</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;The Gap Between a Demo and a Production Agent&lt;/H2&gt;
&lt;P&gt;Let's be honest. Getting an AI agent to work in a demo takes an afternoon. Getting it to work reliably in production (tested, containerised, deployed, observable, and maintainable by a team) is a different problem entirely.&lt;/P&gt;
&lt;P&gt;Most tutorials stop at the point where the agent prints a response in a terminal. They don't show you how to structure your code, cover your tools with tests, wire up CI, or deploy to a managed runtime with a proper lifecycle. That gap between prototype and production is where developer teams lose weeks.&lt;/P&gt;
&lt;P&gt;Microsoft Foundry Hosted Agents close that gap with a managed container runtime for your own custom agent code. And the &lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_dotNET" target="_blank" rel="noopener"&gt;Hosted Agents Workshop for .NET&lt;/A&gt; gives you a complete, copy-paste-friendly path through the entire journey, from local run to deployed agent to chat UI, in six structured labs using .NET 10.&lt;/P&gt;
&lt;P&gt;This post walks you through what the workshop delivers, what you will build, and why the patterns it teaches matter far beyond the workshop itself.&lt;/P&gt;
&lt;H2&gt;What Is a Microsoft Foundry Hosted Agent?&lt;/H2&gt;
&lt;P&gt;Microsoft Foundry supports two distinct agent types, and understanding the difference is the first decision you will make as an agent developer.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Prompt agents&lt;/STRONG&gt; are lightweight agents backed by a model deployment and a system prompt. No custom code required. Ideal for simple Q&amp;amp;A, summarisation, or chat scenarios where the model's built-in reasoning is sufficient.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Hosted agents&lt;/STRONG&gt; are container-based agents that run &lt;EM&gt;your own code&lt;/EM&gt; (.NET, Python, or any framework you choose) inside Foundry's managed runtime. You control the logic, the tools, the data access, and the orchestration.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;When your scenario requires custom tool integrations, deterministic business logic, multi-step workflow orchestration, or private API access, a hosted agent is the right choice. The Foundry runtime handles the managed infrastructure; you own the code.&lt;/P&gt;
&lt;P&gt;For the official deployment reference, see &lt;A href="https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/deploy-hosted-agent" target="_blank" rel="noopener"&gt;Deploy a hosted agent to Foundry Agent Service&lt;/A&gt; on Microsoft Learn.&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;What the Workshop Delivers&lt;/H2&gt;
&lt;P&gt;The &lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_dotNET" target="_blank" rel="noopener"&gt;Hosted Agents Workshop for .NET&lt;/A&gt; is a beginner-friendly, hands-on workshop that takes you through the full development and deployment path for a real hosted agent. It is structured around a concrete scenario: a &lt;STRONG&gt;Hosted Agent Readiness Coach&lt;/STRONG&gt; that helps delivery teams answer questions like:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Should this use case start as a prompt agent or a hosted agent?&lt;/LI&gt;
&lt;LI&gt;What should a pilot launch checklist include?&lt;/LI&gt;
&lt;LI&gt;How should a team troubleshoot common early setup problems?&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The scenario is purposefully practical. It is not a toy chatbot. It is the kind of tool a real team would build and hand to other engineers, which means it needs to be testable, deployable, and extensible.&lt;/P&gt;
&lt;P&gt;The workshop covers:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Local development and validation with .NET 10&lt;/LI&gt;
&lt;LI&gt;Copilot-assisted coding with repo-specific instructions&lt;/LI&gt;
&lt;LI&gt;Deterministic tool implementation with xUnit test coverage&lt;/LI&gt;
&lt;LI&gt;CI pipeline validation with GitHub Actions&lt;/LI&gt;
&lt;LI&gt;Secure deployment to Azure Container Registry and Microsoft Foundry&lt;/LI&gt;
&lt;LI&gt;Chat UI integration using Blazor&lt;/LI&gt;
&lt;/UL&gt;
&lt;HR /&gt;
&lt;H2&gt;What You Will Build&lt;/H2&gt;
&lt;P&gt;By the end of the workshop, you will have a code-based hosted agent that exposes an OpenAI Responses-compatible &lt;CODE&gt;/responses&lt;/CODE&gt; endpoint on port &lt;CODE&gt;8088&lt;/CODE&gt;.&lt;/P&gt;
&lt;P&gt;The agent is backed by three deterministic local tools, implemented in &lt;CODE&gt;WorkshopLab.Core&lt;/CODE&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;RecommendImplementationShape&lt;/STRONG&gt; — analyses a scenario and recommends hosted or prompt agent based on its requirements&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;BuildLaunchChecklist&lt;/STRONG&gt; — generates a pilot launch checklist for a given use case&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;TroubleshootHostedAgent&lt;/STRONG&gt; — returns structured troubleshooting guidance for common setup problems&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These tools are deterministic by design: no LLM call is required to return a result. That choice makes them fast, predictable, and fully testable, which is the right architecture for business logic in a production agent.&lt;/P&gt;
&lt;P&gt;The end-to-end architecture looks like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;H2&gt;The Hands-On Journey: Lab by Lab&lt;/H2&gt;
&lt;P&gt;The workshop follows a deliberate &lt;STRONG&gt;build → validate → ship&lt;/STRONG&gt; progression. Each lab has a clear outcome. You do not move forward until the previous checkpoint passes.&lt;/P&gt;
&lt;H3&gt;Lab 0 — Setup and Local Run&lt;/H3&gt;
&lt;P&gt;Open the repo in VS Code or a GitHub Codespace, configure your Microsoft Foundry project endpoint and model deployment name, then run the agent locally. By the end of Lab 0, your agent is listening on &lt;CODE&gt;http://localhost:8088/responses&lt;/CODE&gt; and responding to test requests.&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;dotnet restore
dotnet build
dotnet run --project src/WorkshopLab.AgentHost&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Test it with a single PowerShell call:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;Invoke-RestMethod -Method Post `
    -Uri "http://localhost:8088/responses" `
    -ContentType "application/json" `
    -Body '{"input":"Should we start with a hosted agent or a prompt agent?"}'&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_dotNET/blob/main/labs/lab-0-foundry-setup/lab-0_readme.md" target="_blank" rel="noopener"&gt;Lab 0 instructions →&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;Lab 1 — Copilot Customisation&lt;/H3&gt;
&lt;P&gt;Configure repo-specific GitHub Copilot instructions so that Copilot understands the hosted-agent patterns used in this project. You will also add a Copilot review skill tailored to hosted agent code reviews. This step means every code suggestion you receive from Copilot is contextualised to the workshop scenario rather than being generic .NET advice.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_dotNET/blob/main/labs/lab-1-copilot-config/lab-1_readme.md" target="_blank" rel="noopener"&gt;Lab 1 instructions →&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;Lab 2 — Tool Implementation&lt;/H3&gt;
&lt;P&gt;Extend one of the deterministic tools in &lt;CODE&gt;WorkshopLab.Core&lt;/CODE&gt; with a real feature change. The suggested change adds a stronger recommendation path to &lt;CODE&gt;RecommendImplementationShape&lt;/CODE&gt; for scenarios that require all three hosted-agent strengths simultaneously.&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;// In RecommendImplementationShape — add before the final return:
if (requiresCode &amp;amp;&amp;amp; requiresTools &amp;amp;&amp;amp; requiresWorkflow)
{
    return string.Join(Environment.NewLine,
    [
        $"Recommended implementation: Hosted agent (full-stack)",
        $"Scenario goal: {goal}",
        "Why: the scenario requires custom code, external tool access, and " +
        "multi-step orchestration — all three hosted-agent strengths.",
        "Suggested next step: start with a code-based hosted agent, register " +
        "local tools for each integration, and add a workflow layer."
    ]);
}&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;You then write an xUnit test to cover it, run &lt;CODE&gt;dotnet test&lt;/CODE&gt;, and validate the change against a live &lt;CODE&gt;/responses&lt;/CODE&gt; call. This is the workshop's most important teaching moment: &lt;STRONG&gt;every tool change is covered by a test before it ships&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_dotNET/blob/main/labs/lab-2-implementation-shape/lab-2_readme.md" target="_blank" rel="noopener"&gt;Lab 2 instructions →&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;Lab 3 — CI Validation&lt;/H3&gt;
&lt;P&gt;Wire up a GitHub Actions workflow that builds the solution, runs the test suite, and validates that the agent container builds cleanly. No manual steps — if a change breaks the build or a test, CI catches it before any deployment happens.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_dotNET/blob/main/labs/lab-3-ci/lab-3_readme.md" target="_blank" rel="noopener"&gt;Lab 3 instructions →&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;Lab 4 — Deployment to Microsoft Foundry&lt;/H3&gt;
&lt;P&gt;Use the Azure Developer CLI (&lt;CODE&gt;azd&lt;/CODE&gt;) to provision an Azure Container Registry, publish the agent image, and deploy the hosted agent to Microsoft Foundry. The workshop separates provisioning from deployment deliberately: &lt;CODE&gt;azd&lt;/CODE&gt; owns the Azure resources; the Foundry control plane deployment is an explicit, intentional final step that depends on your real project endpoint and &lt;CODE&gt;agent.yaml&lt;/CODE&gt; manifest values.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_dotNET/blob/main/labs/lab-4-deploy/lab-4_readme.md" target="_blank" rel="noopener"&gt;Lab 4 instructions →&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;Lab 5 — Chat UI Integration&lt;/H3&gt;
&lt;P&gt;Connect a Blazor chat UI to the deployed hosted agent and validate end-to-end responses. By the end of Lab 5, you have a fully functioning agent accessible through a real UI, calling your deterministic tools via the Foundry control plane.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_dotNET/blob/main/labs/lab-5-ui/lab-5_readme.md" target="_blank" rel="noopener"&gt;Lab 5 instructions →&lt;/A&gt;&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;Key Concepts to Take Away&lt;/H2&gt;
&lt;P&gt;The workshop teaches concrete patterns that apply well beyond this specific scenario.&lt;/P&gt;
&lt;H3&gt;Code-first agent design&lt;/H3&gt;
&lt;P&gt;Prompt-only agents are fast to build but hard to test and reason about at scale. A hosted agent with code-backed tools gives you something you can unit test, refactor, and version-control like any other software.&lt;/P&gt;
&lt;H3&gt;Deterministic tools and testability&lt;/H3&gt;
&lt;P&gt;The workshop explicitly avoids LLM calls inside tool implementations. Deterministic tools return predictable outputs for a given input, which means you can write fast, reliable unit tests for them. This is the right pattern for business logic. Reserve LLM calls for the reasoning layer, not the execution layer.&lt;/P&gt;
&lt;H3&gt;CI/CD for agent systems&lt;/H3&gt;
&lt;P&gt;AI agents are software. They deserve the same build-test-deploy discipline as any other service. Lab 3 makes this concrete: you cannot ship without passing CI, and CI validates the container as well as the unit tests.&lt;/P&gt;
&lt;H3&gt;Deployment separation&lt;/H3&gt;
&lt;P&gt;The workshop's split between &lt;CODE&gt;azd&lt;/CODE&gt; provisioning and Foundry control-plane deployment is not arbitrary. It reflects the real operational boundary: your Azure resources are long-lived infrastructure; your agent deployment is a lifecycle event tied to your project's specific endpoint and manifest. Keeping them separate reduces accidents and makes rollbacks easier.&lt;/P&gt;
&lt;H3&gt;Observability and the validation mindset&lt;/H3&gt;
&lt;P&gt;Every lab ends with an explicit checkpoint. The culture the workshop builds is: &lt;EM&gt;prove it works before moving on&lt;/EM&gt;. That mindset is more valuable than any specific tool or command in the labs.&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;Why Hosted Agents Are Worth the Investment&lt;/H2&gt;
&lt;P&gt;The managed runtime in Microsoft Foundry removes the infrastructure overhead that makes custom agent deployment painful. You do not manage Kubernetes clusters, configure ingress rules, or handle TLS termination. Foundry handles the hosting; you handle the code.&lt;/P&gt;
&lt;P&gt;This matters most for teams making the transition from demo to production. A prompt agent is an afternoon's work. A hosted agent with proper CI, tested tools, and a deployment pipeline is a week's work done properly once, instead of several weeks of firefighting done poorly repeatedly.&lt;/P&gt;
&lt;P&gt;The Foundry agent lifecycle (create, update, version, deploy) also gives you the controls you need to manage agents in a real environment: staged rollouts, rollback capability, and clear separation between agent versions. For the full deployment guide, see &lt;A href="https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/deploy-hosted-agent" target="_blank" rel="noopener"&gt;Deploy a hosted agent to Foundry Agent Service&lt;/A&gt;.&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;From Workshop to Real Project&lt;/H2&gt;
&lt;P&gt;This workshop is not just a learning exercise. The repository structure, the tooling choices, and the CI/CD patterns are a reference implementation.&lt;/P&gt;
&lt;P&gt;The patterns you can lift directly into a production project include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The &lt;CODE&gt;WorkshopLab.Core&lt;/CODE&gt; / &lt;CODE&gt;WorkshopLab.AgentHost&lt;/CODE&gt; separation between business logic and agent hosting&lt;/LI&gt;
&lt;LI&gt;The &lt;CODE&gt;agent.yaml&lt;/CODE&gt; manifest pattern for declarative Foundry deployment&lt;/LI&gt;
&lt;LI&gt;The GitHub Actions workflow structure for build, test, and container validation&lt;/LI&gt;
&lt;LI&gt;The &lt;CODE&gt;azd&lt;/CODE&gt; + ACR pattern for image publishing without requiring Docker Desktop locally&lt;/LI&gt;
&lt;LI&gt;The Blazor chat UI as a starting point for internal tooling or developer-facing applications&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The scenario itself, a readiness coach for hosted agents, is something teams evaluating Microsoft Foundry will find genuinely useful. It answers exactly the questions that come up when onboarding a new team to the platform.&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;Common Mistakes When Building Hosted Agents&lt;/H2&gt;
&lt;P&gt;Having run workshops and spoken with developer teams building on Foundry, we see a few patterns come up repeatedly:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Skipping local validation before containerising.&lt;/STRONG&gt; Always validate the &lt;CODE&gt;/responses&lt;/CODE&gt; endpoint locally first. Debugging inside a container is slower and harder than debugging locally.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Putting business logic inside the LLM call.&lt;/STRONG&gt; If the answer to a user query can be determined by code, use code. Reserve the model for reasoning, synthesis, and natural language output.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Treating CI as optional.&lt;/STRONG&gt; Agent code changes break things just like any other code change. If you do not have CI catching regressions, you will ship them.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Conflating provisioning and deployment.&lt;/STRONG&gt; Recreating Azure resources on every deploy is slow and error-prone. Provision once with &lt;CODE&gt;azd&lt;/CODE&gt;; deploy agent versions as needed through the Foundry control plane.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Not having a rollback plan.&lt;/STRONG&gt; The Foundry agent lifecycle supports versioning. Use it. Know how to roll back to a previous version before you deploy to production.&lt;/LI&gt;
&lt;/UL&gt;
&lt;HR /&gt;
&lt;H2&gt;Get Started&lt;/H2&gt;
&lt;P&gt;The workshop is open source, beginner-friendly, and designed to be completed in a single day. You need a .NET 10 SDK, an Azure subscription, access to a Microsoft Foundry project, and a GitHub account.&lt;/P&gt;
&lt;P&gt;Clone the repository, follow the labs in order, and by the end you will have a production-ready reference implementation that your team can extend and adapt for real scenarios.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_dotNET" target="_blank" rel="noopener"&gt;Clone the workshop repository →&lt;/A&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Here is the quick start to prove the solution works locally before you begin the full lab sequence:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;git clone https://github.com/microsoft/Hosted_Agents_Workshop_dotNET.git
cd Hosted_Agents_Workshop_dotNET

# Set your Foundry project endpoint and model deployment
$env:AZURE_AI_PROJECT_ENDPOINT = "https://&amp;lt;resource&amp;gt;.services.ai.azure.com/api/projects/&amp;lt;project&amp;gt;"
$env:MODEL_DEPLOYMENT_NAME     = "gpt-4.1-mini"

# Build and run
dotnet restore
dotnet build
dotnet run --project src/WorkshopLab.AgentHost&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Then send your first request:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;Invoke-RestMethod -Method Post `
    -Uri "http://localhost:8088/responses" `
    -ContentType "application/json" `
    -Body '{"input":"Should we start with a hosted agent or a prompt agent?"}'&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;When the agent answers as a Hosted Agent Readiness Coach, you are ready to begin the labs.&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;Key Takeaways&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Hosted agents in Microsoft Foundry let you run custom .NET code in a managed container runtime — you own the logic, Foundry owns the infrastructure.&lt;/LI&gt;
&lt;LI&gt;Deterministic tools are the right pattern for business logic in production agents: fast, testable, and predictable.&lt;/LI&gt;
&lt;LI&gt;CI/CD is not optional for agent systems. Build it in from the start, not as an afterthought.&lt;/LI&gt;
&lt;LI&gt;Separate your provisioning (&lt;CODE&gt;azd&lt;/CODE&gt;) from your deployment (Foundry control plane) — it reduces accidents and simplifies rollbacks.&lt;/LI&gt;
&lt;LI&gt;The workshop is a reference implementation, not just a tutorial. The patterns are production-grade and ready to adapt.&lt;/LI&gt;
&lt;/UL&gt;
&lt;HR /&gt;
&lt;H2&gt;References&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_dotNET" target="_blank" rel="noopener"&gt;Hosted Agents Workshop for .NET — GitHub Repository&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_dotNET/blob/main/labs/README.md" target="_blank" rel="noopener"&gt;Workshop Lab Guide&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/deploy-hosted-agent" target="_blank" rel="noopener"&gt;Deploy a Hosted Agent to Foundry Agent Service — Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://ai.azure.com/" target="_blank" rel="noopener"&gt;Microsoft Foundry Portal&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/developer/azure-developer-cli/overview" target="_blank" rel="noopener"&gt;Azure Developer CLI (azd) — Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://dotnet.microsoft.com/en-us/download/dotnet/10.0" target="_blank" rel="noopener"&gt;.NET 10 SDK Download&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 22 Apr 2026 17:30:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/educator-developer-blog/from-demo-to-production-building-microsoft-foundry-hosted-agents/ba-p/4513718</guid>
      <dc:creator>Lee_Stott</dc:creator>
      <dc:date>2026-04-22T17:30:00Z</dc:date>
    </item>
    <item>
      <title>Building an Auditable Security Layer for Agentic AI</title>
      <link>https://techcommunity.microsoft.com/t5/educator-developer-blog/building-an-auditable-security-layer-for-agentic-ai/ba-p/4495753</link>
      <description>&lt;P data-start="0" data-end="46"&gt;Most agent failures do not look like breaches.&lt;/P&gt;
&lt;P data-start="48" data-end="224"&gt;They look like a normal chat, a normal answer, and a normal tool call. Until the next morning, when a single question collapses the whole story: who authorized that action.&lt;/P&gt;
&lt;P data-start="226" data-end="347"&gt;You think you deployed an agent. In reality, you deployed an unbounded automation pipeline that happens to speak English.&lt;/P&gt;
&lt;P data-start="420" data-end="762"&gt;I’m &lt;STRONG&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/drhazemali" target="_blank" rel="noopener"&gt;Hazem Ali&lt;/A&gt; &lt;/STRONG&gt;— &lt;A class="lia-external-url" href="https://mvp.microsoft.com/en-US/MVP/profile/4865c7ae-cb5b-4eb5-b128-608b1f9a6ebc" target="_blank" rel="noopener"&gt;Microsoft AI MVP&lt;/A&gt;, Distinguished AI &amp;amp; ML Architect, Founder &amp;amp; CEO at Skytells. For over 20 years, I’ve built secure, scalable enterprise AI across cloud and edge, with a focus on agent security and sovereign, governed AI architectures. My work on these systems is widely referenced by practitioners across multiple regions.&lt;/P&gt;
&lt;img /&gt;
&lt;P data-start="321" data-end="698"&gt;&lt;STRONG data-start="0" data-end="24"&gt;Hazem Ali&lt;/STRONG&gt; was honored to receive an official speaker invitation under the patronage of H.H. Sheikh Dr. Sultan bin Muhammad Al Qasimi, Member of the UAE Supreme Council and Ruler of Sharjah, to speak at the Sharjah International Conference on Linguistic Intelligence (SICLI), organized by the American University of Sharjah (AUS) and the Emirates Scholar Center for Research and Studies.&lt;/P&gt;
&lt;P data-start="420" data-end="762"&gt;This piece is a collaboration with &lt;A class="lia-external-url" href="http://linkedin.com/in/ACoAAAXxehwBKIx99wbwTikXEjLGWGwqbpEkmYc" target="_blank" rel="noopener"&gt;&lt;STRONG data-start="384" data-end="399"&gt;Hammad Atta&lt;/STRONG&gt;&lt;/A&gt; a Practice Lead – AI Security &amp;amp; Cloud Strategy and Dr. Yasir Mehmood , Dr Muhammad Zeeshan Baig, Dr. Muhammad Aatif, Dr. MUHAMMAD AZIZ UL HAQ. We align on one core idea: agent security is not about making the model behave. It is about building enforceable boundaries around the model and proving every privileged step.&lt;/P&gt;
&lt;P data-start="764" data-end="1019"&gt;This article is meant to sit next to my earlier Tech Community piece, &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/educatordeveloperblog/zero-trust-agent-architecture-how-to-actually-secure-your-agents/4473995" data-lia-auto-title="Zero-Trust Agent Architecture: How To Actually Secure Your Agents" data-lia-auto-title-active="0" target="_blank"&gt;&lt;STRONG data-start="834" data-end="903"&gt;Zero-Trust Agent Architecture: How To Actually Secure Your Agents&lt;/STRONG&gt;&lt;/A&gt;, and go one level deeper into the mechanics you can implement on Azure today.&lt;/P&gt;
&lt;P data-start="1021" data-end="1042"&gt;Let me break it down.&lt;/P&gt;
&lt;H2 data-start="0" data-end="48"&gt;The Principle: The model is not your boundary&lt;/H2&gt;
&lt;P data-start="50" data-end="116"&gt;Let me break it down in the way I’d explain it in a design review.&lt;/P&gt;
&lt;P data-start="118" data-end="575"&gt;A boundary is something that still holds when the component on the other side is adversarial, confused, or simply wrong. An LLM is none of those reliably. In an agent, the model is not just a generator. It becomes a &lt;STRONG data-start="334" data-end="359"&gt;planner and scheduler&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P data-start="118" data-end="575"&gt;It decides when to retrieve, which tool to call, how to shape arguments, and when to loop.&lt;/P&gt;
&lt;P data-start="118" data-end="575"&gt;That means your real attack surface is not “bad output.” It is the &lt;STRONG data-start="519" data-end="541"&gt;control-flow graph&lt;/STRONG&gt; the model is allowed to traverse.&lt;/P&gt;
&lt;P data-start="577" data-end="745"&gt;So if your “security” lives inside the prompt, you are putting policy in the same token stream the attacker can influence. That is not a boundary. That is a suggestion.&lt;/P&gt;
&lt;P data-start="747" data-end="853"&gt;The only stable design is to treat the model like an untrusted proposer and the runtime like the verifier.&lt;/P&gt;
&lt;P data-start="855" data-end="941"&gt;Here is the chain I use. Each gate is external to the model and survives manipulation.&lt;/P&gt;
&lt;UL data-start="943" data-end="1422"&gt;
&lt;LI data-start="943" data-end="1045"&gt;&lt;STRONG data-start="945" data-end="961"&gt;Context Gate&lt;/STRONG&gt;: Everything that enters the model is treated as executable influence, not “text.”&lt;/LI&gt;
&lt;LI data-start="1046" data-end="1147"&gt;&lt;STRONG data-start="1048" data-end="1067"&gt;Capability Gate&lt;/STRONG&gt;: Tools are invoked as constrained capabilities, not free-form function calls.&lt;/LI&gt;
&lt;LI data-start="1148" data-end="1237"&gt;&lt;STRONG data-start="1150" data-end="1167"&gt;Evidence Gate&lt;/STRONG&gt;: Every privileged step produces a verifiable artifact, not a story.&lt;/LI&gt;
&lt;LI data-start="1238" data-end="1351"&gt;&lt;STRONG data-start="1240" data-end="1267"&gt;Retrieval Control Plane&lt;/STRONG&gt;: What the agent can see is governed by labels and identity, not prompt etiquette.&lt;/LI&gt;
&lt;LI data-start="1352" data-end="1422"&gt;&lt;STRONG data-start="1354" data-end="1373"&gt;Detection Layer&lt;/STRONG&gt;: Drift and probing become alerts, not surprises.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&lt;EM&gt;Figure: Model proposes. Runtime verifies. Input + retrieved context → shields → model → tool gateway → signed intent → governed retrieval → SOC telemetry.&lt;/EM&gt;&lt;/P&gt;
&lt;P data-start="1622" data-end="1898"&gt;Now the rare part, the part most people miss: the boundary is not “block or allow.” The boundary is &lt;STRONG data-start="1722" data-end="1734"&gt;stateful&lt;/STRONG&gt;. Once the runtime sees a suspicious signal, the entire session must transition into a degraded capability state, and every downstream gate must enforce that state.&lt;/P&gt;
&lt;H4 data-start="1900" data-end="1958"&gt;1. Treat context as executable influence, and preserve provenance&lt;/H4&gt;
&lt;P data-start="2790" data-end="2958"&gt;If you do RAG, your documents are not “supporting info.” They are an input channel. That makes the biggest prompt-injection risk &lt;STRONG data-start="2919" data-end="2935"&gt;not the user&lt;/STRONG&gt;. It is your documents.&lt;/P&gt;
&lt;P data-start="2960" data-end="3253"&gt;Microsoft’s Prompt Shields covers user prompt attacks (scanned at the user input intervention point) and document attacks (scanned at the user input and tool response intervention points). When enabled, each request returns annotation results with detected and filtered values that your runtime can translate into a policy decision: &lt;SPAN class="lia-text-color-14"&gt;block, degrade, or allow.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H5 data-start="3255" data-end="3318"&gt;Provenance Collapse.&lt;/H5&gt;
&lt;P data-start="3320" data-end="3580"&gt;Most teams concatenate prompt + policy + retrieved chunks into one blob. The moment you do that, you lose the one thing you need for a defensible boundary: you can no longer reliably tell which tokens came from where. That is how “context” becomes “authority.”&lt;/P&gt;
&lt;P data-start="3582" data-end="3938"&gt;For indirect/document attacks,&amp;nbsp;&lt;/P&gt;
&lt;P data-start="3582" data-end="3938"&gt;Microsoft guidance recommends delimiting context documents inside the prompt using &lt;SPAN class="lia-text-color-14"&gt;"""&lt;/SPAN&gt;&lt;EM&gt;&lt;SPAN class="lia-text-color-14"&gt;&amp;lt;documents&amp;gt; ... &amp;lt;/documents&amp;gt;""" &lt;/SPAN&gt;&lt;/EM&gt;to improve indirect attack detection.&lt;/P&gt;
&lt;P data-start="3582" data-end="3938"&gt;That delimiter is not formatting. It is a provenance marker that improves indirect attack detection through Prompt Shields.&lt;/P&gt;
&lt;P data-start="4164" data-end="4191"&gt;Minimal, practical pattern:&lt;/P&gt;
&lt;LI-CODE lang="typescript"&gt;// Provenance-preserving prompt construction for indirect/document attack detection
function buildPrompt(system: string, user: string, retrievedDocs: string[]): string {
  const docs = retrievedDocs.map((d) =&amp;gt; `- ${d}`).join("\n");

  return [
    system,
    "",
    `User: ${user}`,
    "",
    `""" &amp;lt;documents&amp;gt;\n${docs}\n&amp;lt;/documents&amp;gt; """`,
  ].join("\n");
}
&lt;/LI-CODE&gt;
&lt;P data-start="4551" data-end="4630"&gt;Then treat Prompt Shields output as a &lt;STRONG data-start="4589" data-end="4615"&gt;session security event&lt;/STRONG&gt;, not a banner:&lt;/P&gt;
&lt;LI-CODE lang="typescript"&gt;type RiskState = "NORMAL" | "SUSPECT" | "BLOCK";
type FilterPolicy = "BLOCK_ON_FILTERED" | "DEGRADE_ON_FILTERED";

function computeRiskState(
  shields: { detected: boolean; filtered?: boolean },
  labels: string[],
  policy: FilterPolicy = "DEGRADE_ON_FILTERED",
): RiskState {
  // detected =&amp;gt; hard stop
  if (shields.detected) return "BLOCK";

  // filtered is an annotation signal: block or degrade by policy
  if (shields.filtered) {
    return policy === "BLOCK_ON_FILTERED" ? "BLOCK" : "SUSPECT";
  }

  // example: sensitivity-based degradation independent of shield hits
  const sensitive = labels.some((l) =&amp;gt;
    ["Confidential", "HighlyConfidential", "Regulated"].includes(l),
  );

  return sensitive ? "SUSPECT" : "NORMAL";
}
&lt;/LI-CODE&gt;
&lt;P data-start="5004" data-end="5117"&gt;When the signal is clear, you block and log. When it is suspicious, you do not warn. You &lt;STRONG data-start="5093" data-end="5116"&gt;downgrade authority&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H5&gt;QSAF Alignment:&lt;/H5&gt;
&lt;P&gt;&lt;STRONG&gt;Prompt Injection Protection (Domain 1): &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;QSAF-PI-001 (static pattern blacklist), QSAF-PI-002 (dynamic LLM analysis), QSAF-PI-003 (semantic embedding comparison)&lt;/P&gt;
&lt;P&gt;All addressed by Prompt Shields and provenance marking.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Context Manipulation (Domain 2):&lt;/STRONG&gt; QSAF-RC-004 (context drift), QSAF-RC-007 (nested prompt injection) – mitigated by stateful risk calculation.&lt;/P&gt;
&lt;H4 data-start="2858" data-end="2919"&gt;2. Tools are capabilities with constraints, not functions&lt;/H4&gt;
&lt;P data-start="2920" data-end="3065"&gt;When the model proposes a tool call, your runtime should re-derive what is allowed from identity plus risk state, then enforce it at the gateway.&lt;/P&gt;
&lt;LI-CODE lang="typescript"&gt;type ToolRequest = {
  tool: string;
  args: unknown;
};

type Capabilities = {
  allowWrite: boolean;
  allowedTools: Set&amp;lt;string&amp;gt;;
};

function deriveCapabilities(risk: RiskState, roles: string[]): Capabilities {
  const baseAllowed = new Set(["search_kb", "get_profile", "summarize"]);
  const isAdmin = roles.includes("Admin");

  if (risk === "SUSPECT") {
    return { allowWrite: false, allowedTools: baseAllowed };
  }

  if (risk === "BLOCK") {
    return { allowWrite: false, allowedTools: new Set() };
  }

  // NORMAL
  const tools = new Set([
    ...baseAllowed,
    ...(isAdmin ? ["update_record", "issue_refund"] : []),
  ]);

  return { allowWrite: isAdmin, allowedTools: tools };
}

function authorizeTool(req: ToolRequest, caps: Capabilities): void {
  if (!caps.allowedTools.has(req.tool)) throw new Error("ToolNotAllowed");
  if (!caps.allowWrite &amp;amp;&amp;amp; req.tool.startsWith("update_")) {
    throw new Error("WriteDenied");
  }
}
&lt;/LI-CODE&gt;
&lt;P data-start="3950" data-end="4003"&gt;The model can ask. It cannot grant itself permission.&lt;/P&gt;
&lt;H5&gt;QSAF Alignment:&lt;/H5&gt;
&lt;P&gt;&lt;STRONG&gt;Plugin Abuse Monitoring (Domain 3): &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;QSAF-PL-001 (whitelist enforcement), QSAF-PL-003 (restrict sensitive plugins), QSAF-PL-006 (rate‑limiting) – implemented via capability derivation and gateway policies.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Behavioral Anomaly Detection (Domain 5): &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;QSAF-BA-006 (plugin execution pattern deviance) – detected by comparing actual calls against derived capabilities.&lt;/P&gt;
&lt;H3 data-start="1036" data-end="1104"&gt;The Integrity Gate: Hash-chain the authority, not the output&lt;/H3&gt;
&lt;P data-start="1106" data-end="1158"&gt;Let me add the part that makes investigations clean.&lt;/P&gt;
&lt;P data-start="1160" data-end="1265"&gt;Most teams treat integrity like an audit log problem. That is not enough. Logs explain. Integrity proves.&lt;/P&gt;
&lt;P data-start="1267" data-end="1551"&gt;The hard truth is that agent authority is assembled out of pieces: the system instruction, the user prompt, retrieved chunks, risk annotations, and finally the tool intent. If you do not bind those pieces together cryptographically, an incident review becomes a story-telling session.&lt;/P&gt;
&lt;P data-start="1553" data-end="1790"&gt;This is why &lt;A class="lia-external-url" href="https://drhazemali.com/blog/qsaf-qorvex-security-ai-framework" target="_blank" rel="noopener"&gt;QSAF&lt;/A&gt; has an entire domain for &lt;STRONG data-start="1595" data-end="1628"&gt;payload integrity and signing&lt;/STRONG&gt;, including prompt hash signing, nonce or replay protection, and a &lt;STRONG data-start="1695" data-end="1717"&gt;hash chain lineage&lt;/STRONG&gt; that tracks how a session evolved.&lt;/P&gt;
&lt;P data-start="1792" data-end="1861"&gt;Here is how you can map that into the runtime verifies.&lt;/P&gt;
&lt;img /&gt;
&lt;P data-start="1863" data-end="1959"&gt;You build a canonical “authority envelope” for every privileged hop, compute a digest, and then:&lt;/P&gt;
&lt;UL data-start="1961" data-end="2180"&gt;
&lt;LI data-start="1961" data-end="2003"&gt;link it to the previous hop (hash chain)&lt;/LI&gt;
&lt;LI data-start="2004" data-end="2038"&gt;include a nonce (replay control)&lt;/LI&gt;
&lt;LI data-start="2039" data-end="2180"&gt;sign the digest with Azure Key Vault (Key Vault signs digests, it does not hash your content for you)&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="typescript"&gt;import crypto from "crypto";

type AuthorityEnvelope = {
  sessionId: string;
  turnId: number;
  policyVersion: string;

  // provenance-preserved components
  systemHash: string;
  userHash: string;
  documentsHash: string; // hash of structured retrieved chunks (not just rendered text)

  shields: {
    detected: boolean;
    filtered: boolean;
  };

  riskState: "NORMAL" | "SUSPECT" | "BLOCK";

  // proposed action (if any)
  tool?: {
    name: string;
    argsHash: string;
  };

  // anti-replay + lineage
  nonce: string;
  prevDigest?: string;
  ts: string;
};

function sha256(bytes: string): string {
  return crypto.createHash("sha256").update(bytes).digest("hex");
}

// Canonicalization matters. JSON.stringify is OK if you control key order.
// For cross-language, use RFC 8785 (JCS) canonical JSON.
function canonicalJson(x: unknown): string {
  return JSON.stringify(x);
}

function buildEnvelope(
  input: Omit&amp;lt;AuthorityEnvelope, "nonce" | "ts"&amp;gt;,
): AuthorityEnvelope {
  return {
    ...input,
    nonce: crypto.randomUUID(),
    ts: new Date().toISOString(),
  };
}

function digestEnvelope(env: AuthorityEnvelope): string {
  return sha256(canonicalJson(env));
}&lt;/LI-CODE&gt;
&lt;P data-start="3565" data-end="3710"&gt;Then you call Key Vault to sign &lt;STRONG data-start="3597" data-end="3612"&gt;that digest&lt;/STRONG&gt; (REST sign), and optionally verify later (REST verify).&lt;/P&gt;
&lt;P data-start="3712" data-end="3780"&gt;The rare failure mode this blocks is subtle: &lt;STRONG data-start="3757" data-end="3779"&gt;authority splicing&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P data-start="3782" data-end="4081"&gt;Without a hash chain, it is possible for the runtime to correctly validate a tool call, but later be unable to prove which retrieved chunk, which Prompt Shields result, and which policy version were in force when that call was authorized. With the chain, every privileged hop becomes tamper-evident.&lt;/P&gt;
&lt;P data-start="4083" data-end="4351"&gt;This is the point: Prompt Shields tells you “this looks dangerous.” Document delimiters preserve provenance. &lt;BR data-start="4229" data-end="4232" /&gt;The integrity gate makes the runtime able to say, later, with evidence: “This is exactly what I accepted as authority.”&lt;/P&gt;
&lt;H5&gt;QSAF Alignment:&lt;/H5&gt;
&lt;P&gt;Payload Integrity &amp;amp; Signing (Domain 6): QSAF-PY-001 (prompt hash signing), QSAF-PY-005 (nonce/replay control), QSAF-PY-006 (hash chain lineage) – directly implemented via the envelope and chaining.&lt;/P&gt;
&lt;H2 data-start="3489" data-end="3541"&gt;Tools must sit behind a wall that can say “no”&lt;/H2&gt;
&lt;P data-start="5178" data-end="5391"&gt;Tool calls are where language becomes authority. If an agent can call APIs that mutate state, your security story is not about the response text. It is about whether the tool call is allowed under explicit policy.&lt;/P&gt;
&lt;P data-start="5393" data-end="5700"&gt;This is exactly where &lt;STRONG data-start="5415" data-end="5439"&gt;Azure API Management&lt;/STRONG&gt; belongs: as the tool gateway that enforces authentication and authorization before any tool request reaches your backend. The validate-jwt policy is the canonical enforcement mechanism for validating JWTs at the gateway.&lt;/P&gt;
&lt;P data-start="5702" data-end="5728"&gt;The design goal is simple:&lt;/P&gt;
&lt;P data-start="5730" data-end="5804"&gt;The model can request a tool call. The gateway decides if it is permitted.&lt;/P&gt;
&lt;P data-start="5806" data-end="5849"&gt;A capability token approach keeps it clean:&lt;/P&gt;
&lt;LI-CODE lang="xml"&gt;&amp;lt;!-- APIM inbound policy sketch --&amp;gt;
&amp;lt;validate-jwt header-name="Authorization" failed-validation-httpcode="401"&amp;gt;
  &amp;lt;required-claims&amp;gt;
    &amp;lt;claim name="scp"&amp;gt;
      &amp;lt;value&amp;gt;tools.read&amp;lt;/value&amp;gt;
    &amp;lt;/claim&amp;gt;
  &amp;lt;/required-claims&amp;gt;
&amp;lt;/validate-jwt&amp;gt;&lt;/LI-CODE&gt;
&lt;P data-start="6100" data-end="6265"&gt;The claim name (scp, roles, or custom claims) depends on your token issuer; the point is enforcing authorization at the gateway, not inside model text.&lt;/P&gt;
&lt;P data-start="6100" data-end="6265"&gt;Now you can enforce “read-only mode” by issuing tokens that simply do not carry write scopes. The model can try to call a write tool. It still gets denied by policy.&lt;/P&gt;
&lt;H2 data-start="6272" data-end="6327"&gt;Evidence is not logs. Evidence is a signed chain.&lt;/H2&gt;
&lt;P data-start="6329" data-end="6375"&gt;Logs help you debug. Evidence helps you prove.&lt;/P&gt;
&lt;P data-start="6377" data-end="6763"&gt;So you hash the session envelope and the tool intent, then sign the digest using &lt;STRONG data-start="6458" data-end="6482"&gt;Azure Key Vault Keys&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P data-start="6377" data-end="6763"&gt;Key Vault sign creates a signature from a digest, and verify verifies a signature against a digest. Key Vault does not hash your content for you. Hash locally, then sign the digest.), and Key Vault documentation is explicit that signing is &lt;EM data-start="6628" data-end="6639"&gt;sign-hash&lt;/EM&gt;, not “sign arbitrary content.” You hash locally, then ask Key Vault to sign the hash.&lt;/P&gt;
&lt;LI-CODE lang="typescript"&gt;import crypto from "crypto";

const sha256 = (x: unknown): string =&amp;gt;
  crypto.createHash("sha256").update(JSON.stringify(x)).digest("hex");

type IntentEnvelope = {
  sessionId: string;
  userId: string;
  promptHash: string;
  documentsHash: string;
  tool: string;
  argsHash: string;
  nonce: string;
  ts: string;
  policyVersion: string;
};

function buildIntent(
  sessionId: string,
  userId: string,
  prompt: string,
  docs: unknown,
  tool: string,
  args: unknown,
  policyVersion: string,
): IntentEnvelope {
  return {
    sessionId,
    userId,
    promptHash: sha256(prompt),
    documentsHash: sha256(docs),
    tool,
    argsHash: sha256(args),
    nonce: crypto.randomUUID(),
    ts: new Date().toISOString(),
    policyVersion,
  };
}&lt;/LI-CODE&gt;
&lt;P data-start="7717" data-end="7785"&gt;Once you do this, your system stops “explaining.” It starts proving.&lt;/P&gt;
&lt;H2 data-start="6054" data-end="6115"&gt;Govern what the agent can see, not only what it can say&lt;/H2&gt;
&lt;P data-start="6117" data-end="6183"&gt;RAG without governance eventually becomes a data exposure feature.&lt;/P&gt;
&lt;P data-start="6185" data-end="6496"&gt;This is why I treat retrieval as a governed operation. &lt;STRONG data-start="6240" data-end="6280"&gt;Microsoft Purview sensitivity labels&lt;/STRONG&gt; give you a practical way to classify content and build retrieval rules on top of that classification. Microsoft documents creating and configuring sensitivity labels in Purview.&lt;/P&gt;
&lt;P data-start="6498" data-end="6520"&gt;The pattern is simple:&lt;/P&gt;
&lt;UL data-start="6522" data-end="6704"&gt;
&lt;LI data-start="6522" data-end="6541"&gt;Label the corpus.&lt;/LI&gt;
&lt;LI data-start="6542" data-end="6590"&gt;Filter retrieval by label and identity policy.&lt;/LI&gt;
&lt;LI data-start="6591" data-end="6631"&gt;Log label distribution per completion.&lt;/LI&gt;
&lt;LI data-start="6632" data-end="6704"&gt;Alert when a low-privilege identity retrieves high-sensitivity labels.&lt;/LI&gt;
&lt;/UL&gt;
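&lt;P&gt;As a sketch of the first two bullets (labels and tier ordering here are illustrative, not a Purview API): retrieval filters chunks by the caller’s clearance before anything reaches the model, and the per-completion label distribution is what you log and alert on.&lt;/P&gt;

```typescript
// Hypothetical label-governed retrieval. Chunks carry a sensitivity
// label (e.g. applied via Microsoft Purview); the filter is identity
// policy enforced in code, not prompt etiquette.
type Chunk = { text: string; label: string };

const TIER: { [label: string]: number } = {
  Public: 0,
  General: 1,
  Confidential: 2,
  HighlyConfidential: 3,
};

function filterByClearance(chunks: Chunk[], maxLabel: string): Chunk[] {
  const max = TIER[maxLabel];
  // Unknown labels are excluded by default: fail closed, not open.
  return chunks.filter((c) => TIER[c.label] !== undefined && max >= TIER[c.label]);
}

function labelDistribution(chunks: Chunk[]): { [label: string]: number } {
  const counts: { [label: string]: number } = {};
  for (const c of chunks) counts[c.label] = (counts[c.label] ?? 0) + 1;
  return counts;
}
```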
&lt;P data-start="6886" data-end="6968"&gt;This is how you keep sovereignty real. Not in a slide deck. In the retrieval path.&lt;/P&gt;
&lt;H2 data-start="8646" data-end="8708"&gt;Operate it like a security system: posture and detection&lt;/H2&gt;
&lt;P data-start="8710" data-end="8833"&gt;Inline gates reduce risk. They do not eliminate it. Systems drift. People add tools. Policies get loosened. Attacks evolve.&lt;/P&gt;
&lt;P data-start="8835" data-end="9089"&gt;Microsoft Defender for Cloud’s Defender CSPM plan includes AI security posture management for generative AI apps and AI agents (Preview), including discovery/inventory of AI agents deployed with Azure AI Foundry.&lt;/P&gt;
&lt;P data-start="9091" data-end="9235"&gt;Then you use &lt;STRONG data-start="9104" data-end="9126"&gt;Microsoft Sentinel&lt;/STRONG&gt; to turn your telemetry into incidents, with scheduled analytics rules.&lt;/P&gt;
&lt;P data-start="9237" data-end="9286"&gt;Your detections should match the gates you built:&lt;/P&gt;
&lt;UL data-start="9288" data-end="9684"&gt;
&lt;LI data-start="9288" data-end="9397"&gt;Repeated Prompt Shields detections from the same identity or session.&lt;/LI&gt;
&lt;LI data-start="9398" data-end="9452"&gt;Tool-call spikes after a suspicious document signal.&lt;/LI&gt;
&lt;LI data-start="9453" data-end="9560"&gt;APIM denials for write endpoints from sessions in read-only mode.&lt;/LI&gt;
&lt;LI data-start="9561" data-end="9684"&gt;High-sensitivity label retrieval by identities that should never touch that tier.&lt;/LI&gt;
&lt;/UL&gt;
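&lt;P&gt;The first detection above can be sketched directly over raw telemetry (the event shape is illustrative; in practice these rows live in a Log Analytics workspace and the rule is a scheduled Sentinel analytics query):&lt;/P&gt;

```typescript
// Sketch: flag sessions with repeated Prompt Shields detections.
type ShieldEvent = { sessionId: string; detected: boolean };

function repeatedDetectionSessions(
  events: ShieldEvent[],
  threshold: number,
): string[] {
  const counts: { [id: string]: number } = {};
  for (const e of events) {
    if (e.detected) counts[e.sessionId] = (counts[e.sessionId] ?? 0) + 1;
  }
  return Object.keys(counts).filter((id) => counts[id] >= threshold);
}
```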
&lt;H5&gt;QSAF Alignment:&lt;/H5&gt;
&lt;P&gt;Behavioral Anomaly Detection (Domain 5):&lt;/P&gt;
&lt;P&gt;QSAF-BA-001 (session entropy), QSAF-BA-004 (repeated intent mutation), QSAF-BA-007 (unified risk score) – detected via Sentinel rules.&lt;/P&gt;
&lt;P&gt;Cross‑Environment Defense (Domain 9): QSAF-CE-006 (coordinated alert response) – using Sentinel incidents and playbooks.&lt;/P&gt;
&lt;H2 data-start="9881" data-end="9927"&gt;Where the reference checklist fits, quietly&lt;/H2&gt;
&lt;P data-start="9929" data-end="10238"&gt;Behind the scenes, we use a control checklist lens to ensure we cover prompt/context attacks, tool misuse, integrity, governance, and operational monitoring.&amp;nbsp;The point is not to rename Microsoft features into framework terms. The point is to make the system enforceable and auditable using Azure-native gates.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 data-start="10245" data-end="10255"&gt;Closing&lt;/H2&gt;
&lt;P data-start="10257" data-end="10310"&gt;Zero trust for agents is not a slogan. It is a build.&lt;/P&gt;
&lt;P data-start="10312" data-end="10928"&gt;Prompt Shields gives you a front gate for both user prompt attacks and document attacks, with clear annotations like detected and filtered. &lt;BR data-start="10495" data-end="10498" /&gt;API Management gives you a tool boundary that can say “no” regardless of what the model tries, using validate-jwt. &lt;BR data-start="10654" data-end="10657" /&gt;Signed intent gives you evidence, using Key Vault’s sign-hash semantics. &lt;BR data-start="10769" data-end="10772" /&gt;Purview labels give you governed retrieval. Sentinel and Defender give you an operating model, not wishful thinking.&lt;/P&gt;
&lt;P data-start="10930" data-end="11150"&gt;If you want the conceptual spine and the architectural principles that frame this pipeline, start with my earlier Tech Community pieces, then come back here and implement the gates.&lt;/P&gt;
&lt;P data-start="10930" data-end="11150"&gt;Thanks for reading&lt;/P&gt;
&lt;P data-start="10930" data-end="11150"&gt;— Hazem Ali&lt;/P&gt;</description>
      <pubDate>Wed, 22 Apr 2026 08:06:05 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/educator-developer-blog/building-an-auditable-security-layer-for-agentic-ai/ba-p/4495753</guid>
      <dc:creator>hazem</dc:creator>
      <dc:date>2026-04-22T08:06:05Z</dc:date>
    </item>
    <item>
      <title>Prompt Engineering for Spec-Driven Development with SpecKit</title>
      <link>https://techcommunity.microsoft.com/t5/educator-developer-blog/prompt-engineering-for-spec-driven-development-with-speckit/ba-p/4512622</link>
      <description>&lt;H2&gt;Introduction&lt;/H2&gt;
&lt;P&gt;Charlotte Yeo, UCL MEng Computer Science &lt;A href="https://www.linkedin.com/in/charlotte-yeo-627476294/" target="_blank" rel="noopener"&gt;https://www.linkedin.com/in/charlotte-yeo-627476294/&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Supervisors: Janaina Mourao-Miranda (UCL) and Lee Stott (Microsoft).&lt;/P&gt;
&lt;P&gt;For my final-year MEng project at UCL, I investigated how to get the best results out of&lt;A class="lia-external-url" href="https://speckit.org" target="_blank"&gt; SpecKit&lt;/A&gt;, a spec-driven AI development framework, by systematically testing different prompt strategies. &lt;BR /&gt;&lt;BR /&gt;Here's what I found.&lt;/P&gt;
&lt;H2&gt;Project Overview&lt;/H2&gt;
&lt;P&gt;LLMs are powerful coding assistants, but they struggle to maintain context over long development sessions, leading to hallucinations and inconsistent outputs. SpecKit addresses this by using persistent, structured specification documents as memory throughout the development process. The developer writes a natural language spec; SpecKit builds the software from it.&lt;/P&gt;
&lt;P&gt;The problem is that no one has established best practices for writing those specs. This project aimed to fill that gap.&lt;/P&gt;
&lt;H2&gt;Experiments&lt;/H2&gt;
&lt;P&gt;I ran 10 experiments, each using SpecKit to build the same target system, a multi-agent AI code verification tool, from a different prompt formulation. The variables I tested included prompt authority, format, level of detail, and output format. By keeping the target software constant, the effect of each prompt change on SpecKit's performance is isolated.&lt;/P&gt;
&lt;P&gt;The target system itself used &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/agent-framework/" target="_blank"&gt;Microsoft Agent Framework&lt;/A&gt;, &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/cosmos-db/gen-ai/rag" target="_blank"&gt;Azure Cosmos DB for RAG&lt;/A&gt;, and&lt;A class="lia-external-url" href="https://ai.azure.com" target="_blank"&gt; Microsoft Foundry&lt;/A&gt; to access &lt;A class="lia-external-url" href="https://azure.microsoft.com/en-us/blog/introducing-gpt-5-2-in-microsoft-foundry-the-new-standard-for-enterprise-ai/" target="_blank"&gt;GPT-5.2&lt;/A&gt;, all orchestrated via a Python codebase. This covered a wide range of real-world engineering challenges: multi-agent coordination, cloud service integration, and working with a library new enough that the model hadn't been trained on it.&lt;/P&gt;
&lt;H2&gt;Technical Details&lt;/H2&gt;
&lt;P&gt;SpecKit runs as a series of commands inside GitHub Copilot in VS Code, powered here by Claude Sonnet 4.5. The workflow moves through seven stages: /constitution → /specify → /clarify → /plan → /tasks → /analyze → /implement. At each stage, SpecKit writes and updates Markdown files that serve as persistent memory, so the session can be paused and resumed without losing context.&lt;/P&gt;
&lt;P&gt;Key tools used:&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-level="1"&gt;Microsoft Agent Framework — agent orchestration&lt;/LI&gt;
&lt;LI aria-level="1"&gt;Microsoft Foundry — access to LLMs (GPT-5.2, Text Embedding 3)&lt;/LI&gt;
&lt;LI aria-level="1"&gt;Azure Cosmos DB — code example database for RAG&lt;/LI&gt;
&lt;LI aria-level="1"&gt;Claude Sonnet 4.5 — model powering SpecKit via &lt;A class="lia-external-url" href="https://github.com/features/copilot" target="_blank"&gt;GitHub Copilot&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Results&lt;/H2&gt;
&lt;img /&gt;
&lt;img /&gt;
&lt;img /&gt;
&lt;P&gt;These were the key findings:&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-level="1"&gt;Natural language outperforms machine-readable formats. The JSON prompt (Case 1) took 40% longer and generated significantly more issues than the natural language control.&lt;/LI&gt;
&lt;LI aria-level="1"&gt;Authority is necessary. Removing the authoritative framing from the prompt (Case 3) caused SpecKit to treat specifications as optional, resulting in the multi-agent system not being built at all until manually corrected. Total time: 4h 53m vs. 2h 24m for the control.&lt;/LI&gt;
&lt;LI aria-level="1"&gt;Omit what the model already knows. Removing the scoring rubrics (Case 8) saved 34 minutes with no loss in output quality as the model inferred the rubric from context. However, omitting the Cosmos DB schema or agent architecture descriptions caused major implementation errors.&lt;/LI&gt;
&lt;LI aria-level="1"&gt;The model must be able to read its own outputs. Changing the output to PDF (Case 9), which Claude Sonnet 4.5 cannot read in Copilot, caused the implementation stage to increase significantly to 7h 38m, with 33 required interventions, because the model couldn't verify whether its code was working.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Best Practices Found&lt;/H2&gt;
&lt;P&gt;The biggest insight is that prompt design has as much impact on SpecKit's performance as prompt content. A complete specification written non-authoritatively or in JSON will produce worse results than a slightly shorter specification written in clear, authoritative natural language.&lt;/P&gt;
&lt;P&gt;There is also a trade-off between token count and manual intervention. Shorter prompts are faster, but only when the omitted information is something the model can reliably infer. Leaving out details about unique libraries or architectures will result in higher debugging times later.&lt;/P&gt;
&lt;H2&gt;Future Development&lt;/H2&gt;
&lt;P&gt;These are directions for future work in this area:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-level="1"&gt;Running each experiment multiple times to account for model non-determinism&lt;/LI&gt;
&lt;LI aria-level="1"&gt;Repeating experiments with newer or different LLMs to test generalisability&lt;/LI&gt;
&lt;LI aria-level="1"&gt;Testing with different target systems beyond code verification&lt;/LI&gt;
&lt;LI aria-level="1"&gt;Supplying SpecKit with tools (e.g. Playwright MCP) to read outputs it currently cannot access, like live webpages or PDFs&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Spec-driven development with SpecKit is a useful approach for building complex software with LLMs, but the quality of your prompt determines the quality of your outcome. For the most effective results, write in natural language, keep the whole prompt authoritative, include detail on novel or library-specific components, design your system's outputs to be readable by the model building them, and leave out only what the model can confidently infer.&lt;/P&gt;
&lt;P&gt;If you want to explore the tools used in this project, here are some useful starting points:&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-level="1"&gt;&lt;A href="https://github.com/microsoft/agent-framework" target="_blank" rel="noopener"&gt;Microsoft Agent Framework&lt;/A&gt;&lt;/LI&gt;
&lt;LI aria-level="1"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/cosmos-db/" target="_blank" rel="noopener"&gt;Azure Cosmos DB documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI aria-level="1"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/ai-foundry/" target="_blank" rel="noopener"&gt;Azure AI Foundry documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI aria-level="1"&gt;&lt;A href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview" target="_blank" rel="noopener"&gt;Anthropic prompt engineering guide&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 20 Apr 2026 09:14:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/educator-developer-blog/prompt-engineering-for-spec-driven-development-with-speckit/ba-p/4512622</guid>
      <dc:creator>charykn</dc:creator>
      <dc:date>2026-04-20T09:14:37Z</dc:date>
    </item>
    <item>
      <title>Minecraft Education Lesson Plans in Teach: AI-powered lesson planning meets the world of Minecraft</title>
      <link>https://techcommunity.microsoft.com/t5/education-blog/minecraft-education-lesson-plans-in-teach-ai-powered-lesson/ba-p/4510917</link>
      <description>&lt;P&gt;As educators, you've told us that some of your most time-consuming work is adapting lessons for engagement, aligning them to standards, and finding ways to bring immersive experiences into your curriculum. At the same time, Minecraft Education is already one of the most effective learning tools for engaging learners in classrooms around the world, with students lighting up the moment they hear the word "Minecraft."&lt;/P&gt;
&lt;P&gt;Today, we're bringing those two things together. &lt;STRONG&gt;Minecraft Education lesson plans are now generally available in Teach.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Describe your topic, pick a grade level and subject, and Teach generates a complete, standards-aligned lesson plan built around Minecraft Education activities, including the specific blocks, materials, and preparation steps you need to run it confidently, even if you've never opened Minecraft Education before. (Minecraft Education is included in most Microsoft 365 software subscriptions for schools, so you also likely have full access.)&lt;/P&gt;
&lt;H3&gt;What you get&lt;/H3&gt;
&lt;P&gt;Every generated Minecraft Education lesson plan includes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Standards-aligned Minecraft Education activities&lt;/STRONG&gt; - Build activities and challenges that reflect your selected standards across subjects like ELA, math, science, social studies, computer science, and more&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Minecraft-specific materials guidance&lt;/STRONG&gt; - Recommendations for the exact blocks, items, and in-game tools your students will need, so you don't have to figure it out yourself&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Preparation instructions&lt;/STRONG&gt; - Step-by-step setup guidance for educators new to Minecraft Education, so you can walk into the classroom ready to go&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Differentiation and collaboration&lt;/STRONG&gt; - Tiered challenge options, collaborative build tasks, and formative checks embedded within gameplay&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;A student link&lt;/STRONG&gt; - A shareable link to send directly to students so they can join the activity&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;See it in action&lt;/H3&gt;
&lt;div data-video-id="https://www.youtube.com/watch?v=bzJ57AEfi2g/1776967047192" data-video-remote-vid="https://www.youtube.com/watch?v=bzJ57AEfi2g/1776967047192" class="lia-video-container lia-media-is-center lia-media-size-large"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FbzJ57AEfi2g%3Ffeature%3Doembed&amp;amp;display_name=YouTube&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DbzJ57AEfi2g&amp;amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FbzJ57AEfi2g%2Fhqdefault.jpg&amp;amp;type=text%2Fhtml&amp;amp;schema=youtube" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;P&gt;Once your lesson is generated, you can edit any section directly or use&amp;nbsp;&lt;STRONG&gt;Enhance with AI&lt;/STRONG&gt; to refine it further: add collaborative build tasks, adjust the length and tone, include accessibility supports, or regenerate with new instructions. When it's ready, save to OneDrive and open it in Word to share with colleagues, or launch the Minecraft Education app directly to set up the lesson experience.&lt;/P&gt;
&lt;P&gt;For a full walkthrough of every step, &lt;A class="lia-external-url" href="https://aka.ms/teach/minecraftlessonssupport" target="_blank" rel="noopener"&gt;see the support article&lt;/A&gt;.&lt;/P&gt;
&lt;H3&gt;Why this matters&lt;/H3&gt;
&lt;P&gt;We know many of you already love using Minecraft Education in your classrooms, while others are curious how it could enhance your teaching and deepen student learning and engagement. Minecraft Education lesson plans in Teach make it easier to create these experiences by generating a complete, customized lesson from your topic and standards, with the Minecraft-specific materials, activities, and preparation guidance built in.&lt;/P&gt;
&lt;P&gt;Whether you're looking for a fresh lesson idea in a subject you haven't tried with Minecraft Education before, or you want to quickly adapt a concept for a different grade level, this tool gives you a starting point you can make your own. You bring the teaching expertise and your knowledge of your students.&lt;/P&gt;
&lt;H3&gt;Get started&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Try it now:&lt;/STRONG&gt; &lt;A class="lia-external-url" href="https://m365.cloud.microsoft/teach?create=minecraftlessons&amp;amp;from=blog" target="_blank" rel="noopener"&gt;Minecraft Education lesson plan&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Available to Faculty/Staff with a &lt;STRONG&gt;Microsoft 365 for Education&lt;/STRONG&gt; license and &lt;STRONG&gt;Copilot Chat&lt;/STRONG&gt; enabled&lt;/LI&gt;
&lt;LI&gt;Does &lt;STRONG&gt;not&lt;/STRONG&gt; require a paid Microsoft 365 Copilot license&lt;/LI&gt;
&lt;LI&gt;Minecraft Education may already be included with your Microsoft 365 license or can be purchased separately. &lt;A class="lia-external-url" href="https://education.minecraft.net/en-us/licensing" target="_blank" rel="noopener"&gt;Check your licensing options.&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Helpful Links&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://aka.ms/teach/training" target="_blank" rel="noopener"&gt;Teach module training on Microsoft Learn&lt;/A&gt;, now including training on Minecraft Education lesson generation&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://education.minecraft.net/trainings" target="_blank" rel="noopener"&gt;Training courses for Minecraft educators&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Have questions or ideas? Drop them in the comments below. We'd love to hear how you plan to use Minecraft Education lesson plans in your classroom!&lt;/P&gt;
&lt;P&gt;Share your feedback with us by joining our &lt;A class="lia-external-url" href="https://aka.ms/joinEIP" target="_blank" rel="noopener"&gt;EDU Insider Program&lt;/A&gt; (aka.ms/joinEIP).&lt;/P&gt;
&lt;P&gt;Until next time,&lt;/P&gt;
&lt;P&gt;Max Fritz · Microsoft Education&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2026 17:57:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/education-blog/minecraft-education-lesson-plans-in-teach-ai-powered-lesson/ba-p/4510917</guid>
      <dc:creator>MaxFritz</dc:creator>
      <dc:date>2026-04-23T17:57:37Z</dc:date>
    </item>
    <item>
      <title>Introducing the 2026 Imagine Cup Top Launch Startup</title>
      <link>https://techcommunity.microsoft.com/t5/student-developer-blog/introducing-the-2026-imagine-cup-top-launch-startup/ba-p/4510342</link>
      <description>&lt;H4&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Early momentum. Clear direction.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The Launch path highlights student founders who are at an earlier stage but already showing strong signals in how they are approaching what they are building.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;L-Guard Ltd. stood out for how clearly the problem was defined, how intentionally the solution is taking shape, and the direction it is heading next.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;As the Top Launch Startup, L-Guard Ltd. receives $50,000 USD&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; along with continued visibility and support from Microsoft as&amp;nbsp;they move&amp;nbsp;their solution forward.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="3"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Meet the startup&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:280,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;H4&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;L-Guard&amp;nbsp;Ltd.: AI-powered road safety, built for real-time response&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;Rwanda&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;L-Guard&amp;nbsp;Ltd.&amp;nbsp;is addressing a critical gap in road safety across Africa, where many accident victims lose their lives not from the crash itself, but from delayed emergency response.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The startup has built an AI- and IoT-powered system that&amp;nbsp;monitors&amp;nbsp;vehicle activity, detects crashes in real time, and automatically alerts nearby hospitals and emergency responders. By combining sensor data with machine learning models on Azure, L-Guard transforms real-time vehicle signals into actionable emergency intelligence.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This shifts road safety from reactive response to proactive intervention, issuing risky driving warnings, detecting incidents as they happen, and ensuring that help is activated as quickly as possible, even in low-connectivity environments.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;As the startup continues to move from pilot validation toward broader deployment, the focus is on strengthening reliability, expanding partnerships, and scaling across high-risk transport markets. By making&amp;nbsp;timely&amp;nbsp;rescue the standard, L-Guard is working to reduce preventable fatalities and bring more accountability to emergency response systems.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Helen&amp;nbsp;Ugoeze&amp;nbsp;Okereke&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;– Growing up in Ebonyi State, Nigeria, Helen set out to become what she called a “computer wizard,” focused on building real solutions with technology. Today, she leads L-Guard’s vision and strategy, driven by a mission to use technology to save lives.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Ramadhani&amp;nbsp;Wanjenja&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;– With a background in embedded systems and intelligent hardware, Ramadhani leads the technical architecture of L-Guard. His personal experience surviving a motorcycle accident shaped the direction of the solution and its focus on immediate response.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Terry Manzi&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;– Raised in Kigali, Terry brings a systems and operations mindset, leading software-hardware integration, deployment, and partnerships to ensure L-Guard works effectively in real environments.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Erioluwa&amp;nbsp;Olowoyo&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;– With a focus on product design and user experience, Erioluwa ensures L-Guard&amp;nbsp;remains&amp;nbsp;intuitive and accessible. His path into technology was self-driven, shaped by a commitment to building solutions that work for real users in real contexts.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="3"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;What this&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;represents&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:false,&amp;quot;134245529&amp;quot;:false,&amp;quot;335559738&amp;quot;:280,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The Top Launch startup reflects what it means to build with intention from the start.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This is not about having everything finished. It is about&amp;nbsp;identifying&amp;nbsp;a real problem, building toward a solution, and continuing to move forward with clarity and purpose.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;As L-Guard&amp;nbsp;Ltd.&amp;nbsp;continues to develop, their work highlights the impact student founders can have when they combine technical&amp;nbsp;skill&amp;nbsp;with lived experience and a clear mission.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Partner tools behind the build&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Alongside mentorship and community, Imagine Cup startups gain access to tools that support how their solutions continue to take shape.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Through &lt;A class="lia-external-url" href="https://github.com/education" target="_blank"&gt;GitHub Education&lt;/A&gt;, teams use the Student Developer Pack, collaborate with AI-assisted coding through Copilot, and build on a platform used by developers around the world.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With&amp;nbsp;&lt;A class="lia-external-url" href="https://replit.com/?utm_source=google&amp;amp;utm_medium=cpc&amp;amp;utm_campaign=23592956783&amp;amp;utm_term=replit%20coding&amp;amp;utm_content=798115366017&amp;amp;utm_adgroup=196654857034&amp;amp;matchtype=b&amp;amp;network=g&amp;amp;device=c&amp;amp;gclid=Cj0KCQjwv-LOBhCdARIsAM5hdKdADNTlQnJuMeSm-PMyjBW-XPnh8y0zkb1gXi73hMu5SAw8DjXwtt8aArSBEALw_wcB&amp;amp;gad_source=1&amp;amp;gad_campaignid=23592956783&amp;amp;gbraid=0AAAAA-k_HqLsaR4Wt5ZFI_xbcLhXZV-Ry&amp;amp;gclid=Cj0KCQjwv-LOBhCdARIsAM5hdKdADNTlQnJuMeSm-PMyjBW-XPnh8y0zkb1gXi73hMu5SAw8DjXwtt8aArSBEALw_wcB" target="_blank"&gt;Replit&lt;/A&gt;, teams&amp;nbsp;build, test, and deploy using natural language in an AI-powered environment designed for rapid iteration.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Together, these tools give startups the flexibility and support to keep moving forward as they&amp;nbsp;scale&amp;nbsp;their solutions.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2026 17:24:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/student-developer-blog/introducing-the-2026-imagine-cup-top-launch-startup/ba-p/4510342</guid>
      <dc:creator>StudentDeveloperTeam</dc:creator>
      <dc:date>2026-04-10T17:24:08Z</dc:date>
    </item>
    <item>
      <title>Introducing the 2026 Imagine Cup World Finalists</title>
      <link>https://techcommunity.microsoft.com/t5/student-developer-blog/introducing-the-2026-imagine-cup-world-finalists/ba-p/4509670</link>
      <description>&lt;H4&gt;&lt;STRONG&gt;Three startups advancing. One global stage ahead.&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;The defining difference this year was how these startups built their solutions, not just what they built.&lt;/P&gt;
&lt;P&gt;Across the semifinals, founders demonstrated a clear understanding of the problems they are solving, how Microsoft AI strengthened their solutions, and where they can go next. This was not an early exploration. This was focused execution.&lt;/P&gt;
&lt;P&gt;The level of clarity, depth, and progress across all semifinalists set a new standard.&lt;/P&gt;
&lt;P&gt;From that group, three startups now move forward to the Imagine Cup World Championship.&lt;/P&gt;
&lt;P&gt;These finalists reflect where the &lt;A class="lia-external-url" href="https://imaginecup.microsoft.com/" target="_blank" rel="noopener"&gt;Imagine Cup&lt;/A&gt; is headed. Student founders are building with real users in mind, thinking beyond prototypes, and developing solutions designed to scale.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Meet the startups&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Listed in alphabetical order:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;CopyFlag: AI-powered creator protection at scale&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;EM&gt;United Kingdom&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;CopyFlag is addressing a growing challenge in the generative AI era, where original work can be copied, modified, and redistributed at scale, often without creators knowing it is happening.&lt;/P&gt;
&lt;P&gt;The startup has built an Azure AI-powered platform that detects both direct copies and AI-modified versions of designs across the internet and automatically initiates takedowns. This transforms what has traditionally been a manual and expensive process into something creators can actually use, giving them a way to protect their work without requiring significant legal or technical resources.&lt;/P&gt;
&lt;P&gt;Early results show clear demand, with thousands of creators already testing the platform and tens of thousands of infringements identified across marketplaces. By making intellectual property protection more accessible, CopyFlag is helping level the playing field so creators can continue building and growing with confidence.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Patrick Brown&lt;/STRONG&gt; – A final-year Biochemistry student, Patrick combines a background in computer vision with hands-on experience building online businesses. After experiencing copyright infringement firsthand, he set out to build CopyFlag, focused on giving creators and small businesses the tools to detect and protect their work at scale.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Revora Health: AI-powered recovery, built for real life&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;EM&gt;United States&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Revora Health is addressing a critical gap in how patients access and experience recovery, where long wait times and limited support often leave individuals navigating rehabilitation on their own.&lt;/P&gt;
&lt;P&gt;The startup has built a recovery marketplace paired with an Azure-powered AI agent that provides 24/7 triage and real-time movement feedback. Using computer vision and multimodal models, Revora enables patients to perform rehabilitation exercises correctly while receiving personalized, contextual guidance throughout their recovery journey. This shifts what has traditionally been a slow, reactive process into something continuous and accessible, giving patients more agency while enabling providers to extend care beyond scheduled sessions.&lt;/P&gt;
&lt;P&gt;Early results show strong engagement, with active users already participating in a growing private beta as the startup works to scale its marketplace model. By combining accessible care with intelligent, real-time support, Revora Health is helping patients recover with greater confidence while creating a more scalable and effective model for physical therapy.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Surya Kukkapalli &lt;/STRONG&gt;– An MBA student, Surya brings together a background in software engineering and firsthand experience as a personal trainer. After seeing the challenges patients face navigating recovery, he set out to build Revora Health to make specialized care more accessible and to give patients the tools and support they need to recover with confidence.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;SpoilSafe: AI-powered freshness intelligence for the cold chain&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;EM&gt;United States&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;SpoilSafe is addressing a critical gap in the cold chain, where limited visibility into food freshness leads to waste, rejected inventory, and lost revenue.&lt;/P&gt;
&lt;P&gt;The startup has built a food freshness intelligence platform that uses low-cost sensors to detect gases emitted as food begins to spoil, such as ethylene and ammonia, and combines that data with machine learning models to generate real-time freshness scores and time-to-spoilage predictions. This shifts cold chain management from reactive monitoring to proactive decision-making, giving operators clear insight into what inventory is at risk and what actions to take.&lt;/P&gt;
&lt;P&gt;By moving beyond traditional temperature and humidity tracking, SpoilSafe enables earlier intervention, helping reduce waste while improving operational efficiency across warehouses, distributors, and retailers.&lt;/P&gt;
&lt;P&gt;As the startup continues to develop its MVP and expand pilot programs, the focus is on validating performance across product categories and building a scalable deployment model. By making food spoilage predictable instead of inevitable, SpoilSafe is helping create a more efficient and resilient food supply chain.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Advika Vuppala&lt;/STRONG&gt; – A hands-on builder with experience across robotics and independent research, Advika brings a practical, problem-solving mindset to the startup. She is also committed to expanding access to engineering, leading workshops and initiatives that have engaged thousands of women in tech.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Rohan Ganesh &lt;/STRONG&gt;– A self-taught builder, Rohan developed his skills by experimenting with new technologies and learning by doing. He brings adaptability and speed to product development, iterating quickly while keeping the larger system in focus.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Troy McBride&lt;/STRONG&gt; – With a strong foundation in math and analytical thinking, Troy approaches challenges with structure and precision. He focuses on breaking down complex systems into clear, solvable components, ensuring the startup’s work is both ambitious and technically sound.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Vivaan Sawant&lt;/STRONG&gt; – Driven by curiosity and discipline, Vivaan focuses on building systems that balance performance with real-world impact. He brings a mindset of continuous improvement, helping shape solutions that are designed to scale and hold up in real environments.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;What’s Next&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;From here, finalists continue building. Refining their product. Strengthening how they communicate the value of what they have created.&lt;/P&gt;
&lt;P&gt;Through ongoing mentorship, they will work closely with experienced founders, engineers, and industry leaders to sharpen both their technology and their positioning ahead of the global stage.&lt;/P&gt;
&lt;P&gt;At the World Championship, one team will be named the 2026 Imagine Cup World Champion, receiving $100,000 USD, a mentorship session with Microsoft Chairman and CEO Satya Nadella, and opportunities for deeper partnership with Microsoft for Startups to continue building and scaling what comes next.&lt;/P&gt;
&lt;P&gt;The World Championship winners will be announced at &lt;A class="lia-external-url" href="https://build.microsoft.com/en-US/home" target="_blank" rel="noopener"&gt;Microsoft Build&lt;/A&gt; on June 2nd. Join us on &lt;A class="lia-external-url" href="https://www.linkedin.com/showcase/microsoft-imagine-cup/" target="_blank" rel="noopener"&gt;LinkedIn&lt;/A&gt;, &lt;A class="lia-external-url" href="https://www.instagram.com/microsoftimaginecup/" target="_blank" rel="noopener"&gt;Instagram&lt;/A&gt;, &lt;A class="lia-external-url" href="https://twitter.com/MSFTImagine" target="_blank" rel="noopener"&gt;X&lt;/A&gt;, or &lt;A class="lia-external-url" href="https://www.facebook.com/MSFTImagine" target="_blank" rel="noopener"&gt;Facebook&lt;/A&gt; as we follow their journey to the championship.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Partner tools behind the build&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Alongside mentorship and community, Imagine Cup startups gain access to tools that support how their solutions continue to take shape.&lt;/P&gt;
&lt;P&gt;Through &lt;A class="lia-external-url" href="https://github.com/education" target="_blank" rel="noopener"&gt;GitHub Education&lt;/A&gt;, teams use the Student Developer Pack, collaborate with AI-assisted coding through Copilot, and build on a platform used by developers around the world.&lt;/P&gt;
&lt;P&gt;With &lt;A class="lia-external-url" href="https://replit.com/" target="_blank" rel="noopener"&gt;Replit&lt;/A&gt;, teams build, test, and deploy using natural language in an AI-powered environment designed for rapid iteration.&lt;/P&gt;
&lt;P&gt;Together, these tools give startups the flexibility and support to keep moving forward as they scale their solutions.&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2026 15:36:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/student-developer-blog/introducing-the-2026-imagine-cup-world-finalists/ba-p/4509670</guid>
      <dc:creator>StudentDeveloperTeam</dc:creator>
      <dc:date>2026-04-10T15:36:18Z</dc:date>
    </item>
    <item>
      <title>Classic LTI App Retirements, Preview of OneDrive LTI Migration Tool for Canvas</title>
      <link>https://techcommunity.microsoft.com/t5/education-blog/classic-lti-app-retirements-preview-of-onedrive-lti-migration/ba-p/4509380</link>
      <description>&lt;H4&gt;&lt;BR /&gt;&lt;STRONG&gt;Classic Microsoft LTI® Apps Retiring in 2026: What You Need to Know and How to Prepare&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;Microsoft is continuing its investment in a unified, modern &lt;STRONG&gt;&lt;A class="lia-external-url" href="https://aka.ms/M365LTIGABlog" target="_blank" rel="noopener"&gt;Microsoft 365 LTI&lt;/A&gt;&lt;/STRONG&gt; experience. As part of this evolution, several classic Microsoft LTI apps will be retired in &lt;STRONG&gt;September 2026&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;This post outlines:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Which classic LTI apps are retiring and when&lt;/LI&gt;
&lt;LI&gt;What happens to existing course links and content created in the retiring classic LTI apps&lt;/LI&gt;
&lt;LI&gt;What actions you should take now to prepare and start transitioning to Microsoft 365 LTI&lt;/LI&gt;
&lt;LI&gt;New migration tooling available to support transition&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;&lt;STRONG&gt;Classic Microsoft LTI® Apps Retiring September 17, 2026&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;As we shared last September in our &lt;A class="lia-external-url" href="https://aka.ms/M365LTIGABlog" target="_blank" rel="noopener"&gt;Microsoft 365 LTI GA release Blog&lt;/A&gt;, the following &lt;STRONG&gt;classic Microsoft LTI apps will be&lt;/STRONG&gt; &lt;STRONG&gt;retired on September 17, 2026&lt;/STRONG&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft OneDrive LTI&lt;/STRONG&gt; (1.3)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;OneNote Class Notebook LTI&lt;/STRONG&gt; (1.1)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft Reflect LTI&lt;/STRONG&gt; (1.3)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft Teams Assignments LTI&lt;/STRONG&gt; (1.3)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;After September 17, 2026, any links or placements of these classic apps in courses will stop working.&lt;/STRONG&gt; However, the files, notebooks, assignments, and check-ins created by these classic apps will continue to be available to copy and reuse.&lt;/P&gt;
&lt;P&gt;Replacements for these classic experiences are now available through the unified &lt;A class="lia-external-url" href="https://aka.ms/LMSAdminDocs" target="_blank" rel="noopener"&gt;Microsoft 365 LTI&lt;/A&gt; built on the &lt;A class="lia-external-url" href="https://www.1edtech.org/standards/lti" target="_blank" rel="noopener"&gt;LTI® 1.3 Advantage standard&lt;/A&gt;. This delivers modern security, simplified identity mapping with Microsoft Entra, LMS enrollment and grade syncing, and a single deployment model for LMS administrators. We’ll continue to update our migration guides as additional tools and guidance become available.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;NEW: Preview the OneDrive LTI Migration Tool for Canvas&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;STRONG&gt;Canvas LMS Customers: &lt;/STRONG&gt;We are excited to announce that the &lt;A class="lia-external-url" href="https://aka.ms/CanvasMigrationGuide" target="_blank" rel="noopener"&gt;Microsoft OneDrive LTI Migration Tool for Canvas&lt;/A&gt;&amp;nbsp;is now available in Preview!&lt;BR /&gt;This tool helps institutions using Canvas LMS migrate OneDrive content links from the classic Microsoft OneDrive LTI app to the new Microsoft 365 LTI app — preserving existing file links in courses so educators and students experience a seamless transition.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;For new preview deployments:&lt;/STRONG&gt; detailed deployment instructions are available in the &lt;A class="lia-external-url" href="https://aka.ms/CanvasMigrationGuide" target="_blank" rel="noopener"&gt;Canvas migration guide&lt;/A&gt;, which has been updated with configuration steps and guidance for using the migration tool.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;If you participated in the private preview: &lt;/STRONG&gt;If you have already deployed the OneDrive LTI Migration Tool in Canvas during the private preview, no action is required. Your existing deployment will continue to work as part of the Public Preview, and in GA. If you deployed the private preview in a testing environment, we suggest that you follow the new &lt;A class="lia-external-url" href="https://aka.ms/CanvasMigrationGuide" target="_blank" rel="noopener"&gt;Canvas migration guide&lt;/A&gt; in your production environment.&lt;/P&gt;
&lt;P&gt;Below is guidance to assist with transition from the other classic LTI apps and on additional LMS platforms. We will continue to communicate updates to this guidance as it evolves.&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;If you use the classic Microsoft OneDrive LTI 1.3 with an LMS other than Canvas&lt;/STRONG&gt;&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://aka.ms/LMSAdminDocs" target="_blank" rel="noopener"&gt;Deploy Microsoft 365 LTI&lt;/A&gt; with the OneDrive app enabled and guide educators to use the new Microsoft 365 LTI (Microsoft Education menus) to create file links or embeds in course content.&lt;/LI&gt;
&lt;LI&gt;Disable/hide/remove placements of the classic Microsoft OneDrive LTI app in your LMS, but do not uninstall the app itself.&lt;/LI&gt;
&lt;LI&gt;Files linked or embedded with the classic Microsoft OneDrive LTI will stop working when the app is retired, so those links and embeds must be replaced using the new Microsoft 365 LTI (Microsoft Education) app ahead of the retirement date.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;&lt;STRONG&gt;OneNote Class Notebook LTI 1.1 (All LMS platforms)&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;The &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/educationblog/new-onenote-class-notebook-lti-1-3-integration-in-the-microsoft-365-lti-app/4469797" target="_blank" rel="noopener" data-lia-auto-title="new OneNote Class Notebook LTI 1.3 integration" data-lia-auto-title-active="0"&gt;new OneNote Class Notebook LTI 1.3 integration &lt;/A&gt;is now available in the Microsoft 365 LTI app, with automatic roster sync and streamlined setup.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/LMSAdminDocs" target="_blank" rel="noopener"&gt;Deploy Microsoft 365 LTI&lt;/A&gt; with the OneNote Class Notebook app enabled, and guide educators to use the new app.&lt;/LI&gt;
&lt;LI&gt;Disable/hide/remove placements of the classic OneNote integration, but do not uninstall the app, to avoid migration issues during the transition.&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;While there is no direct migration path from OneNote Class Notebook LTI 1.1 notebooks to Microsoft 365 LTI Class Notebooks, educators can &lt;STRONG&gt;copy sections/pages from one notebook to another&lt;/STRONG&gt; using the right-click menu on Sections and Pages (and selecting “Move/Copy”) in OneNote on Windows, OneNote Web, and OneNote for Mac. Instructions are also available for content transfer using OneNote on &lt;A class="lia-external-url" href="https://support.microsoft.com/office/move-or-copy-notes-in-onenote-for-mac-7faf1c7f-d6c6-420e-a65c-5ac7c6f6ec27" target="_blank" rel="noopener"&gt;Mac&lt;/A&gt;, &lt;A class="lia-external-url" href="https://support.microsoft.com/office/move-or-copy-notes-between-notebooks-and-sections-in-onenote-for-ipad-or-iphone-94a516da-35f1-46b4-9ed6-a7c712324bab" target="_blank" rel="noopener"&gt;iOS&lt;/A&gt;, or &lt;A class="lia-external-url" href="https://support.microsoft.com/office/move-or-copy-notes-between-notebooks-and-sections-in-onenote-for-ipad-or-iphone-94a516da-35f1-46b4-9ed6-a7c712324bab" target="_blank" rel="noopener"&gt;Android&lt;/A&gt;.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;&lt;STRONG&gt;Microsoft Teams Assignments LTI 1.3 (All LMS platforms)&lt;/STRONG&gt;&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://aka.ms/LMSAdminDocs" target="_blank" rel="noopener"&gt;Deploy Microsoft 365 LTI&lt;/A&gt; with the Assignments app enabled, and guide educators to create assignments using the new app.&lt;/LI&gt;
&lt;LI&gt;Disable/hide/remove placements of the legacy Teams Assignments LTI app as soon as you install the new Microsoft 365 LTI and enable the Assignments app, and guide your users to copy their existing assignments using the new app.&lt;/LI&gt;
&lt;LI&gt;Teams Assignments created by the classic LTI 1.3 app can be reused in the new Microsoft 365 LTI Assignments experience (which does not require a Team).&lt;/LI&gt;
&lt;LI&gt;Assignments created in the LMS or via the Assignments app in Microsoft Teams can be copied and reused using the &lt;STRONG&gt;Create from Existing&lt;/STRONG&gt; functionality in the Microsoft 365 LTI (Microsoft Education) Assignment instructor flow.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;&lt;STRONG&gt;Microsoft Reflect LTI 1.3 (All LMS platforms)&lt;/STRONG&gt;&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://aka.ms/LMSAdminDocs" target="_blank" rel="noopener"&gt;Deploy Microsoft 365 LTI&lt;/A&gt; with the Reflect app enabled, and guide educators to create new Reflects in the new Microsoft 365 LTI experience.&lt;/LI&gt;
&lt;LI&gt;There is no migration path for check-ins created in the classic Reflect LTI 1.3 app to the new Microsoft 365 LTI Reflect app.&lt;/LI&gt;
&lt;LI&gt;We recommend transitioning to the new Reflect experience in Microsoft 365 LTI as soon as possible, and removing the classic app ahead of the September 17, 2026 retirement.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;&lt;STRONG&gt;Stay Connected&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;We love hearing from you! There are a few ways to stay engaged with Microsoft and your peers on LMS integrations.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Follow this blog! &lt;/STRONG&gt;Click Register at the top right to create an account and profile for the Microsoft Tech Community and Follow the Education Blog so you don’t miss any of our updates.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Join the free&amp;nbsp;&lt;/STRONG&gt;&lt;A class="lia-external-url" href="https://aka.ms/joinEIP" target="_blank" rel="noopener"&gt;Education Insiders Program&lt;/A&gt; to preview updates, get support from other community members, meet the team, and influence the roadmap.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Join us for Microsoft 365 LTI office hours&lt;/STRONG&gt; to connect with your peers and share feedback directly with Microsoft experts.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&amp;nbsp; &amp;nbsp; When: 1st and 3rd Thursday of each month @ 11AM EST&lt;BR /&gt;&amp;nbsp; &amp;nbsp; Where: https://aka.ms/LTIOfficeHours&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Getting help and giving feedback&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;LMS and Microsoft 365 admins can contact Microsoft&amp;nbsp;&lt;A class="lia-external-url" href="https://aka.ms/edusupport" target="_blank" rel="noopener"&gt;Education Support&lt;/A&gt;&amp;nbsp;to help resolve configuration and deployment issues, for themselves or on behalf of users.&lt;/LI&gt;
&lt;LI&gt;Educators and Learners can contact support or give feedback directly from the app through the help and feedback menu.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-clear-both"&gt;TJ Vering&lt;BR /&gt;Principal Product Manager&lt;BR /&gt;Microsoft Education&lt;BR /&gt;https://linkedin.com/in/tvering&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Learning Tools Interoperability® (LTI®) is a trademark of the 1EdTech Consortium, Inc. (&lt;A class="lia-external-url" href="https://1edtech.org/" target="_blank" rel="noopener"&gt;https://1edtech.org/&lt;/A&gt;)&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 08 Apr 2026 13:30:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/education-blog/classic-lti-app-retirements-preview-of-onedrive-lti-migration/ba-p/4509380</guid>
      <dc:creator>tjvering</dc:creator>
      <dc:date>2026-04-08T13:30:00Z</dc:date>
    </item>
    <item>
      <title>New information literacy features in Search Progress now generally available</title>
      <link>https://techcommunity.microsoft.com/t5/education-blog/new-information-literacy-features-in-search-progress-now/ba-p/4508941</link>
      <description>&lt;P&gt;Hello all!&lt;/P&gt;
&lt;P&gt;Last September, we shared a preview of &lt;A class="lia-external-url" href="https://techcommunity.microsoft.com/blog/educationblog/empowering-learners-for-the-age-of-ai-new-information-literacy-features-coming-t/4443052" target="_blank" rel="noopener"&gt;new information literacy features coming to Search Progress&lt;/A&gt; — designed to help students pause, think critically, and show their reasoning as they research online. Today, we’re excited to share that these features are&amp;nbsp;&lt;STRONG&gt;generally available&lt;/STRONG&gt; for all educators using Search Progress through Assignments in &lt;A class="lia-external-url" href="https://support.microsoft.com/topic/create-an-assignment-in-microsoft-teams-23c128d0-ec34-4691-9511-661fba8599be" target="_blank" rel="noopener"&gt;Teams for Education&lt;/A&gt; and the &lt;A class="lia-external-url" href="https://learn.microsoft.com/microsoft-365/lti/?view=o365-worldwide" target="_blank" rel="noopener"&gt;Microsoft 365 LTI®&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;A special thank you to the educators who participated in the preview and shared feedback along the way; your insights helped shape these features into what they are today.&lt;/P&gt;
&lt;H2&gt;See it in action&lt;/H2&gt;
&lt;P&gt;Want a walkthrough before reading the details? Watch our Elevate Signature Series session, “Show Me Your Thinking,” where Dr. Geri Gillespy and I discuss future-ready skills along with Search Progress setup, the full educator-to-student workflow, and how these skills connect to global assessment frameworks like PISA 2029.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;div data-video-id="https://youtu.be/oIS5UO7Wr6U?si=ZTB2ouBq1j2LwthO/1775501633897" data-video-remote-vid="https://youtu.be/oIS5UO7Wr6U?si=ZTB2ouBq1j2LwthO/1775501633897" class="lia-video-container lia-media-is-center lia-media-size-large"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FoIS5UO7Wr6U%3Ffeature%3Doembed&amp;amp;display_name=YouTube&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DoIS5UO7Wr6U&amp;amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FoIS5UO7Wr6U%2Fhqdefault.jpg&amp;amp;type=text%2Fhtml&amp;amp;schema=youtube" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;H2&gt;Why process matters more than ever&lt;/H2&gt;
&lt;P&gt;Information literacy skills like verifying sources, understanding context, and thinking critically are foundational for responsible and effective navigation of online information. These skills become even more critical as AI becomes an integral part of learning and daily life, where students don’t just need access to information, they need to know how to evaluate it.&lt;/P&gt;
&lt;P&gt;To ensure these features were developed in alignment with the latest in online reasoning research, we consulted with experts from the &lt;A class="lia-external-url" href="https://www.inquirygroup.org/about" target="_blank" rel="noopener"&gt;Digital Inquiry Group&lt;/A&gt; — a team with decades of experience as curriculum designers, classroom educators, researchers, and teacher educators — recognized with awards from UNESCO, the American Educational Research Association, and the School Library Association, to name a few.&lt;/P&gt;
&lt;H2&gt;What’s now available&lt;/H2&gt;
&lt;P&gt;The enhanced Search Progress features introduce structured activities and checkpoints — cognitive forcing functions that encourage students to pause, consider, and articulate their reasoning as they navigate the complex world of online information. Here’s what you can now enable for your assignments:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Evaluating source reputability&lt;/STRONG&gt;: Instead of relying solely on what a source says about itself, students investigate the individuals or organizations behind the information by looking into what other sources say about them, like how employers use references in a job interview.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cross-checking and lateral reading&lt;/STRONG&gt;: “Using the internet to check the internet”, students compare information and perspectives across multiple sources to reveal patterns, differences, and possible inaccuracies.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Impact awareness&lt;/STRONG&gt;: Students consider what could be at risk if the information is inaccurate or fabricated with the new "factual importance" checkpoint. For instance, health advice carries different consequences than an AI-generated image of a cat dancing at the disco.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Identifying source purpose&lt;/STRONG&gt;: Information is created for a reason. Students consider who created a source, and whether it’s trying to inform, persuade, sell, or entertain.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Metacognitive reflection&lt;/STRONG&gt;: Students reflect on the research process itself including why certain sources stood out, which strategies worked best, and how to apply those learnings in the future.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Not just for research projects&lt;/H2&gt;
&lt;P&gt;These features aren’t only for formal research assignments. They’re designed for class activities that involve online research, whether students are exploring a new topic, gathering sources for a presentation, or verifying information for a discussion. The goal is to build habits that transfer throughout the digital information ecosystem, from navigating social media to evaluating AI-generated content. For example:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A science educator assigns a pre-lab research task on chemical reactions. By enabling Source Reputation and Factual Importance, students learn to prioritize safety data sheets and academic sources over unverified blogs and to think about why accuracy matters when the stakes are high.&lt;/LI&gt;
&lt;LI&gt;A social studies educator uses Cross-check for an assignment focusing on current events. Students discover that a viral statistic has been reported differently across sources, and they practice tracing claims back to their origin — building lateral reading habits they’ll carry into their media consumption outside of school.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;What educators are saying&lt;/H2&gt;
&lt;P&gt;Teacher librarians, in particular, have told us that the “process over product” approach gives them something they’ve been missing: visibility into student inquiry as it unfolds, not just what students turn in. With new scaffolds that support&amp;nbsp;&lt;STRONG&gt;cross-checking&lt;/STRONG&gt; and the investigation of &lt;STRONG&gt;source reputation&lt;/STRONG&gt;, Search Progress now covers more of the skills they’ve been trying to teach.&lt;/P&gt;
&lt;P&gt;We’ve heard from educators that the explanation prompts reveal a side of student thinking that traditional assignments don't often capture. During an early pilot, students pushed back on a text field that didn’t scroll to expand, not because they wanted less writing, but because they had more to say about why they chose their sources and wanted more space to explain their thinking. Students who described themselves as not being strong essay writers found a different way to show their thinking, and when they knew that their reasoning mattered as much as the final product, it changed how they engaged with the assignment.&lt;/P&gt;
&lt;H2&gt;Preparing students with future-ready skills for the age of AI&lt;/H2&gt;
&lt;P&gt;As educators worldwide work to build students’ information literacy skills, global frameworks are evolving to match. The OECD recently published a &lt;A class="lia-external-url" href="https://www.oecd.org/en/about/projects/pisa-2029-media-and-artificial-intelligence-literacy.html" target="_blank" rel="noopener"&gt;first draft of the PISA 2029 Media and Artificial Intelligence Literacy (MAIL) assessment framework&lt;/A&gt; — a new assessment that will measure 15-year-olds’ ability to critically evaluate digital and AI-generated content across all participating countries.&lt;/P&gt;
&lt;P&gt;We were interested to see how closely the skills that Search Progress helps build align with the competences this framework describes. The MAIL assessment places significant emphasis on evaluating source credibility, assessing purpose and bias, and cross-checking information across multiple sources — all skills that Search Progress is designed to support through structured activities and checkpoints in the flow of research.&lt;/P&gt;
&lt;P&gt;Educators have also shared that these features help address a tension many are navigating right now: how to maintain academic integrity when AI-generated work is increasingly difficult to distinguish from student work. Rather than relying on detection tools at the end of the pipeline, Search Progress makes the research process itself the artifact, which gives educators evidence of student thinking throughout. Of course, information literacy is broader than any single tool. The MAIL framework also includes competences around content creation and collaborative digital participation that go beyond what Search Progress currently addresses. But for the core skill of analysing and evaluating online information — which the framework highlights as one of its most heavily weighted competences — Search Progress can help you give your students meaningful practice right now.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By integrating these research habits into everyday assignments, you’re helping students build skills that will serve them well beyond any single assessment — from navigating social media to evaluating AI-generated content in their daily lives.&lt;/P&gt;
&lt;H2&gt;Getting started&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;Open &lt;STRONG&gt;Assignments &lt;/STRONG&gt;in Teams for Education (or your LMS via the Microsoft 365 LTI).&lt;/LI&gt;
&lt;LI&gt;Create a new assignment and select &lt;STRONG&gt;Search Progress&lt;/STRONG&gt; as a Learning Accelerator.&lt;/LI&gt;
&lt;LI&gt;Choose which information literacy features to enable for this assignment; you can mix and match based on the lesson.&lt;/LI&gt;
&lt;LI&gt;Customize the checkpoint card prompts to fit your subject area and grade level.&lt;/LI&gt;
&lt;LI&gt;Assign it to your class and watch the research process unfold.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Requirements&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Available to all Microsoft 365 Education customers&lt;/LI&gt;
&lt;LI&gt;Classes set up in Teams for Education or the Microsoft 365 LTI&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Helpful links&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;📘 &lt;A class="lia-external-url" href="https://learn.microsoft.com/training/modules/teach-information-literacy-search-coach-search-progress/" target="_blank" rel="noopener"&gt;Take the MS Learn course&lt;/A&gt; — Intro course for educators&lt;/LI&gt;
&lt;LI&gt;📘 &lt;A class="lia-external-url" href="https://learn.microsoft.com/microsoft-365/lti/?view=o365-worldwide" target="_blank" rel="noopener"&gt;Microsoft 365 LTI app overview&lt;/A&gt; — Bring Search Progress into your LMS&lt;/LI&gt;
&lt;LI&gt;💬 &lt;A class="lia-external-url" href="https://aka.ms/JoinEIP" target="_blank" rel="noopener"&gt;Join the Education Insiders Program&lt;/A&gt; — Share feedback directly with our product team&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We’re committed to helping you foster information and AI literacy, and your feedback continues to shape how these tools evolve. Join the Search Progress channel in the Education Insiders Program to connect with other educators, attend community calls, and share your experience directly with the product team. If you’re not yet an EIP member, sign up here: &lt;A class="lia-external-url" href="https://aka.ms/JoinEIP" target="_blank" rel="noopener"&gt;aka.ms/JoinEIP&lt;/A&gt;. &lt;BR /&gt;&lt;BR /&gt;Have questions or ideas? Drop them in the comments below. I’d love to hear how you’re using these features in your classroom!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Until next time,&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Emma Gray&lt;BR /&gt;&lt;/STRONG&gt;Product Manager II&lt;BR /&gt;&lt;STRONG&gt;Microsoft Education&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Learning Tools Interoperability® (LTI®) is a trademark of the 1EdTech Consortium, Inc. (&lt;A class="lia-external-url" href="https://1edtech.org/" target="_blank" rel="noopener"&gt;1edtech.org&lt;/A&gt;)&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 06 Apr 2026 22:07:19 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/education-blog/new-information-literacy-features-in-search-progress-now/ba-p/4508941</guid>
      <dc:creator>EmmaGray</dc:creator>
      <dc:date>2026-04-06T22:07:19Z</dc:date>
    </item>
    <item>
      <title>Build and Deploy a Microsoft Foundry Hosted Agent: A Hands-On Workshop</title>
      <link>https://techcommunity.microsoft.com/t5/educator-developer-blog/build-and-deploy-a-microsoft-foundry-hosted-agent-a-hands-on/ba-p/4508426</link>
      <description>&lt;ARTICLE&gt;
&lt;SECTION&gt;
&lt;P&gt;Agents are easy to demo, hard to ship.&lt;/P&gt;
&lt;P&gt;Most teams can put together a convincing prototype quickly. The harder part starts afterwards: shaping deterministic tools, validating behaviour with tests, building a CI path, packaging for deployment, and proving the experience through a user-facing interface. That is where many promising projects slow down.&lt;/P&gt;
&lt;P&gt;This workshop helps you close that gap without unnecessary friction. You get a guided path from local run to deployment handoff, then complete the journey with a working chat UI that calls your deployed hosted agent through the project endpoint.&lt;/P&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;What You Will Build&lt;/H2&gt;
&lt;P&gt;This is a hands-on, end-to-end learning experience for building and deploying AI agents with Microsoft Foundry. The lab provides a guided, practical journey through hosted-agent development, including deterministic tool design, prompt-guided workflows, CI validation, deployment preparation, and UI integration.&lt;/P&gt;
&lt;P&gt;It is a .NET 10, prompt-based development lab designed to reduce setup friction with a ready-to-run experience: local development, Copilot-assisted coding, MCP-assisted workflow options during deployment, CI, secure deployment to Azure, and a working chat UI.&lt;/P&gt;
&lt;P&gt;By the end, you will have:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A local hosted agent that responds via the responses contract&lt;/LI&gt;
&lt;LI&gt;Deterministic tool improvements in core logic with xUnit coverage&lt;/LI&gt;
&lt;LI&gt;A GitHub Actions CI workflow for restore, build, test, and container validation&lt;/LI&gt;
&lt;LI&gt;An Azure-ready deployment path using azd, ACR image publishing, and Foundry manifest apply&lt;/LI&gt;
&lt;LI&gt;A Blazor chat UI that calls openai/v1/responses with agent_reference&lt;/LI&gt;
&lt;LI&gt;A repeatable implementation shape that teams can adapt to real projects&lt;/LI&gt;
&lt;/UL&gt;
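&lt;P&gt;To make the last two items concrete, here is a rough sketch of how a client addresses a deployed hosted agent through the project endpoint. Everything in it is illustrative: the endpoint URL, the agent name, and the exact payload fields are assumptions for this sketch, so treat the lab’s own code as the authoritative contract.&lt;/P&gt;

```python
import json

# Sketch only: the project endpoint and agent name below are hypothetical
# placeholders; your Foundry project supplies the real values.
project_endpoint = "https://example-project.services.ai.azure.com/api/projects/demo"
agent_name = "workshop-agent"

# A deployed hosted agent is addressed by reference through the project's
# responses route, rather than by naming a model deployment directly.
url = project_endpoint + "/openai/v1/responses"
payload = {
    "agent": {"type": "agent_reference", "name": agent_name},
    "input": "Summarize the deployment checklist.",
}

print(url)
print(json.dumps(payload, indent=2))
```

&lt;P&gt;The notable difference from a plain model call is that the request targets an agent by reference rather than a model deployment, which is why the same chat UI can keep working as the agent’s internals evolve.&lt;/P&gt;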
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Who This Lab Is For&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;AI developers and software engineers who prefer learning by building&lt;/LI&gt;
&lt;LI&gt;Motivated beginners who want a guided, step-by-step path&lt;/LI&gt;
&lt;LI&gt;Experienced developers who want a practical hosted-agent reference implementation&lt;/LI&gt;
&lt;LI&gt;Architects evaluating deployment shape, validation strategy, and operational readiness&lt;/LI&gt;
&lt;LI&gt;Technical decision-makers who need to see how demos become deployable systems&lt;/LI&gt;
&lt;/UL&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Why Hosted Agents&lt;/H2&gt;
&lt;P&gt;Hosted agents run your code in a managed environment. That matters because it reduces the amount of infrastructure plumbing you need to manage directly, while giving you a clearer path to secure, observable, team-friendly deployments.&lt;/P&gt;
&lt;P&gt;Prompt-only demos are still useful. They are quick, excellent for ideation, and often the right place to start. Hosted agents complement that approach when you need custom code, tool-backed logic, and a deployment process that can be repeated by a team.&lt;/P&gt;
&lt;P&gt;Think of this lab as the bridge: you keep the speed of prompt-based iteration, then layer in the real-world patterns needed to run reliably.&lt;/P&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;What You Will Learn&lt;/H2&gt;
&lt;H3&gt;1) Orchestration&lt;/H3&gt;
&lt;P&gt;You will practise workflow-oriented reasoning through implementation-shape recommendations and multi-step readiness scenarios. The lab introduces orchestration concepts at a practical level, rather than as a dedicated orchestration framework deep dive.&lt;/P&gt;
&lt;H3&gt;2) Tool Integration&lt;/H3&gt;
&lt;P&gt;You will connect deterministic tools and understand how tool calls fit into predictable execution paths. This is a core focus of the workshop and is backed by tests in the solution.&lt;/P&gt;
&lt;H3&gt;3) Retrieval Patterns (What This Lab Covers Today)&lt;/H3&gt;
&lt;P&gt;This workshop does not include a full RAG implementation with embeddings and vector search. Instead, it focuses on deterministic local tools and hosted-agent response flow, giving you a strong foundation before adding retrieval infrastructure in a follow-on phase.&lt;/P&gt;
&lt;H3&gt;4) Observability&lt;/H3&gt;
&lt;P&gt;You will see light observability foundations through OpenTelemetry usage in the host and practical verification during local and deployed checks. This is introductory coverage intended to support debugging and confidence building.&lt;/P&gt;
&lt;H3&gt;5) Responsible AI&lt;/H3&gt;
&lt;P&gt;You will apply production-minded safety basics, including secure secret handling and review hygiene. A full Responsible AI policy and evaluation framework is not the primary goal of this workshop, but the workflow does encourage safe habits from the start.&lt;/P&gt;
&lt;H3&gt;6) Secure Deployment Path&lt;/H3&gt;
&lt;P&gt;You will move from local implementation to Azure deployment with a secure, practical workflow: azd provisioning, ACR publishing, manifest deployment, hosted-agent start, status checks, and endpoint validation.&lt;/P&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;The Learning Journey&lt;/H2&gt;
&lt;P&gt;The overall flow is simple and memorable: clone, open, run, iterate, deploy, observe.&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;clone -&amp;gt; open -&amp;gt; run -&amp;gt; iterate -&amp;gt; deploy -&amp;gt; observe&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;You are not expected to memorize every command. The lab is structured to help you learn through small, meaningful wins that build confidence.&lt;/P&gt;
&lt;H3&gt;Your First 15 Minutes: Quick Wins&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Open the repo and understand the lab structure in a few minutes&lt;/LI&gt;
&lt;LI&gt;Set project endpoint and model deployment environment variables&lt;/LI&gt;
&lt;LI&gt;Run the host locally and validate the responses endpoint&lt;/LI&gt;
&lt;LI&gt;Inspect the deterministic tools in WorkshopLab.Core&lt;/LI&gt;
&lt;LI&gt;Run tests and see how behaviour changes are verified&lt;/LI&gt;
&lt;LI&gt;Review the deployment path so local work maps to Azure steps&lt;/LI&gt;
&lt;LI&gt;Understand how the UI validates end-to-end behaviour after deployment&lt;/LI&gt;
&lt;LI&gt;Leave the first session with a working baseline and a clear next step&lt;/LI&gt;
&lt;/UL&gt;
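&lt;P&gt;The environment-variable and local-validation steps above can be sketched as a small smoke check. The variable names PROJECT_ENDPOINT and MODEL_DEPLOYMENT_NAME, the default port, and the request body are hypothetical stand-ins for this sketch; use the names the lab’s README actually defines.&lt;/P&gt;

```python
import json
import os
import urllib.request

# Hypothetical environment variable names; the lab defines the exact
# ones to set before running the host locally.
endpoint = os.environ.get("PROJECT_ENDPOINT", "http://localhost:8080")
model = os.environ.get("MODEL_DEPLOYMENT_NAME", "gpt-4o-mini")

# Build (but do not send) a request against the locally running host's
# responses route - the same route the chat UI calls after deployment.
url = endpoint.rstrip("/") + "/openai/v1/responses"
body = json.dumps({"model": model, "input": "ping"}).encode("utf-8")
request = urllib.request.Request(
    url, data=body, headers={"Content-Type": "application/json"}
)

print("smoke test target:", request.full_url)
# urllib.request.urlopen(request)  # uncomment once the host is running
```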
&lt;P&gt;That first checkpoint is important. Once you see a working loop on your own machine, the rest of the workshop becomes much easier to finish.&lt;/P&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Using Copilot and MCP in the Workflow&lt;/H2&gt;
&lt;P&gt;This lab emphasises prompt-based development patterns that help you move faster while still learning the underlying architecture. You are not only writing code, you are learning to describe intent clearly, inspect generated output, and iterate with discipline.&lt;/P&gt;
&lt;P&gt;Copilot supports implementation and review in the coding labs. MCP appears as a practical deployment option for hosted-agent lifecycle actions, provided your tools are authenticated to the correct tenant and project context.&lt;/P&gt;
&lt;P&gt;Together, this creates a development rhythm that is especially useful for learning:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Define intent with clear prompts&lt;/LI&gt;
&lt;LI&gt;Generate or adjust implementation details&lt;/LI&gt;
&lt;LI&gt;Validate behaviour through tests and UI checks&lt;/LI&gt;
&lt;LI&gt;Deploy and observe outcomes in Azure&lt;/LI&gt;
&lt;LI&gt;Refine based on evidence, not guesswork&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;That same rhythm transfers well to real projects. Even if your production environment differs, the patterns from this workshop are adaptable.&lt;/P&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Production-Minded Tips&lt;/H2&gt;
&lt;P&gt;As you complete the lab, keep a production mindset from day one:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Reliability: keep deterministic logic small, testable, and explicit&lt;/LI&gt;
&lt;LI&gt;Security: treat secrets, identity, and access boundaries as first-class concerns&lt;/LI&gt;
&lt;LI&gt;Observability: use telemetry and status checks to speed up debugging&lt;/LI&gt;
&lt;LI&gt;Governance: keep deployment steps explicit so teams can review and repeat them&lt;/LI&gt;
&lt;/UL&gt;
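&lt;P&gt;The reliability point is easiest to see in code: a deterministic tool is a small pure function with an explicit contract that a unit test can pin down. The function below is a hypothetical example sketched in Python, not a tool from the lab.&lt;/P&gt;

```python
def apply_discount(price_cents: int, percent: int) -> int:
    """Hypothetical deterministic tool: apply a percentage discount.

    Pure function: the same inputs always produce the same output,
    so behaviour changes are easy to verify with tests.
    """
    if percent not in range(0, 101):
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

# The contract doubles as documentation.
assert apply_discount(1000, 25) == 750
```

&lt;P&gt;Keeping logic like this outside the model call means tests can verify it exactly, with no prompt or inference in the loop.&lt;/P&gt;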
&lt;P&gt;You do not need to solve everything in one pass. The goal is to build habits that make your agent projects safer and easier to evolve.&lt;/P&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Start Today&lt;/H2&gt;
&lt;P&gt;If you have been waiting for the right time to move from “interesting demo” to “practical implementation”, this is the moment. The workshop is structured for self-study, and the steps are designed to keep your momentum high.&lt;/P&gt;
&lt;P&gt;Start here: &lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_Lab" target="_blank" rel="noopener"&gt;https://github.com/microsoft/Hosted_Agents_Workshop_Lab&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Want deeper documentation while you go? These official guides are great companions:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/foundry/agents/quickstarts/quickstart-hosted-agent" target="_blank" rel="noopener"&gt;Hosted agent quickstart&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/foundry/agents/how-to/deploy-hosted-agent" target="_blank" rel="noopener"&gt;Hosted agent deployment guide&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;When you finish, share what you built. Post a screenshot or short write-up in a GitHub issue/discussion, on social, or in comments with one lesson learned. Your example can help the next developer get unstuck faster.&lt;/P&gt;
&lt;H3&gt;Copy/Paste Progress Checklist&lt;/H3&gt;
&lt;PRE&gt;&lt;CODE&gt;[ ] Clone the workshop repo
[ ] Complete local setup and run the agent
[ ] Make one prompt-based behaviour change
[ ] Validate with tests and chat UI
[ ] Run CI checks
[ ] Provision and deploy via Azure and Foundry workflow
[ ] Review observability signals and refine
[ ] Share what I built + one takeaway&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Common Questions&lt;/H2&gt;
&lt;H3&gt;How long does it take?&lt;/H3&gt;
&lt;P&gt;Most developers can complete a meaningful pass in a few focused 60-75 minute sessions. You can reach your first local success quickly, then work through deployment and refinement at your own pace.&lt;/P&gt;
&lt;H3&gt;Do I need an Azure subscription?&lt;/H3&gt;
&lt;P&gt;Yes, for provisioning and deployment steps. You can still begin local development and testing before completing all Azure activities.&lt;/P&gt;
&lt;H3&gt;Is it beginner-friendly?&lt;/H3&gt;
&lt;P&gt;Yes. The labs are written for beginners, run in sequence, and include expected outcomes for each stage.&lt;/P&gt;
&lt;H3&gt;Can I adapt it beyond .NET?&lt;/H3&gt;
&lt;P&gt;Yes. The implementation in this workshop is .NET 10, but the architecture and development patterns can be adapted to other stacks.&lt;/P&gt;
&lt;H3&gt;What if I am evaluating for a team?&lt;/H3&gt;
&lt;P&gt;This lab is a strong team evaluation asset because it demonstrates end-to-end flow: local dev, integration patterns, CI, secure deployment, and operational visibility.&lt;/P&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Closing&lt;/H2&gt;
&lt;P&gt;This workshop gives you more than theory. It gives you a practical path from first local run to deployed hosted agent, backed by tests, CI, and a user-facing UI validation loop. If you want a build-first route into Microsoft Foundry hosted-agent development, this is an excellent place to start.&lt;/P&gt;
&lt;P&gt;Begin now: &lt;A href="https://github.com/microsoft/Hosted_Agents_Workshop_Lab" target="_blank" rel="noopener"&gt;https://github.com/microsoft/Hosted_Agents_Workshop_Lab&lt;/A&gt;&lt;/P&gt;
&lt;/SECTION&gt;
&lt;/ARTICLE&gt;</description>
      <pubDate>Fri, 03 Apr 2026 11:25:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/educator-developer-blog/build-and-deploy-a-microsoft-foundry-hosted-agent-a-hands-on/ba-p/4508426</guid>
      <dc:creator>Lee_Stott</dc:creator>
      <dc:date>2026-04-03T11:25:45Z</dc:date>
    </item>
    <item>
      <title>Looking for official role-based AI learning paths and Microsoft AI ecosystem diagram</title>
      <link>https://techcommunity.microsoft.com/t5/education/looking-for-official-role-based-ai-learning-paths-and-microsoft/m-p/4507877#M902</link>
      <description>&lt;P&gt;Hello everyone,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am responsible for AI up-skilling at my company, and we are currently building role-based learning paths for roles such as AI Engineer, Data Analyst, Data Engineer, and Data Scientist.&lt;/P&gt;&lt;P&gt;I would really appreciate any advice or pointers to official Microsoft resources on the following topics.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Q1. Role-based learning paths&lt;/P&gt;&lt;P&gt;I am aware of the Microsoft Learn career paths:&lt;/P&gt;&lt;P&gt;However, I am looking for the most up-to-date official learning paths or curated guidance that also cover newer services such as:&lt;/P&gt;&lt;P&gt;Copilot&lt;/P&gt;&lt;P&gt;GitHub Copilot&lt;/P&gt;&lt;P&gt;Microsoft Fabric&lt;/P&gt;&lt;P&gt;Azure AI Foundry&lt;/P&gt;&lt;P&gt;Are there any Microsoft resources that organize recommended learning content by role for these newer areas?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Q2. Official Microsoft AI ecosystem diagram&lt;/P&gt;&lt;P&gt;I am also looking for an official Microsoft diagram, map, or architecture overview that shows the overall AI ecosystem, including services such as Copilot, GitHub Copilot, Microsoft Fabric, and Azure AI Foundry.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As a reference, I am aware of unofficial resource, although it appears to be somewhat outdated:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If anyone knows of an official and more recent resource, I would be very grateful.&lt;/P&gt;&lt;P&gt;(If direct links are not allowed in replies, page titles or document names would also be very helpful.)&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thank you.&lt;/P&gt;</description>
      <pubDate>Thu, 02 Apr 2026 04:10:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/education/looking-for-official-role-based-ai-learning-paths-and-microsoft/m-p/4507877#M902</guid>
      <dc:creator>smatsusaki</dc:creator>
      <dc:date>2026-04-02T04:10:42Z</dc:date>
    </item>
    <item>
      <title>Imagine Cup 2026 Semifinalist: Builder Series Judges</title>
      <link>https://techcommunity.microsoft.com/t5/student-developer-blog/imagine-cup-2026-semifinalist-builder-series-judges/ba-p/4507771</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With submissions closing on January 9, selected startups advance into the semifinals and step into this experience. From meeting their mentors to participating in build labs and pitch clinics, founders sharpen their product, their story, and their readiness for the global stage.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:279}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In the semifinals,&amp;nbsp;startups&amp;nbsp;present&amp;nbsp;live&amp;nbsp;and step into the next level of the competition.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:279}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;They pitch in front of a panel of AI experts, startup founders, and investors, each bringing real-world experience in building, scaling, and backing technology. Through live Q&amp;amp;A and direct feedback,&amp;nbsp;startups&amp;nbsp;gain insight that&amp;nbsp;challenges&amp;nbsp;their thinking,&amp;nbsp;strengthens&amp;nbsp;their approach, and&amp;nbsp;helps&amp;nbsp;move their solution forward.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Meet the semifinals judges (listed in alphabetical order):&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;img alt="Mike Abbott" /&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;&lt;A href="https://www.linkedin.com/in/mike-abbott/" target="_blank" rel="noopener"&gt;Mike Abbott&lt;/A&gt;&lt;/STRONG&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;is a Partner at Antler, co-leading its Australian operations and backing founders from day zero through scale. With a background in equity capital markets and&amp;nbsp;M&amp;amp;A, he was an early Uber leader in Asia and later Head of Operations for Australia and New Zealand, helping scale the business from a small team to a multi-billion-dollar operation. As cofounder of Kaddy, a B2B marketplace&amp;nbsp;acquired&amp;nbsp;within three years, he brings deep experience in building, scaling, and investing in startups.&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&amp;nbsp;&lt;/P&gt;
&lt;img alt="Todd Anglin" /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/toddanglin/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="auto"&gt;Todd Anglin&lt;/SPAN&gt;&lt;/A&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;is a Partner Developer Relations Lead at Microsoft with&amp;nbsp;proven&amp;nbsp;experience&amp;nbsp;building and scaling high-performing teams. With a background spanning web and mobile development, cloud native platforms, and low code tools, he has led product, developer relations, and go-to-market efforts across growing technology companies. Known for his strength in communication, he brings the ability to translate complex technical concepts for any audience while helping teams move quickly and build with impact.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img alt="Rania Awad" /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/rania-awad77/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="auto"&gt;Rania Awad&lt;/SPAN&gt;&lt;/A&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;is Chief Strategy Officer at Helfie.AI and a strategic leader at the intersection of AI, healthcare, and digital transformation. With experience across SaaS,&amp;nbsp;health tech, and global digital businesses, she has led high-impact initiatives that turn bold ideas into scalable outcomes. Known for her cross-functional leadership and strong commercial lens, she brings a focus on connecting strategy to execution to drive meaningful impact.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img alt="Rick Claus" /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/rickclaus/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="auto"&gt;Rick Claus&lt;/SPAN&gt;&lt;/A&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;is a Cloud Advocate Team Lead at Microsoft with over 25 years of experience in the IT industry. As part of the Developer Relations Cloud Advocacy team, he focuses on enabling cross-team collaboration and engaging global technical communities around Azure and hybrid cloud solutions. With a background in enterprise architecture, virtualization, and technical training, he brings deep&amp;nbsp;expertise&amp;nbsp;in connecting product, engineering, and technical audiences to improve the overall customer experience.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img alt="Sonia Cuff" /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/soniacuff/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="auto"&gt;Sonia Cuff&lt;/SPAN&gt;&lt;/A&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;leads the Cloud Native &amp;amp; Linux team inside Microsoft's Developer Relations division, connecting with technical communities worldwide. She has over 30 years’ experience in tech, from large enterprises and government to small businesses and partners. Sonia is passionate about the connection between technology and business.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img alt="Mal Filipowska" /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/malgorzatafilipowska/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="auto"&gt;Małgorzata (Mal) Filipowska &lt;/SPAN&gt;&lt;/A&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;is a venture capitalist with&amp;nbsp;nearly a&amp;nbsp;decade of experience investing in early-stage companies across emerging markets. As part of Seedstars International Ventures, a fund backed by global institutions including the World Bank, Rockefeller Foundation, and Visa Foundation, she manages a portfolio of over 130 companies across 40 countries, supporting founders across diverse and high-growth markets. She brings deep insight into scaling startups in these regions and a strong perspective on early-stage growth.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img alt="Alexandra Miele" /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/alexmiele/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="auto"&gt;Alexandra Miele&lt;/SPAN&gt;&lt;/A&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;leads Platform at HOF Capital, where she drives portfolio engagement and builds strategic partnerships across the firm’s global network. With experience spanning venture, private capital, and institutional investing, she previously served as a Partner at a family office managing a $1B+ portfolio and held leadership roles at Rockefeller Capital Management and Goldman Sachs. She brings deep insight into alternative investments, growth&amp;nbsp;strategy, and supporting companies from early&amp;nbsp;stage&amp;nbsp;through scale.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img alt="Nigel Parker" /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/nigel-parker/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="auto"&gt;Nigel Parker&lt;/SPAN&gt;&lt;/A&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;is a technology leader with over 30 years of experience across cloud, data platforms, machine learning, and AI. Having led global engineering and architecture teams, including serving as Chief Engineer for Microsoft Asia Commercial Software Engineering, he co-founded&amp;nbsp;Vivara, an AI-driven wellbeing platform and works as a Data &amp;amp; AI consultant at&amp;nbsp;Arinco&amp;nbsp;(The Artificial Intelligence Company). He brings deep&amp;nbsp;expertise&amp;nbsp;in building scalable systems, integrating AI, and designing technology with a strong focus on human outcomes.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img alt="Sarah Thiam" /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/sarahthiam/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="auto"&gt;Sarah Thiam&lt;/SPAN&gt;&lt;/A&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;is the founder and CEO of&amp;nbsp;Germina&amp;nbsp;Labs,&amp;nbsp;a&amp;nbsp;AI x Web3 studio focused on developer-facing products, programs and tooling. With a product and developer relations background at Microsoft, Protocol&amp;nbsp;Labs&amp;nbsp;and the Singapore government, she brings a well-rounded perspective to scaling technical ecosystems.&lt;/SPAN&gt;&amp;nbsp;&lt;/P&gt;
&lt;img alt="Zhen Li" /&gt;
&lt;P&gt;&lt;SPAN data-olk-copy-source="MessageBody"&gt;&lt;STRONG&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/zhenthebuilder/" target="_blank"&gt;Zhen Li&lt;/A&gt;&lt;/STRONG&gt; created Replit Agent and leads the AI team at Replit, building AI agents that turn ideas into real products. With experience building startups and AI agents, he brings deep expertise in developing intelligent tools that accelerate how software is built and shipped.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Up next&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:210,&amp;quot;335559739&amp;quot;:210}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The top three&amp;nbsp;startups&amp;nbsp;will&amp;nbsp;advance to the World Championship, where they will compete on the global stage for the title and a $100,000 USD prize, along with a mentorship session with Satya Nadella,&amp;nbsp;Chairman&amp;nbsp;and&amp;nbsp;CEO&amp;nbsp;of Microsoft.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:210,&amp;quot;335559739&amp;quot;:210}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This is where&amp;nbsp;everything comes together, as&amp;nbsp;startups&amp;nbsp;step forward to showcase what they have built and how they are ready to scale.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:210,&amp;quot;335559739&amp;quot;:210,&amp;quot;335559740&amp;quot;:279}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Follow along on &lt;A class="lia-external-url" href="https://www.instagram.com/microsoftimaginecup/" target="_blank" rel="noopener"&gt;Instagram&lt;/A&gt;, &lt;A class="lia-external-url" href="https://www.linkedin.com/showcase/microsoft-imagine-cup" target="_blank" rel="noopener"&gt;LinkedIn&lt;/A&gt;,&amp;nbsp;&lt;A class="lia-external-url" href="https://x.com/MSFTImagine" target="_blank" rel="noopener"&gt;X&lt;/A&gt;&amp;nbsp;and &lt;A class="lia-external-url" href="https://www.facebook.com/MSFTImagine" target="_blank" rel="noopener"&gt;Facebook&lt;/A&gt; for the latest updates,&amp;nbsp;startups&amp;nbsp;announcements, and moments leading up to the World Championship.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:210,&amp;quot;335559739&amp;quot;:210,&amp;quot;335559740&amp;quot;:279}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 09 Apr 2026 19:03:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/student-developer-blog/imagine-cup-2026-semifinalist-builder-series-judges/ba-p/4507771</guid>
      <dc:creator>StudentDeveloperTeam</dc:creator>
      <dc:date>2026-04-09T19:03:20Z</dc:date>
    </item>
    <item>
      <title>Getting Started with Foundry Local: A Student Guide to the Microsoft Foundry Local Lab</title>
      <link>https://techcommunity.microsoft.com/t5/educator-developer-blog/getting-started-with-foundry-local-a-student-guide-to-the/ba-p/4503604</link>
      <description>&lt;P&gt;If you want to start building AI applications on your own machine, the&amp;nbsp;&lt;A href="https://github.com/microsoft-foundry/foundry-local-lab" target="_blank" rel="noopener"&gt;Microsoft Foundry Local Lab&lt;/A&gt; is one of the most useful places to begin. It is a practical workshop that takes you from first-time setup through to agents, retrieval, evaluation, speech transcription, tool calling, and a browser-based interface. The material is hands-on, cross-language, and designed to show how modern AI apps can run locally rather than depending on a cloud service for every step.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;This blog post is aimed at students, self-taught developers, and anyone learning how AI applications are put together in practice. Instead of treating large language models as a black box, the lab shows you how to install and manage local models, connect to them with code, structure tasks into workflows, and test whether the results are actually good enough. If you have been looking for a learning path that feels more like building real software and less like copying isolated snippets, this workshop is a strong starting point.&lt;/P&gt;
&lt;H2&gt;What Is Foundry Local?&lt;/H2&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://foundrylocal.ai" target="_blank" rel="noopener"&gt;Foundry Local&lt;/A&gt; is a local runtime for downloading, managing, and serving AI models on your own hardware. It exposes an OpenAI-compatible interface, which means you can work with familiar SDK patterns while keeping execution on your device. For learners, that matters for three reasons. First, it lowers the barrier to experimentation because you can run projects without setting up a cloud account for every test. Second, it helps you understand the moving parts behind AI applications, including model lifecycle, local inference, and application architecture. Third, it encourages privacy-aware development because the examples are designed to keep data on the machine wherever possible.&lt;/P&gt;
&lt;P&gt;The Foundry Local Lab uses that local-first approach to teach the full journey from simple prompts to multi-agent systems. It includes examples in Python, JavaScript, and C#, so you can follow the language that fits your course, your existing skills, or the platform you want to build on.&lt;/P&gt;
&lt;H2&gt;Why This Lab Works Well for Learners&lt;/H2&gt;
&lt;P&gt;A lot of AI tutorials stop at the moment a model replies to a prompt. That is useful for a first demo, but it does not teach you how to build a proper application. The Foundry Local Lab goes further. It is organised as a sequence of parts, each one adding a new idea and giving you working code to explore. You do not just ask a model to respond. You learn how to manage the service, choose a language SDK, construct retrieval pipelines, build agents, evaluate outputs, and expose the result through a usable interface.&lt;/P&gt;
&lt;P&gt;That sequence is especially helpful for students because the parts build on each other. Early labs focus on confidence and setup. Middle labs focus on architecture and patterns. Later labs move into more advanced ideas that are common in real projects, such as tool calling, evaluation, and custom model packaging. By the end, you have seen not just what a local AI app looks like, but how its different layers fit together.&lt;/P&gt;
&lt;H2&gt;Before You Start&lt;/H2&gt;
&lt;P&gt;The workshop expects a reasonably modern machine and at least one programming language environment. The core prerequisites are straightforward: install Foundry Local, clone the repository, and choose whether you want to work in Python, JavaScript, or C#. You do not need to master all three. In fact, most learners will get more value by picking one language first, completing the full path in that language, and only then comparing how the same patterns look elsewhere.&lt;/P&gt;
&lt;P&gt;If you are new to AI development, do not be put off by the number of parts. The early sections are accessible, and the later ones become much easier once you have completed the foundations. Think of the lab as a structured course rather than a single tutorial.&lt;/P&gt;
&lt;H2&gt;What You Learn in Each Lab&lt;/H2&gt;
&lt;P&gt;Lab repository: &lt;A class="lia-external-url" href="https://github.com/microsoft-foundry/foundry-local-lab" target="_blank" rel="noopener"&gt;https://github.com/microsoft-foundry/foundry-local-lab&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;Part 1: Getting Started with Foundry Local&lt;/H3&gt;
&lt;P&gt;The first part introduces the basics of Foundry Local and gets you up and running. You learn how to install the CLI, inspect the model catalogue, download a model, and run it locally. This part also introduces practical details such as model aliases and dynamic service ports, which are small but important pieces of real development work.&lt;/P&gt;
&lt;P&gt;For students, the value of this part is confidence. You prove that local inference works on your machine, you see how the service behaves, and you learn the operational basics before writing any application code. By the end of Part 1, you should understand what Foundry Local does, how to start it, and how local model serving fits into an application workflow.&lt;/P&gt;
&lt;H3&gt;Part 2: Foundry Local SDK Deep Dive&lt;/H3&gt;
&lt;P&gt;Once the CLI makes sense, the workshop moves into the SDK. This part explains why application developers often use the SDK instead of relying only on terminal commands. You learn how to manage the service programmatically, browse available models, control model download and loading, and understand model metadata such as aliases and hardware-aware selection.&lt;/P&gt;
&lt;P&gt;This is where learners start to move from using a tool to building with a platform. You begin to see the difference between running a model manually and integrating it into software. By the end of this section, you should understand the API surface you will use in your own projects and know how to bootstrap the SDK in Python, JavaScript, or C#.&lt;/P&gt;
&lt;H3&gt;Part 3: SDKs and APIs&lt;/H3&gt;
&lt;P&gt;Part 3 turns the SDK concepts into a working chat application. You connect code to the local inference server and use the OpenAI-compatible API for streaming chat completions. The lab includes examples in all three supported languages, which makes it especially useful if you are comparing ecosystems or learning how the same idea is expressed through different syntax and libraries.&lt;/P&gt;
&lt;P&gt;The key learning outcome here is not just that you can get a response from a model. It is that you understand the boundary between your application and the local model service. You learn how messages are structured, how streaming works, and how to write the sort of integration code that becomes the foundation for every later lab.&lt;/P&gt;
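&lt;P&gt;To make the message structure concrete, here is a minimal Python sketch of the OpenAI-style chat payload and of how streamed text deltas are typically accumulated into a full reply. The fake stream is a stand-in for tokens coming back from the local inference server.&lt;/P&gt;

```python
# OpenAI-compatible chat input: a list of role/content dictionaries.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is Foundry Local?"},
]

def accumulate_stream(chunks):
    """Join streamed text deltas into one complete reply."""
    return "".join(chunks)

# Stand-in for the pieces a streaming response delivers one at a time.
fake_stream = ["Foundry Local ", "runs models ", "on your machine."]
reply = accumulate_stream(fake_stream)
```

&lt;P&gt;In the lab the chunks arrive from the server as the model generates them, which is what lets a chat interface display text incrementally.&lt;/P&gt;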
&lt;H3&gt;Part 4: Retrieval-Augmented Generation&lt;/H3&gt;
&lt;P&gt;This is where the workshop starts to feel like modern AI engineering rather than basic prompting. In the retrieval-augmented generation lab, you build a simple RAG pipeline that grounds answers in supplied data. You work with an in-memory knowledge base, apply retrieval logic, score matches, and compose prompts that include grounded context.&lt;/P&gt;
&lt;P&gt;For learners, this part is important because it demonstrates a core truth of AI app development: a model on its own is often not enough. Useful applications usually need access to documents, notes, or structured information. By the end of Part 4, you understand why retrieval matters, how to pass retrieved context into a prompt, and how a pipeline can make answers more relevant and reliable.&lt;/P&gt;
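&lt;P&gt;A minimal version of that pipeline fits in a few lines: score each document by word overlap with the question, pick the best match, and paste it into the prompt as grounded context. This is a sketch with a deliberately naive scorer; real pipelines typically use embeddings, but the shape is the same.&lt;/P&gt;

```python
def score(question: str, document: str) -> int:
    """Naive relevance score: number of shared lowercase words."""
    return len(set(question.lower().split()).intersection(document.lower().split()))

def retrieve(question: str, knowledge_base: list) -> str:
    """Return the highest-scoring document for the question."""
    return max(knowledge_base, key=lambda doc: score(question, doc))

def compose_prompt(question: str, context: str) -> str:
    """Ground the answer by placing retrieved context in the prompt."""
    return "Answer using only this context:\n" + context + "\n\nQuestion: " + question

# Tiny in-memory knowledge base, as in the lab's early RAG exercise.
kb = [
    "Foundry Local serves AI models on your own hardware.",
    "Whisper is a speech-to-text model.",
]
question = "What does Foundry Local do with models?"
prompt = compose_prompt(question, retrieve(question, kb))
```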
&lt;H3&gt;Part 5: Building AI Agents&lt;/H3&gt;
&lt;P&gt;Part 5 introduces the concept of an agent. Instead of a one-off prompt and response, you begin to define behaviour through system instructions, roles, and conversation state. The lab uses the ChatAgent pattern and the Microsoft Agent Framework to show how an agent can maintain a purpose, respond with a persona, and return structured output such as JSON.&lt;/P&gt;
&lt;P&gt;This part helps learners understand the difference between a raw model call and a reusable application component. You learn how to design instructions that shape behaviour, how multi-turn interaction differs from single prompts, and why structured output matters when an AI component has to work inside a broader system.&lt;/P&gt;
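&lt;P&gt;Stripped to its essentials, that agent pattern is a system instruction plus conversation state wrapped around a model call. In this Python sketch the model is a stub that returns JSON, so the focus stays on the shape of the component; the lab itself builds this with the ChatAgent pattern and the Microsoft Agent Framework.&lt;/P&gt;

```python
import json

class MiniAgent:
    """Toy agent: instructions, multi-turn state, structured output."""

    def __init__(self, instructions, model):
        self.instructions = instructions
        self.history = [{"role": "system", "content": instructions}]
        self.model = model  # callable: message list -> reply string

    def run(self, user_input):
        self.history.append({"role": "user", "content": user_input})
        reply = self.model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return json.loads(reply)  # structured-output contract

# Stub standing in for a model instructed to reply in JSON.
def stub_model(messages):
    return json.dumps({"answer": messages[-1]["content"].upper()})

agent = MiniAgent("Reply in JSON with an answer key.", stub_model)
result = agent.run("hello")
```

&lt;P&gt;Because the agent keeps its own history, a second call to run would see the first exchange, which is exactly the difference between multi-turn interaction and a single prompt.&lt;/P&gt;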
&lt;H3&gt;Part 6: Multi-Agent Workflows&lt;/H3&gt;
&lt;P&gt;Once a single agent makes sense, the workshop expands the idea into a multi-agent workflow. The example pipeline uses roles such as researcher, writer, and editor, with outputs passed from one stage to the next. You explore sequential orchestration, shared configuration, and feedback loops between specialised components.&lt;/P&gt;
&lt;P&gt;For students, this lab is a very clear introduction to decomposition. Instead of asking one model to do everything at once, you break a task into smaller responsibilities. That pattern is useful well beyond AI. By the end of Part 6, you should understand why teams build multi-agent systems, how hand-offs are structured, and what trade-offs appear when more components are added to a workflow.&lt;/P&gt;
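&lt;P&gt;At heart, the researcher-to-writer-to-editor hand-off is sequential composition: each stage consumes the previous stage's output. This sketch keeps each "agent" as a plain stub function so the orchestration structure is visible; in the lab each stage is a model-backed agent.&lt;/P&gt;

```python
def researcher(topic):
    """Stub researcher: gather raw notes on the topic."""
    return "notes on " + topic

def writer(notes):
    """Stub writer: turn notes into a draft."""
    return "draft based on " + notes

def editor(draft):
    """Stub editor: polish the draft into a finished article."""
    return draft.replace("draft", "article")

def pipeline(topic):
    """Sequential orchestration of the three specialised stages."""
    return editor(writer(researcher(topic)))

article = pipeline("local AI")
```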
&lt;H3&gt;Part 7: Zava Creative Writer Capstone Application&lt;/H3&gt;
&lt;P&gt;The Zava Creative Writer is the capstone project that brings the earlier ideas together into a more production-style application. It uses multiple specialised agents, structured JSON hand-offs, product catalogue search, streaming output, and evaluation-style feedback loops. Rather than showing an isolated feature, this part shows how separate patterns combine into a complete system.&lt;/P&gt;
&lt;P&gt;This is one of the most valuable parts of the workshop for learner developers because it narrows the gap between tutorial code and real application design. You can see how orchestration, agent roles, and practical interfaces fit together. By the end of Part 7, you should be able to recognise the architecture of a serious local AI app and understand how the earlier labs support it.&lt;/P&gt;
&lt;H3&gt;Part 8: Evaluation-Led Development&lt;/H3&gt;
&lt;P&gt;Many beginner AI projects stop once the output looks good once or twice. This lab teaches a much stronger habit: evaluation-led development. You work with golden datasets, rule-based checks, and LLM-as-judge scoring to compare prompt or agent variants systematically. The goal is to move from anecdotal testing to repeatable assessment.&lt;/P&gt;
&lt;P&gt;This matters enormously for students because evaluation is one of the clearest differences between a classroom demo and dependable software. By the end of Part 8, you should understand how to define success criteria, compare outputs at scale, and use evidence rather than intuition when improving an AI component.&lt;/P&gt;
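&lt;P&gt;A rule-based pass over a golden dataset can be sketched in a few lines. The dataset, the checks, and the stand-in variant below are invented for illustration; the lab adds LLM-as-judge scoring alongside rules like these.&lt;/P&gt;

```python
# Illustrative golden dataset: each case pairs a prompt with a rule-based check.
GOLDEN = [
    {"prompt": "Capital of France?", "must_contain": "paris"},
    {"prompt": "2 plus 2?", "must_contain": "4"},
]

def fake_variant_a(prompt):
    # Stand-in for a real prompt/agent variant under test.
    return "Paris is the capital." if "France" in prompt else "4"

def evaluate(variant, dataset):
    """Return the fraction of golden examples the variant passes."""
    passed = 0
    for case in dataset:
        answer = variant(case["prompt"]).lower()
        if case["must_contain"] in answer:
            passed += 1
    return passed / len(dataset)

score = evaluate(fake_variant_a, GOLDEN)
print(score)  # 1.0
```

&lt;P&gt;Running the same dataset against every variant turns "it looked good once" into a number you can compare across changes.&lt;/P&gt;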
&lt;H3&gt;Part 9: Voice Transcription with Whisper&lt;/H3&gt;
&lt;P&gt;Part 9 broadens the workshop beyond text generation by introducing speech-to-text with Whisper running locally. You use the Foundry Local SDK to download and load the model, then transcribe local audio files through the compatible API surface. The emphasis is on privacy-first processing, with audio kept on-device.&lt;/P&gt;
&lt;P&gt;This section is a useful reminder that local AI development is not limited to chatbots. Learners see how a different modality fits into the same ecosystem and how local execution supports sensitive workloads. By the end of this lab, you should understand the transcription flow, the relevant client methods, and how speech features can be integrated into broader applications.&lt;/P&gt;
&lt;H3&gt;Part 10: Using Custom or Hugging Face Models&lt;/H3&gt;
&lt;P&gt;After learning the standard path, the workshop shows how to work with custom or Hugging Face models. This includes compiling models into optimised ONNX format with ONNX Runtime GenAI, choosing hardware-specific options, applying quantisation strategies, creating configuration files, and adding compiled models to the Foundry Local cache.&lt;/P&gt;
&lt;P&gt;For learner developers, this part opens the door to model engineering rather than simple model consumption. You begin to understand that model choice, optimisation, and packaging affect performance and usability. By the end of Part 10, you should have a clearer picture of how models move from an external source into a runnable local setup and why deployment format matters.&lt;/P&gt;
&lt;H3&gt;Part 11: Tool Calling with Local Models&lt;/H3&gt;
&lt;P&gt;Tool calling is one of the most practical patterns in current AI development, and this lab covers it directly. You define tool schemas, allow the model to request function calls, handle the multi-turn interaction loop, execute the tools locally, and return results back to the model. The examples include practical scenarios such as weather and population tools.&lt;/P&gt;
&lt;P&gt;This lab teaches learners how to move beyond generation into action. A model is no longer limited to producing text. It can decide when external data or a function is needed and incorporate that result into a useful answer. By the end of Part 11, you should understand the tool-calling flow and how AI systems connect reasoning with deterministic software behaviour.&lt;/P&gt;
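&lt;P&gt;The dispatch step at the heart of that loop can be sketched as follows. A canned tool-call message stands in for a real model response; the tool names echo the lab's weather and population examples, but the implementations here are invented.&lt;/P&gt;

```python
import json

def get_weather(city):
    return {"city": city, "forecast": "sunny"}  # stand-in for a real lookup

def get_population(city):
    return {"city": city, "population": 500000}  # illustrative value

TOOLS = {"get_weather": get_weather, "get_population": get_population}

def execute_tool_call(tool_call):
    """Dispatch a model-requested call to a local function and package the result."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    result = fn(**args)
    # The result goes back to the model as a tool-role message for the next turn.
    return {"role": "tool", "name": tool_call["name"], "content": json.dumps(result)}

# Simulated tool call, as if returned by the model:
call = {"name": "get_weather", "arguments": '{"city": "Oslo"}'}
print(execute_tool_call(call)["content"])
```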
&lt;H3&gt;Part 12: Building a Web UI for the Zava Creative Writer&lt;/H3&gt;
&lt;P&gt;Part 12 adds a browser-based front end to the capstone application. You learn how to serve a shared interface from Python, JavaScript, or C#, stream updates to the browser, consume NDJSON with the Fetch API and ReadableStream, and show live agent status as content is produced in real time.&lt;/P&gt;
&lt;P&gt;This part is especially good for students who want to build portfolio projects. It turns backend orchestration into something visible and interactive. By the end of Part 12, you should understand how to connect a local AI backend to a web interface and how streaming changes the user experience compared with waiting for one final response.&lt;/P&gt;
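&lt;P&gt;The NDJSON framing itself is simple: the server emits one JSON object per line, and the browser splits the stream on newlines. A Python sketch of both sides (the event names are illustrative, not the capstone's actual schema):&lt;/P&gt;

```python
import json

def ndjson_events(events):
    """Yield each event as a single NDJSON line, as the server would stream it."""
    for event in events:
        yield json.dumps(event) + "\n"

def parse_ndjson(stream_text):
    """What the browser-side ReadableStream reader does, expressed in Python."""
    return [json.loads(line) for line in stream_text.splitlines() if line.strip()]

events = [
    {"agent": "researcher", "status": "running"},
    {"agent": "writer", "delta": "Once upon"},
    {"agent": "writer", "status": "done"},
]
wire = "".join(ndjson_events(events))
print(parse_ndjson(wire)[1]["delta"])  # Once upon
```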
&lt;H3&gt;Part 13: Workshop Complete&lt;/H3&gt;
&lt;P&gt;The final part is a summary and extension point. It reviews what you have built across the previous sections and suggests ways to continue. Although it is not a new technical lab in the same way as the earlier parts, it plays an important role in learning. It helps you consolidate the architecture, the terminology, and the development patterns you have encountered.&lt;/P&gt;
&lt;P&gt;For learners, reflection matters. By the end of Part 13, you should be able to describe the full stack of a local AI application, from model management to user interface, and identify which area you want to deepen next.&lt;/P&gt;
&lt;H2&gt;What Students Gain from the Full Workshop&lt;/H2&gt;
&lt;P&gt;Taken together, these labs do more than teach Foundry Local itself. They teach how AI applications are built. You learn operational basics such as model setup and service management. You learn application integration through SDKs and APIs. You learn system design through RAG, agents, multi-agent orchestration, and web interfaces. You learn engineering discipline through evaluation. You also see how text, speech, custom models, and tool calling all fit into one local-first development workflow.&lt;/P&gt;
&lt;P&gt;That breadth makes the workshop useful in several settings. A student can use it as a self-study path. A lecturer can use it as source material for practical sessions. A learner developer can use it to build portfolio pieces and to understand which AI patterns are worth learning next. Because the repository includes Python, JavaScript, and C#, it also works well for comparing how architectural ideas transfer across languages.&lt;/P&gt;
&lt;H2&gt;How to Approach the Lab as a Beginner&lt;/H2&gt;
&lt;P&gt;If you are starting from scratch, the best route is simple. Complete Parts 1 to 3 in your preferred language first. That gives you the essential setup and integration skills. Then move into Parts 4 to 6 to understand how AI application patterns are composed. After that, use Parts 7 and 8 to learn how larger systems and evaluation fit together. Finally, explore Parts 9 to 12 based on your interests, whether that is speech, tooling, model customisation, or front-end work.&lt;/P&gt;
&lt;P&gt;It is also worth keeping notes as you go. Record what each part adds to your understanding, what code files matter, and what assumptions each example makes. That habit will help you move from following the labs to adapting the patterns in your own projects.&lt;/P&gt;
&lt;H2&gt;Final Thoughts&lt;/H2&gt;
&lt;P&gt;The &lt;A class="lia-external-url" href="https://github.com/microsoft-foundry/foundry-local-lab" target="_blank" rel="noopener"&gt;Microsoft Foundry Local Lab&lt;/A&gt; is a strong introduction to local AI development because it treats learners like developers rather than spectators. You install, run, connect, orchestrate, evaluate, and present working systems. That makes it far more valuable than a short demo that only proves a model can answer a question.&lt;/P&gt;
&lt;P&gt;If you are a student or learner developer who wants to understand how AI applications are really built, this lab gives you a clear path. Start with the basics, pick one language, and work through the parts in order. By the time you finish, you will not just have used Foundry Local. You will have a practical foundation for building local AI applications with far more confidence and much better judgement.&lt;/P&gt;</description>
      <pubDate>Mon, 30 Mar 2026 07:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/educator-developer-blog/getting-started-with-foundry-local-a-student-guide-to-the/ba-p/4503604</guid>
      <dc:creator>Lee_Stott</dc:creator>
      <dc:date>2026-03-30T07:00:00Z</dc:date>
    </item>
    <item>
      <title>Microsoft Mesh Education Licensing</title>
      <link>https://techcommunity.microsoft.com/t5/education/microsoft-mesh-education-licensing/m-p/4505952#M901</link>
      <description>&lt;P&gt;Microsoft Education and Product Teams,&lt;/P&gt;&lt;P&gt;I am writing to advocate for the inclusion of &lt;STRONG&gt;Microsoft Mesh&lt;/STRONG&gt; (Immersive Spaces and Events) within the &lt;STRONG&gt;Microsoft 365 Education SKU family (A1, A3, and A5)&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;Currently, Mesh is available across nearly every commercial license family, from Teams Essentials to E5 Enterprise, but is explicitly &lt;STRONG&gt;excluded from Education tenants&lt;/STRONG&gt;. As documented in several Learn Q&amp;amp;A threads and service plan manifests, the MESH_IMMERSIVE_FOR_TEAMS service plan is simply not provisioned for EDU customers.&lt;/P&gt;&lt;P&gt;The current state is one of silent exclusion, creating several critical hurdles:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;Pedagogical&lt;/STRONG&gt;: Immersive technology is one of the most requested features for remote and hybrid learning to combat "Zoom fatigue" and increase student engagement. Education is a high-value use case for 3D immersion.&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Parity&lt;/STRONG&gt;: Universities and K-12 institutions on A5 licenses pay for "top-tier" features but are denied the innovative tools available to a "Business Basic" user. If you are a small business on a basic plan, you have Mesh. If you are a world-class University on A5, you are blocked. This isn't a "procurable" add-on; it is a licensing eligibility wall.&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Implementation&lt;/STRONG&gt;: Current Microsoft guidance suggests schools move to Business or Enterprise licensing to access Mesh. This is not a viable solution for institutions with thousands of users, complex compliance requirements, and student-data privacy frameworks built specifically around EDU SKUs.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;We aren’t asking for a discount; we are asking for &lt;STRONG&gt;eligibility&lt;/STRONG&gt;. 
We urge the product team to:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Add the Mesh Immersive service plan to the A3 and A5 EDU license entitlements.&lt;/LI&gt;&lt;LI&gt;Provide a clear roadmap for when Education tenants can expect feature parity with Commercial tenants.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Education should be the vanguard of immersive collaboration, not an afterthought. We would appreciate a formal update on when this licensing barrier will be removed.&lt;/P&gt;</description>
      <pubDate>Thu, 26 Mar 2026 14:26:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/education/microsoft-mesh-education-licensing/m-p/4505952#M901</guid>
      <dc:creator>abrooks1</dc:creator>
      <dc:date>2026-03-26T14:26:32Z</dc:date>
    </item>
    <item>
      <title>Unable to Setup Billing for new tenant account: Error code - 43881</title>
      <link>https://techcommunity.microsoft.com/t5/education/unable-to-setup-billing-for-new-tenant-account-error-code-43881/m-p/4505862#M900</link>
      <description>&lt;P&gt;I set up a Microsoft 365 Education tenant for a school in Uganda but received error code 43881 during billing verification. The tenant was created but A1 trial licenses were not attached. I have no chat or email support options available in the admin center.&amp;nbsp;Error code: 43881&lt;/P&gt;</description>
      <pubDate>Thu, 26 Mar 2026 09:29:54 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/education/unable-to-setup-billing-for-new-tenant-account-error-code-43881/m-p/4505862#M900</guid>
      <dc:creator>ealawode</dc:creator>
      <dc:date>2026-03-26T09:29:54Z</dc:date>
    </item>
    <item>
      <title>Langchain Multi-Agent Systems with Microsoft Agent Framework and Hosted Agents</title>
      <link>https://techcommunity.microsoft.com/t5/educator-developer-blog/langchain-multi-agent-systems-with-microsoft-agent-framework-and/ba-p/4504863</link>
      <description>&lt;P&gt;If you have been building AI agents with LangChain, you already know how powerful its tool and chain abstractions are. But when it comes to deploying those agents to production — with real infrastructure, managed identity, live web search, and container orchestration — you need something more.&lt;/P&gt;
&lt;P&gt;This post walks through how to combine &lt;STRONG&gt;LangChain&lt;/STRONG&gt; with the &lt;STRONG&gt;Microsoft Agent Framework&lt;/STRONG&gt; (&lt;CODE&gt;azure-ai-agents&lt;/CODE&gt;) and deploy the result as a &lt;STRONG&gt;Microsoft Foundry Hosted Agent&lt;/STRONG&gt;. We will build a multi-agent incident triage copilot that uses LangChain locally and seamlessly upgrades to cloud-hosted capabilities on Microsoft Foundry.&lt;/P&gt;
&lt;H2&gt;Why combine LangChain with Microsoft Agent Framework?&lt;/H2&gt;
&lt;P&gt;As a LangChain developer, you get excellent abstractions for building agents: the &lt;CODE&gt;@tool&lt;/CODE&gt; decorator, &lt;CODE&gt;RunnableLambda&lt;/CODE&gt; chains, and composable pipelines. But production deployment raises questions that LangChain alone does not answer:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Where do your agents run?&lt;/STRONG&gt; Containers, serverless, or managed infrastructure?&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;How do you add live web search or code execution?&lt;/STRONG&gt; Bing Grounding and Code Interpreter are not LangChain built-ins.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;How do you handle authentication?&lt;/STRONG&gt; Managed identity, API keys, or tokens?&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;How do you observe agents in production?&lt;/STRONG&gt; Distributed tracing across multiple agents?&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The Microsoft Agent Framework fills these gaps. It provides &lt;CODE&gt;AgentsClient&lt;/CODE&gt; for creating and managing agents on Microsoft Foundry, built-in tools like &lt;CODE&gt;BingGroundingTool&lt;/CODE&gt; and &lt;CODE&gt;CodeInterpreterTool&lt;/CODE&gt;, and a thread-based conversation model. Combined with Hosted Agents, you get a fully managed container runtime with health probes, auto-scaling, and the OpenAI Responses API protocol.&lt;/P&gt;
&lt;P&gt;The key insight: &lt;STRONG&gt;LangChain handles local logic and chain composition; the Microsoft Agent Framework handles cloud-hosted orchestration and tooling.&lt;/STRONG&gt;&lt;/P&gt;
&lt;H2&gt;Architecture overview&lt;/H2&gt;
&lt;P&gt;The incident triage copilot uses a coordinator pattern with three specialist agents:&lt;/P&gt;
&lt;P&gt;&lt;IMG src="https://raw.githubusercontent.com/leestott/hosted-agents-langchain-samples/main/screenshots/01-ui-homepage-foundry-connected.png" alt="UI Homepage showing Foundry connected status" /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;User Query
    |
    v
Coordinator Agent
    |
    +--&amp;gt; LangChain Triage Chain    (routing decision)
    +--&amp;gt; LangChain Synthesis Chain  (combine results)
    |
    +---+---+---+
    |   |       |
    v   v       v
Research  Diagnostics  Remediation
 Agent      Agent        Agent&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Each specialist agent has two execution modes:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Mode&lt;/th&gt;&lt;th&gt;LangChain Role&lt;/th&gt;&lt;th&gt;Microsoft Agent Framework Role&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Local&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;&lt;CODE&gt;@tool&lt;/CODE&gt; functions provide heuristic analysis&lt;/td&gt;&lt;td&gt;Not used&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Foundry&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Chains handle routing and synthesis&lt;/td&gt;&lt;td&gt;&lt;CODE&gt;AgentsClient&lt;/CODE&gt; with &lt;CODE&gt;BingGroundingTool&lt;/CODE&gt;, &lt;CODE&gt;CodeInterpreterTool&lt;/CODE&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;This dual-mode design means you can develop and test locally with zero cloud dependencies, then deploy to Foundry for production capabilities.&lt;/P&gt;
&lt;H2&gt;Step 1: Define your LangChain tools&lt;/H2&gt;
&lt;P&gt;Start with what you know. Define typed, documented tools using LangChain’s &lt;CODE&gt;@tool&lt;/CODE&gt; decorator:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;from langchain_core.tools import tool

@tool
def classify_incident_severity(query: str) -&amp;gt; str:
    """Classify the severity and priority of an incident based on keywords.

    Args:
        query: The incident description text.

    Returns:
        Severity classification with priority level.
    """
    query_lower = query.lower()

    critical_keywords = [
        "production down", "all users", "outage", "breach",
    ]
    high_keywords = [
        "503", "500", "timeout", "latency", "slow",
    ]

    if any(kw in query_lower for kw in critical_keywords):
        return "severity=critical, priority=P1"
    if any(kw in query_lower for kw in high_keywords):
        return "severity=high, priority=P2"
    return "severity=low, priority=P4"&lt;/LI-CODE&gt;
&lt;P&gt;These tools work identically in local mode and serve as fallbacks when Foundry is unavailable.&lt;/P&gt;
&lt;H2&gt;Step 2: Build routing with LangChain chains&lt;/H2&gt;
&lt;P&gt;Use &lt;CODE&gt;RunnableLambda&lt;/CODE&gt; to create a routing chain that classifies the incident and selects which specialists to invoke:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;from langchain_core.runnables import RunnableLambda
from enum import Enum

class AgentRole(str, Enum):
    RESEARCH = "research"
    DIAGNOSTICS = "diagnostics"
    REMEDIATION = "remediation"

DIAGNOSTICS_KEYWORDS = {
    "log", "error", "exception", "timeout", "500", "503",
    "crash", "oom", "root cause",
}

REMEDIATION_KEYWORDS = {
    "fix", "remediate", "runbook", "rollback", "hotfix",
    "patch", "resolve", "action plan",
}

def _route(inputs: dict) -&amp;gt; dict:
    query = inputs["query"].lower()
    specialists = [AgentRole.RESEARCH]  # always included

    if any(kw in query for kw in DIAGNOSTICS_KEYWORDS):
        specialists.append(AgentRole.DIAGNOSTICS)

    if any(kw in query for kw in REMEDIATION_KEYWORDS):
        specialists.append(AgentRole.REMEDIATION)

    return {**inputs, "specialists": specialists}

triage_routing_chain = RunnableLambda(_route)&lt;/LI-CODE&gt;
&lt;P&gt;This is pure LangChain — no cloud dependency. The chain analyses the query and returns which specialists should handle it.&lt;/P&gt;
&lt;H2&gt;Step 3: Create specialist agents with dual-mode execution&lt;/H2&gt;
&lt;P&gt;Each specialist agent extends a base class. In local mode, it uses LangChain tools. In Foundry mode, it delegates to the Microsoft Agent Framework:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;from abc import ABC, abstractmethod
from pathlib import Path

class BaseSpecialistAgent(ABC):
    role: AgentRole
    prompt_file: str

    def __init__(self):
        prompt_path = Path(__file__).parent.parent / "prompts" / self.prompt_file
        self.system_prompt = prompt_path.read_text(encoding="utf-8")

    async def run(self, query, shared_context, correlation_id, client=None):
        if client is not None:
            return await self._run_on_foundry(query, shared_context, correlation_id, client)
        return await self._run_locally(query, shared_context, correlation_id)

    async def _run_on_foundry(self, query, shared_context, correlation_id, client):
        """Use Microsoft Agent Framework for cloud-hosted execution."""
        from azure.ai.agents.models import BingGroundingTool

        agent = await client.agents.create_agent(
            model=shared_context.get("model_deployment", "gpt-4o"),
            name=f"{self.role.value}-{correlation_id}",
            instructions=self.system_prompt,
            tools=self._get_foundry_tools(shared_context),
        )

        thread = await client.agents.threads.create()
        await client.agents.messages.create(
            thread_id=thread.id,
            role="user",
            content=self._build_prompt(query, shared_context),
        )

        run = await client.agents.runs.create_and_process(
            thread_id=thread.id,
            agent_id=agent.id,
        )
        # Extract and return the agent’s response...

    async def _run_locally(self, query, shared_context, correlation_id):
        """Use LangChain tools for local heuristic analysis."""
        # Each subclass implements this with its specific tools
        ...&lt;/LI-CODE&gt;
&lt;P&gt;The key pattern here:&amp;nbsp;&lt;STRONG&gt;same interface, different backends&lt;/STRONG&gt;. Your coordinator does not care whether a specialist ran locally or on Foundry.&lt;/P&gt;
&lt;H2&gt;Step 4: Wire it up with FastAPI&lt;/H2&gt;
&lt;P&gt;Expose the multi-agent pipeline through a FastAPI endpoint. The &lt;CODE&gt;/triage&lt;/CODE&gt; endpoint accepts incident descriptions and returns structured reports:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;from fastapi import FastAPI
from agents.coordinator import Coordinator
from models import TriageRequest

app = FastAPI(title="Incident Triage Copilot")
coordinator = Coordinator()

@app.post("/triage")
async def triage(request: TriageRequest):
    return await coordinator.triage(
        request=request,
        client=app.state.foundry_client,
        max_turns=10,
    )&lt;/LI-CODE&gt;
&lt;P&gt;The application also implements the&amp;nbsp;&lt;CODE&gt;/responses&lt;/CODE&gt; endpoint, which follows the OpenAI Responses API protocol. This is what Microsoft Foundry Hosted Agents expects when routing traffic to your container.&lt;/P&gt;
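&lt;P&gt;In rough outline, the payload returned by &lt;CODE&gt;/responses&lt;/CODE&gt; looks like the sketch below. The field names follow the public OpenAI Responses API shape; consult the Hosted Agents documentation for the exact contract this sample implements.&lt;/P&gt;

```python
import uuid

# Rough sketch of shaping a reply in the OpenAI Responses API style.
# Field names follow the public Responses API shape (response object with
# message output items containing output_text content); this is not the
# sample's exact code.
def to_responses_payload(text):
    return {
        "id": f"resp_{uuid.uuid4().hex}",
        "object": "response",
        "status": "completed",
        "output": [
            {
                "type": "message",
                "role": "assistant",
                "content": [{"type": "output_text", "text": text}],
            }
        ],
    }

payload = to_responses_payload("Severity P2; begin diagnostics on /api/orders.")
print(payload["output"][0]["content"][0]["text"])
```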
&lt;H2&gt;Step 5: Deploy as a Hosted Agent&lt;/H2&gt;
&lt;P&gt;This is where Microsoft Foundry Hosted Agents shines. Your multi-agent system becomes a managed, auto-scaling service with a single command:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;# Install the azd AI agent extension
azd extension install azure.ai.agents

# Provision infrastructure and deploy
azd up&lt;/LI-CODE&gt;
&lt;P&gt;&lt;IMG src="https://raw.githubusercontent.com/leestott/hosted-agents-langchain-samples/main/screenshots/02-ui-triage-running.png" alt="Triage pipeline running with Research, Diagnostics, and Remediation agents" /&gt;&lt;/P&gt;
&lt;P&gt;The Azure Developer CLI (&lt;CODE&gt;azd&lt;/CODE&gt;) provisions everything:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Container Registry&lt;/STRONG&gt; for your Docker image&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Container App&lt;/STRONG&gt; with health probes and auto-scaling&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;User-Assigned Managed Identity&lt;/STRONG&gt; for secure authentication&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft Foundry Hub and Project&lt;/STRONG&gt; with model deployments&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Application Insights&lt;/STRONG&gt; for distributed tracing&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Your &lt;CODE&gt;agent.yaml&lt;/CODE&gt; defines what tools the hosted agent has access to:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;name: incident-triage-copilot-langchain
kind: hosted
model:
  deployment: gpt-4o
identity:
  type: managed
tools:
  - type: bing_grounding
    enabled: true
  - type: code_interpreter
    enabled: true&lt;/LI-CODE&gt;
&lt;H2&gt;What you gain over pure LangChain&lt;/H2&gt;
&lt;P&gt;&lt;IMG src="https://raw.githubusercontent.com/leestott/hosted-agents-langchain-samples/main/screenshots/03-ui-triage-report.png" alt="Triage report showing coordinator summary and specialist results" /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Capability&lt;/th&gt;&lt;th&gt;LangChain Only&lt;/th&gt;&lt;th&gt;LangChain + Microsoft Agent Framework&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Local development&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Yes (identical experience)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Live web search&lt;/td&gt;&lt;td&gt;Requires custom integration&lt;/td&gt;&lt;td&gt;Built-in &lt;CODE&gt;BingGroundingTool&lt;/CODE&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Code execution&lt;/td&gt;&lt;td&gt;Requires sandboxing&lt;/td&gt;&lt;td&gt;Built-in &lt;CODE&gt;CodeInterpreterTool&lt;/CODE&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Managed hosting&lt;/td&gt;&lt;td&gt;DIY containers&lt;/td&gt;&lt;td&gt;Foundry Hosted Agents&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Authentication&lt;/td&gt;&lt;td&gt;DIY&lt;/td&gt;&lt;td&gt;Managed Identity (zero secrets)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Observability&lt;/td&gt;&lt;td&gt;DIY&lt;/td&gt;&lt;td&gt;OpenTelemetry + Application Insights&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;One-command deploy&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;&lt;CODE&gt;azd up&lt;/CODE&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2&gt;Testing locally&lt;/H2&gt;
&lt;P&gt;The dual-mode architecture means you can test the full pipeline without any cloud resources:&lt;/P&gt;
&lt;P&gt;&lt;IMG src="https://raw.githubusercontent.com/leestott/hosted-agents-langchain-samples/main/screenshots/04-ui-specialist-agents.png" alt="Research Agent with Bing Grounding and Diagnostics Agent with Code Interpreter" /&gt;&lt;/P&gt;
&lt;LI-CODE lang=""&gt;# Create virtual environment and install dependencies
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Run locally (agents use LangChain tools)
python -m src&lt;/LI-CODE&gt;
&lt;P&gt;Then open &lt;CODE&gt;http://localhost:8080&lt;/CODE&gt; in your browser to use the built-in web UI, or call the API directly:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;curl -X POST http://localhost:8080/triage \
  -H "Content-Type: application/json" \
  -d '{"message": "Getting 503 errors on /api/orders since 2pm"}'&lt;/LI-CODE&gt;
&lt;P&gt;The response includes a coordinator summary, specialist results with confidence scores, and the tools each agent used.&lt;/P&gt;
&lt;H2&gt;Running the tests&lt;/H2&gt;
&lt;P&gt;The project includes a comprehensive test suite covering routing logic, tool behaviour, agent execution, and HTTP endpoints:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;curl -X POST http://localhost:8080/triage \
  -H "Content-Type: application/json" \
  -d '{"message": "Getting 503 errors on /api/orders since 2pm"}'&lt;/LI-CODE&gt;
&lt;P&gt;Tests run entirely in local mode, so no cloud credentials are needed.&lt;/P&gt;
&lt;H2&gt;Key takeaways for LangChain developers&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Keep your LangChain abstractions.&lt;/STRONG&gt; The &lt;CODE&gt;@tool&lt;/CODE&gt; decorator, &lt;CODE&gt;RunnableLambda&lt;/CODE&gt; chains, and composable pipelines all work exactly as you expect.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Add cloud capabilities incrementally.&lt;/STRONG&gt; Start local, then enable Bing Grounding, Code Interpreter, and managed hosting when you are ready.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Use the dual-mode pattern.&lt;/STRONG&gt; Every agent should work locally with LangChain tools and on Foundry with the Microsoft Agent Framework. This makes development fast and deployment seamless.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Let &lt;CODE&gt;azd&lt;/CODE&gt; handle infrastructure.&lt;/STRONG&gt; One command provisions everything: containers, identity, monitoring, and model deployments.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Security comes free.&lt;/STRONG&gt; Managed Identity means no API keys in your code. Non-root containers, RBAC, and disabled ACR admin are all configured by default.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Get started&lt;/H2&gt;
&lt;P&gt;Clone the sample repository and try it yourself:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;git clone https://github.com/leestott/hosted-agents-langchain-samples
cd hosted-agents-langchain-samples
python -m venv .venv &amp;amp;&amp;amp; source .venv/bin/activate
pip install -r requirements.txt
python -m src&lt;/LI-CODE&gt;
&lt;P&gt;Open&amp;nbsp;&lt;CODE&gt;http://localhost:8080&lt;/CODE&gt; to interact with the copilot through the web UI. When you are ready for production, run &lt;CODE&gt;azd up&lt;/CODE&gt; and your multi-agent system is live on Microsoft Foundry.&lt;/P&gt;
&lt;H2&gt;Resources&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/ai-services/agents/" target="_blank"&gt;Microsoft Agent Framework for Python documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/ai-services/agents/concepts/hosted-agents" target="_blank"&gt;Microsoft Foundry Hosted Agents&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/developer/azure-developer-cli/" target="_blank"&gt;Azure Developer CLI (azd)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://python.langchain.com/" target="_blank"&gt;LangChain documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/azure/ai-foundry/" target="_blank"&gt;Microsoft Foundry documentation&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 26 Mar 2026 07:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/educator-developer-blog/langchain-multi-agent-systems-with-microsoft-agent-framework-and/ba-p/4504863</guid>
      <dc:creator>Lee_Stott</dc:creator>
      <dc:date>2026-03-26T07:00:00Z</dc:date>
    </item>
    <item>
      <title>Build an Offline Hybrid RAG Stack with ONNX and Foundry Local</title>
      <link>https://techcommunity.microsoft.com/t5/educator-developer-blog/build-an-offline-hybrid-rag-stack-with-onnx-and-foundry-local/ba-p/4503589</link>
      <description>&lt;MAIN&gt;
&lt;ARTICLE&gt;&lt;HEADER&gt;
&lt;P class="lead"&gt;If you are building local AI applications, basic retrieval augmented generation is often only the starting point. This sample shows a more practical pattern: combine lexical retrieval, ONNX based semantic embeddings, and a Foundry Local chat model so the assistant stays grounded, remains offline, and degrades cleanly when the semantic path is unavailable.&lt;/P&gt;
&lt;/HEADER&gt;
&lt;SECTION&gt;
&lt;H2&gt;Why this sample is worth studying&lt;/H2&gt;
&lt;P&gt;Many local RAG samples rely on a single retrieval strategy. That is usually enough for a proof of concept, but it breaks down quickly in production. Exact keywords, acronyms, and document codes behave differently from natural language questions and paraphrased requests.&lt;/P&gt;
&lt;P&gt;This repository keeps the original lexical retrieval path, adds local ONNX embeddings for semantic search, and fuses both signals in a hybrid ranking mode. The generation step runs through Foundry Local, so the entire assistant can remain on device.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Lexical mode handles exact terms and structured vocabulary.&lt;/LI&gt;
&lt;LI&gt;Semantic mode handles paraphrases and more natural language phrasing.&lt;/LI&gt;
&lt;LI&gt;Hybrid mode combines both and is usually the best default.&lt;/LI&gt;
&lt;LI&gt;Lexical fallback protects the user experience if the embedding pipeline cannot start.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Architectural overview&lt;/H2&gt;
&lt;P&gt;The sample has two main flows: an offline ingestion pipeline and a local query pipeline.&lt;/P&gt;
&lt;FIGURE&gt;&lt;IMG src="https://raw.githubusercontent.com/leestott/local-hybrid-retrival-onnx/main/screenshots/07-architecture-diagram.png" alt="Architecture diagram showing the ingestion pipeline and local query pipeline" /&gt;
&lt;FIGCAPTION&gt;The architecture splits cleanly into offline ingestion at the top and runtime query handling at the bottom.&lt;/FIGCAPTION&gt;
&lt;/FIGURE&gt;
&lt;H3&gt;Offline ingestion pipeline&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;Read Markdown files from &lt;CODE&gt;docs/&lt;/CODE&gt;.&lt;/LI&gt;
&lt;LI&gt;Parse front matter and split each document into overlapping chunks.&lt;/LI&gt;
&lt;LI&gt;Generate dense embeddings when the ONNX model is available.&lt;/LI&gt;
&lt;LI&gt;Store chunks in SQLite with both sparse lexical features and optional dense vectors.&lt;/LI&gt;
&lt;/OL&gt;
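&lt;P&gt;The chunking in step 2 can be sketched as a simple word-window splitter. This is an illustrative sketch, not the repository's exact implementation; it assumes word-based sizing and reuses the sample's defaults of 200-unit chunks with 25 units of overlap (&lt;CODE&gt;chunkSize&lt;/CODE&gt; and &lt;CODE&gt;chunkOverlap&lt;/CODE&gt; in &lt;CODE&gt;src/config.js&lt;/CODE&gt;):&lt;/P&gt;
&lt;LI-CODE lang=""&gt;// Illustrative word-based overlapping chunker (not the repo's exact code).
// Each chunk starts (chunkSize - chunkOverlap) words after the previous one,
// so adjacent chunks share chunkOverlap words of context.
function chunkDocument(text, chunkSize = 200, chunkOverlap = 25) {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks = [];
  const step = chunkSize - chunkOverlap;
  let start = 0;
  while (true) {
    chunks.push(words.slice(start, start + chunkSize).join(" "));
    if (start + chunkSize &gt;= words.length) break;
    start += step;
  }
  return chunks;
}&lt;/LI-CODE&gt;
&lt;P&gt;The overlap matters for retrieval quality: a fact that straddles a chunk boundary still appears intact in at least one chunk.&lt;/P&gt;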
&lt;H3&gt;Local query pipeline&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;The browser posts a question to the Express API.&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;ChatEngine&lt;/CODE&gt; resolves the requested retrieval mode.&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;VectorStore&lt;/CODE&gt; retrieves lexical, semantic, or hybrid results.&lt;/LI&gt;
&lt;LI&gt;The prompt is assembled with the retrieved context and sent to a Foundry Local chat model.&lt;/LI&gt;
&lt;LI&gt;The answer is returned with source references and retrieval metadata.&lt;/LI&gt;
&lt;/OL&gt;
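&lt;P&gt;Step 4, prompt assembly, can be sketched as follows. The &lt;CODE&gt;buildPrompt&lt;/CODE&gt; helper and the message shapes are illustrative assumptions, not the repository's exact code; they show the general pattern of injecting labelled source chunks into the system message so the model can stay grounded and the UI can cite sources:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;// Illustrative prompt assembly (names are assumptions, not the repo's API).
// Each retrieved chunk is labelled with its source so answers can cite evidence.
function buildPrompt(question, chunks) {
  const context = chunks
    .map(function (chunk, i) {
      return "[Source " + (i + 1) + ": " + chunk.source + "]\n" + chunk.text;
    })
    .join("\n\n");
  return [
    { role: "system", content: "Answer only from the context below.\n\n" + context },
    { role: "user", content: question },
  ];
}&lt;/LI-CODE&gt;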
&lt;FIGURE&gt;&lt;IMG src="https://raw.githubusercontent.com/leestott/local-hybrid-retrival-onnx/main/screenshots/08-rag-flow-sequence.png" alt="Sequence diagram showing lexical and hybrid retrieval flow" /&gt;
&lt;FIGCAPTION&gt;The sequence diagram shows the difference between lexical retrieval and hybrid retrieval. In hybrid mode, the query is embedded first, then lexical and semantic scores are fused before prompt assembly.&lt;/FIGCAPTION&gt;
&lt;/FIGURE&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Repository structure and core components&lt;/H2&gt;
&lt;P&gt;The implementation is compact and readable. The main files to understand are listed below.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;CODE&gt;src/config.js&lt;/CODE&gt;: retrieval defaults, paths, and model settings.&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;src/embeddingEngine.js&lt;/CODE&gt;: local ONNX embedding generation through Transformers.js.&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;src/vectorStore.js&lt;/CODE&gt;: SQLite storage plus lexical, semantic, and hybrid ranking.&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;src/chatEngine.js&lt;/CODE&gt;: retrieval mode resolution, prompt assembly, and Foundry Local model execution.&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;src/ingest.js&lt;/CODE&gt;: document ingestion and embedding generation during indexing.&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;src/server.js&lt;/CODE&gt;: REST endpoints, streaming endpoints, upload support, and health reporting.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Getting started&lt;/H2&gt;
&lt;P&gt;To run the sample, you need Node.js 20 or newer, Foundry Local, and a local ONNX embedding model. The default model path is &lt;CODE&gt;models/embeddings/bge-small-en-v1.5&lt;/CODE&gt;.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;cd c:\Users\leestott\local-hybrid-retrival-onnx 
npm install huggingface-cli 
download BAAI/bge-small-en-v1.5 --local-dir models/embeddings/bge-small-en-v1.5 
npm run ingest 
npm start&lt;/LI-CODE&gt;
&lt;P&gt;Ingestion writes the local SQLite database to &lt;CODE&gt;data/rag.db&lt;/CODE&gt;. If the embedding model is available, each chunk gets a dense vector as well as lexical features. If the embedding model is missing, ingestion still succeeds and the application remains usable in lexical mode.&lt;/P&gt;
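&lt;P&gt;That optional-embedding behaviour can be sketched like this. The &lt;CODE&gt;store&lt;/CODE&gt; and &lt;CODE&gt;embedder&lt;/CODE&gt; objects are hypothetical stand-ins for the sample's storage and embedding components: lexical features are always written, while the dense vector is attached only when an embedding engine is available.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;// Illustrative sketch of optional embeddings during ingestion.
// store and embedder are hypothetical stand-ins, not the repo's exact objects.
async function indexChunk(store, chunk, embedder) {
  const record = { ...chunk, embedding: null };
  if (embedder) {
    // Attach a dense vector only when the semantic path is up.
    record.embedding = await embedder.embed(chunk.text);
  }
  // Lexical features are always persisted, so lexical mode keeps working.
  store.insert(record);
}&lt;/LI-CODE&gt;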
&lt;DIV class="note"&gt;Best practice: local AI applications should treat model files, SQLite data, and native runtime compatibility as part of the deployable system, not as optional developer conveniences.&lt;/DIV&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Code walkthrough&lt;/H2&gt;
&lt;H3&gt;1. Retrieval configuration&lt;/H3&gt;
&lt;P&gt;The sample makes its retrieval behaviour explicit in configuration. That is useful for testing and for operator visibility.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;export const config = {
  model: "phi-3.5-mini",
  docsDir: path.join(ROOT, "docs"),
  dbPath: path.join(ROOT, "data", "rag.db"),
  chunkSize: 200,
  chunkOverlap: 25,
  topK: 3,
  retrievalMode: process.env.RETRIEVAL_MODE || "hybrid",
  retrievalModes: ["lexical", "semantic", "hybrid"],
  fallbackRetrievalMode: "lexical",
  retrievalWeights: {
    lexical: 0.45,
    semantic: 0.55,
  },
};&lt;/LI-CODE&gt;
&lt;P&gt;Those defaults tell you a lot about the intended operating profile. Chunks are small, the number of returned chunks is low, and the fallback path is explicit.&lt;/P&gt;
&lt;H3&gt;2. Local ONNX embeddings&lt;/H3&gt;
&lt;P&gt;The embedding engine disables remote model loading and uses only local files. That matters for privacy, repeatability, and air-gapped operation.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;env.allowLocalModels = true;
env.allowRemoteModels = false;

this.extractor = await pipeline("feature-extraction", resolvedPath, {
  local_files_only: true,
});

const output = await this.extractor(text, {
  pooling: "mean",
  normalize: true,
});&lt;/LI-CODE&gt;
&lt;P&gt;The mean pooling and normalisation steps make the vectors suitable for cosine similarity based ranking.&lt;/P&gt;
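&lt;P&gt;Because the vectors are unit length after normalisation, cosine similarity reduces to a plain dot product. A minimal sketch:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;// For L2-normalised vectors, cosine similarity is just the dot product,
// because dividing by the two vector norms divides by 1.
function cosineSimilarity(a, b) {
  return a.reduce(function (sum, value, i) {
    return sum + value * b[i];
  }, 0);
}&lt;/LI-CODE&gt;
&lt;P&gt;This is why normalising at embedding time pays off: ranking at query time needs no per-comparison norm computation.&lt;/P&gt;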
&lt;H3&gt;3. Hybrid storage and ranking in SQLite&lt;/H3&gt;
&lt;P&gt;Instead of adding a separate vector database, the sample stores lexical and semantic representations in the same SQLite table. That keeps the local footprint low and the implementation easy to debug.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;searchHybrid(query, queryEmbedding, topK = 5, weights = { lexical: 0.45, semantic: 0.55 }) {
  const lexicalResults = this.searchLexical(query, topK * 3);
  const semanticResults = this.searchSemantic(queryEmbedding, topK * 3);

  if (semanticResults.length === 0) {
    return lexicalResults.slice(0, topK).map((row) =&amp;gt; ({
      ...row,
      retrievalMode: "lexical",
    }));
  }

  const fused = [...combined.values()].map((row) =&amp;gt; ({
    ...row,
    score: (row.lexicalScore * lexicalWeight) + (row.semanticScore * semanticWeight),
  }));

  fused.sort((a, b) =&amp;gt; b.score - a.score);
  return fused.slice(0, topK);
}&lt;/LI-CODE&gt;
&lt;P&gt;The important point is not just the weighted fusion. It is the fallback behaviour. If semantic retrieval cannot provide results, the user still gets lexical grounding instead of an empty context window.&lt;/P&gt;
&lt;H3&gt;4. Retrieval mode resolution in ChatEngine&lt;/H3&gt;
&lt;P&gt;&lt;CODE&gt;ChatEngine&lt;/CODE&gt; keeps the runtime behaviour predictable. It validates the requested mode and falls back to lexical search when semantic retrieval is unavailable.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;resolveRetrievalMode(requestedMode) {
  const desiredMode = config.retrievalModes.includes(requestedMode)
    ? requestedMode
    : config.retrievalMode;

  if ((desiredMode === "semantic" || desiredMode === "hybrid") &amp;amp;&amp;amp; !this.semanticAvailable) {
    return config.fallbackRetrievalMode;
  }

  return desiredMode;
}&lt;/LI-CODE&gt;
&lt;P&gt;This is a sensible production design because local runtime failures are common. Missing model files or native dependency mismatches should reduce quality, not crash the entire assistant.&lt;/P&gt;
&lt;H3&gt;5. Foundry Local model management&lt;/H3&gt;
&lt;P&gt;The sample uses &lt;CODE&gt;FoundryLocalManager&lt;/CODE&gt; to discover, download, cache, and load the configured chat model.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;const manager = FoundryLocalManager.create({ appName: "gas-field-local-rag" });
const catalog = manager.catalog;

this.model = await catalog.getModel(config.model);

if (!this.model.isCached) {
  await this.model.download((progress) =&amp;gt; {
    const pct = Math.round(progress * 100);
    this._emitStatus("download", `Downloading ${this.modelAlias}... ${pct}%`, progress);
  });
}

await this.model.load();
this.chatClient = this.model.createChatClient();
this.chatClient.settings.temperature = 0.1;&lt;/LI-CODE&gt;
&lt;P&gt;This gives the app a better local startup experience. The server can expose a status stream while the model initialises in the background.&lt;/P&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;User experience and screenshots&lt;/H2&gt;
&lt;P&gt;The client is intentionally simple, which makes it useful during evaluation. You can switch retrieval mode, test questions quickly, and inspect the retrieved sources.&lt;/P&gt;
&lt;FIGURE&gt;&lt;IMG src="https://raw.githubusercontent.com/leestott/local-hybrid-retrival-onnx/main/screenshots/01-landing-page.png" alt="Landing page showing the gas field support agent UI in hybrid mode" /&gt;
&lt;FIGCAPTION&gt;The landing page exposes retrieval mode directly in the UI. That makes it easy to compare lexical, semantic, and hybrid behaviour during testing.&lt;/FIGCAPTION&gt;
&lt;/FIGURE&gt;
&lt;FIGURE&gt;&lt;IMG src="https://raw.githubusercontent.com/leestott/local-hybrid-retrival-onnx/main/screenshots/04-sources-panel.png" alt="Chat response showing sources panel and hybrid retrieval scores" /&gt;
&lt;FIGCAPTION&gt;The sources panel shows grounding evidence and retrieval scores, which is useful when validating whether better answers are coming from better retrieval or just model phrasing.&lt;/FIGCAPTION&gt;
&lt;/FIGURE&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Best practices for ONNX RAG and Foundry Local&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Keep lexical fallback alive. Exact identifiers and runtime failures both make this necessary.&lt;/LI&gt;
&lt;LI&gt;Persist sparse and dense features together where possible. It simplifies debugging and operational reasoning.&lt;/LI&gt;
&lt;LI&gt;Use small chunks and conservative &lt;CODE&gt;topK&lt;/CODE&gt; values for local context budgets.&lt;/LI&gt;
&lt;LI&gt;Expose health and status endpoints so users can see when the model is still loading or embeddings are unavailable.&lt;/LI&gt;
&lt;LI&gt;Test retrieval quality separately from generation quality.&lt;/LI&gt;
&lt;LI&gt;Pin and validate native runtime dependencies, especially ONNX Runtime, before tuning prompts.&lt;/LI&gt;
&lt;/UL&gt;
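&lt;P&gt;The health and status advice above can be as small as a function that summarises runtime state for an endpoint to return. This is a hypothetical sketch; the &lt;CODE&gt;state&lt;/CODE&gt; field names are illustrative, not the sample's exact ones:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;// Hypothetical health summary for an /api/health style endpoint.
// Field names on state are illustrative assumptions.
function healthReport(state) {
  return {
    status: state.modelReady ? "ready" : "loading",
    retrievalMode: state.semanticAvailable ? "hybrid" : "lexical",
  };
}&lt;/LI-CODE&gt;
&lt;P&gt;Surfacing the effective retrieval mode, not just liveness, tells users whether answers are currently hybrid-grounded or running on the lexical fallback.&lt;/P&gt;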
&lt;DIV class="note"&gt;Practical warning: this repository already shows why runtime validation matters. A local app can ingest documents successfully and still fail at model initialisation if the native runtime stack is misaligned.&lt;/DIV&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;How this compares with RAG and CAG&lt;/H2&gt;
&lt;P&gt;The strongest value in this sample comes from where it sits between a basic local RAG baseline and a curated CAG design.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Dimension&lt;/th&gt;&lt;th&gt;Classic local RAG&lt;/th&gt;&lt;th&gt;This hybrid ONNX RAG sample&lt;/th&gt;&lt;th&gt;CAG&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Context assembly&lt;/td&gt;&lt;td&gt;Retrieve chunks at query time, often lexically, then inject them into the prompt.&lt;/td&gt;&lt;td&gt;Retrieve chunks at query time with lexical, semantic, or fused scoring, then inject the strongest results into the prompt.&lt;/td&gt;&lt;td&gt;Use a prepared or cached context pack instead of fresh retrieval for every request.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Main strength&lt;/td&gt;&lt;td&gt;Easy to implement and easy to explain.&lt;/td&gt;&lt;td&gt;Better recall for paraphrases without giving up exact match behaviour or offline execution.&lt;/td&gt;&lt;td&gt;Predictable prompts and low query time overhead.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Main weakness&lt;/td&gt;&lt;td&gt;Misses synonyms and natural language reformulations.&lt;/td&gt;&lt;td&gt;More moving parts, larger local asset footprint, and native runtime compatibility to manage.&lt;/td&gt;&lt;td&gt;Coverage depends on curation quality and goes stale more easily.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Failure behaviour&lt;/td&gt;&lt;td&gt;Weak retrieval leads to weak grounding.&lt;/td&gt;&lt;td&gt;Semantic failure can degrade to lexical retrieval if designed properly, which this sample does.&lt;/td&gt;&lt;td&gt;Prepared context can be too narrow for new or unexpected questions.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Best fit&lt;/td&gt;&lt;td&gt;Simple local assistants and proof of concept systems.&lt;/td&gt;&lt;td&gt;Offline copilots and technical assistants that need stronger recall across varied phrasing.&lt;/td&gt;&lt;td&gt;Stable workflows with tightly bounded, curated 
knowledge.&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H3&gt;Related samples&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Foundry Local RAG: &lt;A class="lia-external-url" href="https://github.com/leestott/local-rag" target="_blank"&gt;https://github.com/leestott/local-rag&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Foundry Local CAG: &lt;A class="lia-external-url" href="https://github.com/leestott/local-cag" target="_blank"&gt;https://github.com/leestott/local-cag&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Foundry Local hybrid-retrival-onnx: &lt;A class="lia-external-url" href="https://github.com/leestott/local-hybrid-retrival-onnx" target="_blank"&gt;https://github.com/leestott/local-hybrid-retrival-onnx&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Specific benefits of this hybrid approach over classic RAG&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;It captures paraphrased questions that lexical search would often miss.&lt;/LI&gt;
&lt;LI&gt;It still preserves exact match performance for codes, terms, and product names.&lt;/LI&gt;
&lt;LI&gt;It gives operators a controlled degradation path when the semantic stack is unavailable.&lt;/LI&gt;
&lt;LI&gt;It stays local and inspectable without introducing a separate hosted vector service.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Specific differences from CAG&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;CAG shifts effort into context curation before the request. This sample retrieves evidence dynamically at runtime.&lt;/LI&gt;
&lt;LI&gt;CAG can be faster for fixed workflows, but it is usually less flexible when the document set changes.&lt;/LI&gt;
&lt;LI&gt;This hybrid RAG design is better suited to open ended knowledge search and growing document collections.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;What to validate before shipping&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Measure retrieval quality in each mode using exact term, acronym, and paraphrase queries.&lt;/LI&gt;
&lt;LI&gt;Check that sources shown in the UI reflect genuinely distinct evidence, not repeated chunks.&lt;/LI&gt;
&lt;LI&gt;Confirm the application remains usable when semantic retrieval is unavailable.&lt;/LI&gt;
&lt;LI&gt;Verify ONNX Runtime compatibility on the real target machines, not only on the development laptop.&lt;/LI&gt;
&lt;LI&gt;Test model download, cache, and startup behaviour with a clean environment.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/SECTION&gt;
&lt;SECTION&gt;
&lt;H2&gt;Final take&lt;/H2&gt;
&lt;P&gt;For developers getting started with ONNX RAG and Foundry Local, this sample is a good technical reference because it demonstrates a realistic local architecture rather than a minimal demo. It shows how to build a grounded assistant that remains offline, supports multiple retrieval modes, and fails gracefully.&lt;/P&gt;
&lt;P&gt;Compared with classic local RAG, the hybrid design provides better recall and better resilience. Compared with CAG, it remains more flexible for changing document sets and less dependent on pre-curated context packs. If you want a practical starting point for offline grounded AI on developer workstations or edge devices, this is the most balanced pattern in the repository set.&lt;/P&gt;
&lt;/SECTION&gt;
&lt;/ARTICLE&gt;
&lt;/MAIN&gt;</description>
      <pubDate>Thu, 26 Mar 2026 07:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/educator-developer-blog/build-an-offline-hybrid-rag-stack-with-onnx-and-foundry-local/ba-p/4503589</guid>
      <dc:creator>Lee_Stott</dc:creator>
      <dc:date>2026-03-26T07:00:00Z</dc:date>
    </item>
    <item>
      <title>Advice for Startups: Build Without Waiting with Replit</title>
      <link>https://techcommunity.microsoft.com/t5/student-developer-blog/advice-for-startups-build-without-waiting-with-replit/ba-p/4505603</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Before a single feature is tested, founders often find themselves setting up environments, choosing frameworks, and trying to learn just enough to get started. What should be a quick step forward turns into unnecessary delay.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For teams in the Microsoft Imagine Cup, that time matters. They are building in real time, refining their solutions while balancing everything else that comes with being a student founder.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;During the Builder Series,&amp;nbsp;&lt;/SPAN&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/horacio-lopez/" target="_blank"&gt;Horacio Lopez&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;from&amp;nbsp;&lt;/SPAN&gt;&lt;A class="lia-external-url" href="https://replit.com/" target="_blank"&gt;Replit&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; walked through how founders can move from idea to product faster, sharing a build approach centered on starting immediately and learning along the way.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Start Before You Feel&amp;nbsp;Ready&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Early-stage founders often wait longer than they need to.&amp;nbsp;They look for the right tools, the right language, or the right structure before taking action.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;That hesitation slows progress.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Today, that model is shifting. Founders can begin with an idea and start building right away, using their tools to explore, test, and refine in real time. Instead of preparing to build, they are building to learn.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;That shift creates momentum&amp;nbsp;early, when&amp;nbsp;it matters most.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Turn&amp;nbsp;Building into Your Learning Process&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Software development has traditionally followed a sequence: learn first, then build.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;That sequence is changing.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With AI-supported environments, founders can understand how things work while they are actively creating them. They can ask questions within the build process, adjust in real time, and recognize patterns as they go.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This shortens the distance between&amp;nbsp;idea&amp;nbsp;and execution. It also allows founders to expand their capabilities without needing to master everything upfront.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Reduce Friction Across the Process&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Startups move quickly, and founders often shift between roles throughout the day. Product, development, and deployment are no longer separate phases. They are part of a continuous flow.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When those steps are spread across disconnected tools, progress slows.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;By bringing the build process into one environment,&amp;nbsp;Replit&amp;nbsp;helps reduce that friction. Founders can stay focused, move faster, and spend more time refining their product instead of managing&amp;nbsp;setup&amp;nbsp;and transitions.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;From Idea to Product&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Building a product will always require effort. That part does not change.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;What is changing is how quickly founders can move from concept to something real.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When the barrier to building is lowered, more ideas can be tested and improved. Founders are no longer limited by how much they know before they begin, but by how quickly they are willing to start.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For teams in the Imagine Cup and beyond, that shift is meaningful. Because progress is no longer defined by preparation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;It is defined by action.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 25 Mar 2026 15:45:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/student-developer-blog/advice-for-startups-build-without-waiting-with-replit/ba-p/4505603</guid>
      <dc:creator>StudentDeveloperTeam</dc:creator>
      <dc:date>2026-03-25T15:45:33Z</dc:date>
    </item>
  </channel>
</rss>

