The Future of Agentic AI: Inside Microsoft Agent Framework 1.0
Agentic AI is rapidly moving beyond demos and chatbots toward long‑running, autonomous systems that reason, call tools, collaborate with other agents, and operate reliably in production. On April 3, 2026, Microsoft marked a major milestone with the General Availability (GA) release of Microsoft Agent Framework 1.0, a production‑ready, open‑source framework for building agents and multi‑agent workflows in .NET and Python.

In this post, we’ll deep‑dive into:

- What Microsoft Agent Framework actually is
- Its core architecture and design principles
- What’s new in version 1.0
- How it differs from other agent frameworks
- When and how to use it—with real code examples

What Is Microsoft Agent Framework?

According to the official announcement, Microsoft Agent Framework is an open‑source SDK and runtime for building AI agents and multi‑agent workflows with strong enterprise foundations. Agent Framework provides two primary capability categories:

1. Agents

Agents are long‑lived runtime components that:

- Use LLMs to interpret inputs
- Call tools and MCP servers
- Maintain session state
- Generate responses

They are not just prompt wrappers, but stateful execution units.

2. Workflows

Workflows are graph‑based orchestration engines that:

- Connect agents and functions
- Enforce execution order
- Support checkpointing and human‑in‑the‑loop scenarios

This leads to a clean separation of responsibilities:

| Concern | Handled By |
| --- | --- |
| Reasoning & interpretation | Agent |
| Execution policy & control flow | Workflow |

This separation is a foundational design decision.

High‑Level Architecture

From the official overview, Agent Framework is composed of several core building blocks:

- Model clients (chat completions & responses)
- Agent sessions (state & conversation management)
- Context providers (memory and retrieval)
- Middleware pipeline (interception, filtering, telemetry)
- MCP clients (tool discovery and invocation)
- Workflow engine (graph‑based orchestration)

Conceptual Flow

🌟 What’s New in Version 1.0

Version 1.0 marks the transition from "Release Candidate" to "General Availability" (GA).

- Production-Ready Stability: Unlike the earlier experimental packages, 1.0 offers stable APIs, versioned releases, and a commitment to long-term support (LTS).
- A2A Protocol (Agent-to-Agent): A new structured messaging protocol that allows agents to communicate across different runtimes. For example, an agent built in Python can seamlessly coordinate with an agent running in a .NET environment.
- MCP (Model Context Protocol) Support: Full integration with the Model Context Protocol, enabling agents to dynamically discover and invoke external tools and data sources without manual integration code.
- Multi-Agent Orchestration Patterns: Stable implementations of complex patterns, including:
  - Sequential: Linear handoffs between specialized agents.
  - Group Chat: Collaborative reasoning where agents discuss and solve problems.
  - Magentic-One: A sophisticated pattern for task-oriented reasoning and planning.
- Middleware Pipeline: The new middleware architecture lets you inject logic into the agent's execution loop without modifying the core prompts. This is essential for Responsible AI (RAI), allowing you to add content safety filters, logging, and compliance checks globally.
- DevUI Debugger: A browser-based local debugger that provides a real-time visual representation of agent message flows, tool calls, and state changes.
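The Sequential orchestration pattern above is easy to picture in code. Here is a minimal sketch of a handoff between two specialized agents, reusing the `FoundryChatClient` abstraction from the Python example later in this post. The endpoint, model name, and instructions are placeholders, and the framework's workflow engine offers a first-class way to express this; treat the hand-rolled chaining below as illustration only.

```python
from agent_framework.foundry import FoundryChatClient
from azure.identity import AzureCliCredential

# Shared model client (endpoint and model name are placeholders)
client = FoundryChatClient(
    project_endpoint="https://your-foundry-service.services.ai.azure.com/api/projects/your-project",
    model="gpt-5.4-mini",
    credential=AzureCliCredential(),
)

# Two specialized agents: one drafts, one reviews
writer = client.as_agent(
    name="Writer",
    instructions="Draft a short, factual answer to the user's question.",
)
reviewer = client.as_agent(
    name="Reviewer",
    instructions="Review the draft for accuracy and tone, then return the final answer.",
)

async def sequential_handoff(question: str):
    # Step 1: the writer agent produces a draft
    draft = await writer.run(question)
    # Step 2: the reviewer agent refines it, a Sequential orchestration in miniature
    return await reviewer.run(f"Question: {question}\n\nDraft answer: {draft}")
```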
Code Examples

Creating a Simple Agent (C#)

From Microsoft Learn:

```csharp
using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;

AIAgent agent = new AIProjectClient(
    new Uri("https://your-foundry-service.services.ai.azure.com/api/projects/your-project"),
    new AzureCliCredential())
    .AsAIAgent(
        model: "gpt-5.4-mini",
        instructions: "You are a friendly assistant. Keep your answers brief.");

Console.WriteLine(await agent.RunAsync("What is the largest city in France?"));
```

This shows:

- Provider‑agnostic model access
- Session‑aware agent execution
- Minimal setup for production agents

Creating a Simple Agent (Python)

```python
from agent_framework.foundry import FoundryChatClient
from azure.identity import AzureCliCredential

client = FoundryChatClient(
    project_endpoint="https://your-foundry-service.services.ai.azure.com/api/projects/your-project",
    model="gpt-5.4-mini",
    credential=AzureCliCredential(),
)

agent = client.as_agent(
    name="HelloAgent",
    instructions="You are a friendly assistant. Keep your answers brief.",
)

result = await agent.run("What is the largest city in France?")
print(result)
```

The same agent abstraction applies across languages.

When to Use Agents vs Workflows

Microsoft provides clear guidance:

| Use an Agent when… | Use a Workflow when… |
| --- | --- |
| Task is open‑ended | Steps are well‑defined |
| Autonomous tool use is needed | Execution order matters |
| Single decision point | Multiple agents/functions collaborate |

Key principle: If you can solve the task with deterministic code, do that instead of using an AI agent.

🔄 How It Differs from Other Frameworks

Microsoft Agent Framework 1.0 distinguishes itself by focusing on "Enterprise Readiness" and "Interoperability."

| Feature | Microsoft Agent Framework 1.0 | Semantic Kernel / AutoGen | LangChain / CrewAI |
| --- | --- | --- | --- |
| Philosophy | Unified, production-ready SDK. | Research-focused or tool-specific. | High-level, developer-friendly abstractions. |
| Integration | Deeply integrated with Microsoft Foundry and Azure. | Varied; often requires more glue code. | Generally cloud-agnostic. |
| Interoperability | Native A2A and MCP for cross-framework tasks. | Limited to internal ecosystem. | Uses proprietary connectors. |
| Runtime | Identical API parity for .NET and Python. | Primarily Python-first (SK has C#). | Primarily Python. |
| Control | Graph-based deterministic workflows. | More non-deterministic/experimental. | Mixture of role-based and agentic. |

🛠️ Key Technical Components

- Agent Harness: The execution layer that provides agents with controlled access to the shell, file system, and messaging loops.
- Agent Skills: A portable, file-based or code-defined format for packaging domain expertise.

Implementation Tip: If you are coming from Semantic Kernel, Microsoft provides migration assistants that analyze your existing code and generate step-by-step plans to upgrade to the new Agent Framework 1.0 standards.

Microsoft Agent Framework Version 1.0 | Microsoft Agent Framework
Agent Framework documentation

🎯 Summary

Microsoft Agent Framework 1.0 is the "grown-up" version of AI orchestration. By standardizing the way agents talk to each other (A2A), discover tools (MCP), and process information (Middleware), Microsoft has provided a clear path for taking AI experiments into production. For more detailed guides, check out the official Microsoft Agent Framework Documentation and the Microsoft Agent Framework - .NET AI Community Standup.

Build AI RAG Apps with LangChain, Azure DocumentDB and Microsoft Foundry: Step-by-Step Guide
Scenario Imagine you are building your company’s RAG chat application using Microsoft Foundry - Azure OpenAI and orchestrating the flow with LangChain. The chat experience works, but now it needs to be grounded in your company’s data. You generate embeddings and want to store and query them without adding another database or complex sync pipeline. Instead of stitching services together, you use Azure DocumentDB (with MongoDB compatibility) with built-in vector search to store your JSON data and embeddings in one place. You deploy the app to Azure App Service and quickly compare vector search alone versus a full RAG pipeline, sharing it with your team for testing. What will you learn? In this blog, you'll learn to: Create an Azure DocumentDB (with MongoDB compatibility) resource. Create an embeddings and a chat deployment in Microsoft Foundry Azure OpenAI portal. Create an Azure App Service website with continuous deployment from GitHub. Configure Azure App Service application settings to enable communication between Azure resources. Configure GitHub workflow to work successfully. What is the main objective? Build AI Powered RAG Application using LangChain, Microsoft Foundry Azure OpenAI, and Azure DocumentDB (with MongoDB compatibility): Step-by-Step Guide Prerequisites An Azure subscription. If you don’t already have one, you can sign up for an Azure free account. For students, you can use the free Azure for Students offer which doesn’t require a credit card only your school email. A GitHub account. Summary of the steps: Step 1: Create an Azure DocumentDB (with MongoDB compatibility) resource Step 2: Create a Microsoft Foundry - Azure OpenAI resource and Deploy chat and embedding Models Step 3: Create an Azure App Service and Deploy the RAG Chat Application Step 1: Create an Azure DocumentDB (with MongoDB compatibility) resource In this step, you'll: Open the Azure Portal. Create an Azure DocumentDB (with MongoDB compatibility) resource. Open the Azure Portal 1. Visit the Azure Portal https://portal.azure.com in your browser and sign in. Now you are inside the Azure portal! Create a new Azure DocumentDB (with MongoDB compatibility) resource In this step, you create an Azure DocumentDB (with MongoDB compatibility) resource to store your data, vector embedding, and perform vector search. 1. Type documentdb in the search bar at the top of the portal page and select Azure DocumentDB (with MongoDB compatibility) from the available options. 2. Select Create from the toolbar to start provisioning your new cluster. 3. Add the following information to create a resource: What Value Subscription Use your preferred subscription. It's advised to use the same subscription across all the resources that communicate with each other on Azure. Resource group Select Create new to create a new resource group. Enter a unique name for the resource group. Cluster name Enter a globally unique name. Location Select a region close to you for the best response time. For example, Select UK South. MongoDB version Select the latest available version of MongoDB 4. Select Configure to configure your cluster tier. 5. Add the following information to configure the cluster tier. You can scale it up later: What Value Cluster tier Select M25 tier, 2 (Burstable) vCores. Storage Select 32 GiB. 6. Select Save. 7. Enter the cluster Admin Username and Password and store them in a secure location. 8. Select Next to configure the networking settings. 9. 
Select Allow Public Access from Azure services and resources within the Azure to this cluster. 10. Select Add current IP address to the firewall rules to allow local access to the cluster. 11. Select Review + create. 12. Confirm your configuration settings and select Create to start provisioning the resource. Note: The cluster creation can take up to 10 minutes. It's recommended to move on with the rest of the steps and get back to it later. Step 2: Create a Microsoft Foundry - Azure OpenAI resource and Deploy chat and embedding Models In this step, you'll: Create a Microsoft Foundry Azure OpenAI resource. Create chat and embedding model deployments. Create an Azure OpenAI resource In this step, you create an Azure OpenAI Service resource that enables you to interact with different large language models (LLMs). 1. Type openai in the search bar at the top of the portal page and select Azure OpenAI from the available options. 2. Select Create from the toolbar then select Azure OpenAI to provision a new Azure OpenAI resource. 3. Add the following information to create a resource: What Value Subscription Use the same subscription you used to apply for Azure OpenAI access. Resource group Use the resource group you created in the previous step. Region Select a region close to you for the best response time. For example, Select UK South. Name Enter a globally unique name. Pricing tier Select S0. Currently, this is the only available pricing tier. 4. Now that the basic information is added, select Next to confirm your details and proceed to the next page. 5. Select Next to confirm your network details. 6. Select Next to confirm your tag details. 7. Confirm your configuration settings and select Create to start provisioning the resource. Wait for the deployment to finish. 8. After the deployment finishes, select Go to resource to inspect your created resource. Here, you can manage your resource and find important information like the endpoint URL and API keys. Create chat and embedding model deployments In this step, you create an Azure OpenAI embedding model deployment and a chat model deployment. Creating a deployment on your previously provisioned resource allows you to generate text embeddings (i.e. numerical representation for text) and have a natural language conversation with your data. 1. Select Go to Foundry portal from the toolbar to open the studio. 2. Select Deployments from the Shared resources left side menu to go to the deployments tab. 3. Select + Deploy model from the toolbar then select Deploy base model from the options. A Deploy model window opens. 4. Type gpt-4o-mini to search for the model then select it then select Use model. 5. Select Continue with existing setup to proceed to next step. 6. Refresh page and repeat previous steps to select the model then select Confirm. 7. Review selected options then select Deploy. 8. Select + Deploy model from the toolbar then select Deploy base model from the options. A Deploy model window opens. 9. Type text-embedding-3-small to search for the model then select it then select Confirm. 10. Review selected options then select Deploy. Step 3: Create an Azure App Service and Deploy the RAG Chat Application In this step, you'll: Fork the sample repository on GitHub. Create an Azure App Service resource with a deployment from GitHub. Modify Azure App Service Application settings in the Azure portal. Configure the workflow to deploy your application from GitHub. Test the website before and after adding the data. 
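Before walking through these sub-steps, it helps to see roughly what the deployed application does with the resources you are wiring together. The sketch below is illustrative only: it assumes the langchain-openai and langchain-community packages (using the AzureCosmosDBVectorSearch integration for MongoDB vCore-compatible clusters) and reads the same application settings you will configure later in this step; the actual sample repository may structure this differently.

```python
import os

from langchain_openai import AzureOpenAIEmbeddings, AzureChatOpenAI
from langchain_community.vectorstores.azure_cosmos_db import AzureCosmosDBVectorSearch

# Embeddings and chat models use the deployments created in Step 2.
# Endpoint, key, and API version are read from the AZURE_OPENAI_* app settings.
embeddings = AzureOpenAIEmbeddings(
    azure_deployment=os.environ["AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME"],
)
chat = AzureChatOpenAI(
    azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
)

# Connect to the DocumentDB (MongoDB-compatible) cluster created in Step 1.
vector_store = AzureCosmosDBVectorSearch.from_connection_string(
    connection_string=os.environ["AZURE_COSMOS_CONNECTION_STRING"],
    namespace=f"{os.environ['AZURE_COSMOS_DATABASE_NAME']}.{os.environ['AZURE_COSMOS_COLLECTION_NAME']}",
    embedding=embeddings,
    index_name=os.environ["AZURE_COSMOS_INDEX_NAME"],
)

def answer(question: str) -> str:
    # Retrieve the most relevant documents via vector search, then ground the chat model in them.
    docs = vector_store.similarity_search(question, k=3)
    context = "\n\n".join(d.page_content for d in docs)
    prompt = (
        "Answer using only the context below. If the answer is not there, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return chat.invoke(prompt).content
```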
Fork the Sample Repository on GitHub In this step, you create a copy from the source code on your GitHub account to be able to edit it and use it later. 1. Visit the sample github.com/Azure-Samples/Cosmic-Food-RAG-app in your browser and sign in. 2. Select Fork from the top of the sample page. 3. Select an owner for the fork then, select Create fork. Create an Azure App Service resource with a deployment from GitHub In this step, you create an Azure App service resource and connect it with your GitHub account to deploy a Python application. 1. Type app service in the search bar at the top of the portal page and select App Services from the available options. 2. Select Create Web App from the toolbar to start provisioning a new web application. 3. Add the following information to fill in the basic configuration of the application: What Value Subscription Use the same subscription you used to apply for Azure OpenAI access. Resource group Use the same resource group you created before. Name Enter a unique name for your website. For example, cosmic-food-rag. Publish? Select Code. This option specifies whether your deployment consists of code or a container. Runtime stack Select Python 3.12. Operating System Select Linux. Region Select UK South. This is the region where the rest of the resources you created reside. 4. Add the following information to create the app service plan. You can scale it up later: What Value Linux Plan Select a pre-existing plan or create a new plan. Pricing Plan Select Basic B1. 5. Select Deployment from the toolbar to move to the deployment configuration tab. 6. Add the following information to enable continuous deployment from GitHub: What Value Continuous deployment Select Enable. GitHub account Select your GitHub account. Organization Select your organization. If you are using your personal account then select it. Repository Select Cosmic-Food-RAG-app. Branch Select main. 7. Select Review + create. 8. Confirm your configuration settings and select Create to start provisioning the resource. Wait for the deployment to finish. 9. After the deployment finishes, select Go to resource to inspect your created resource. Here, you can manage your resource and find important information like the application settings and logs. Modify Azure App service Application settings in the Azure portal In this step, you configure the Application settings to make the website able to communicate with other cloud resources. 1. In the Web App resource, select Environment variables from the left side menu. 2. Select + Add to add new environment variables to the function configuration. 3. Add the following names and values one by one and select Ok. Make sure to add your own values. These application settings are for the Azure OpenAI resources that you created: What Value OPENAI_API_VERSION 2024-10-21 AZURE_OPENAI_CHAT_DEPLOYMENT_NAME gpt-4o-mini AZURE_OPENAI_CHAT_MODEL_NAME gpt-4o-mini AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME text-embedding-3-small AZURE_OPENAI_EMBEDDINGS_MODEL_NAME text-embedding-3-small AZURE_OPENAI_EMBEDDINGS_DIMENSIONS 1536 AZURE_OPENAI_DEPLOYMENT_NAME <azureOpenAiResourceName> AZURE_OPENAI_ENDPOINT https://<azureOpenAiResourceName>.openai.azure.com/ AZURE_OPENAI_API_KEY <azureOpenAiResourceKey> You can get the Azure OpenAI key from the Azure OpenAI resource page. Select Keys and Endpoint from the Resource Management section and copy any of the available keys. 
These application settings are for Azure DocumentDB (with MongoDB compatibility): AZURE_COSMOS_USERNAME <documentUsername> AZURE_COSMOS_PASSWORD <documentPassword> AZURE_COSMOS_CONNECTION_STRING mongodb+srv://<user>:<password>@<clusterName>.global.mongocluster.cosmos.azure.com/?tls=true&authMechanism=SCRAM-SHA-256&retrywrites=false&maxIdleTimeMS=120000 You can get the DocumentDB connection string from the Azure DocumentDB (with MongoDB compatibility) resource page. Select Connection strings and copy the connection string. Make sure to replace the user and password with the ones you created. These application settings are new and are used for resources that will be created when the application starts you can use any value for them: AZURE_COSMOS_DATABASE_NAME <documentDatabaseName> ex. CosmicDB AZURE_COSMOS_COLLECTION_NAME <documentContainerName> ex. CosmicFoodCollection AZURE_COSMOS_INDEX_NAME <documentIndexName> ex. CosmicIndex 4. Select Apply to save your newly added environment variables. 5. Select Configuration then Stack settings to edit the application startup command. 6. Type entrypoint.sh in the startup command field then select Apply. Configure the Workflow to deploy your application from GitHub In this step, you modify the GitHub deployment workflow to point to the folder that contains the application. 1. Visit your forked repository on GitHub and notice the failing workflow. 2. Open the workflow file .github/workflows/main_cosmic-food-rag.yml. 3. Open the file and select the pen icon to edit it. 4. Modify line 41 from . to src/. 5. Remove the optional Local Build Section since the application already has tests that cover this part. 6. Add this section to Install Node 22 and build the static frontend. 7. Select Commit changes, and review your commit message and description. Select Commit changes. The final workflow file should look like this: # Docs for the Azure Web Apps Deploy action: https://github.com/Azure/webapps-deploy # More GitHub Actions for Azure: https://github.com/Azure/actions # More info on Python, GitHub Actions, and Azure App Service: https://aka.ms/python-webapps-actions name: Build and deploy Python app to Azure Web App - cosmic-food-rag on: push: branches: - main workflow_dispatch: jobs: build: runs-on: ubuntu-latest permissions: contents: read #This is required for actions/checkout steps: - uses: actions/checkout@v4 - name: Set up Node 22 uses: actions/setup-node@v6 with: node-version: 22 - name: Install Node Packages & Build Static Site run: cd frontend && npm install && npm run build # By default, when you enable GitHub CI/CD integration through the Azure portal, the platform automatically sets the SCM_DO_BUILD_DURING_DEPLOYMENT application setting to true. This triggers the use of Oryx, a build engine that handles application compilation and dependency installation (e.g., pip install) directly on the platform during deployment. Hence, we exclude the antenv virtual environment directory from the deployment artifact to reduce the payload size. - name: Upload artifact for deployment jobs uses: actions/upload-artifact@v4 with: name: python-app path: | src/ !antenv/ # 🚫 Opting Out of Oryx Build # If you prefer to disable the Oryx build process during deployment, follow these steps: # 1. Remove the SCM_DO_BUILD_DURING_DEPLOYMENT app setting from your Azure App Service Environment variables. # 2. 
Refer to sample workflows for alternative deployment strategies: https://github.com/Azure/actions-workflow-samples/tree/master/AppService deploy: runs-on: ubuntu-latest needs: build permissions: id-token: write #This is required for requesting the JWT contents: read #This is required for actions/checkout steps: - name: Download artifact from build job uses: actions/download-artifact@v4 with: name: python-app - name: Login to Azure uses: azure/login@v2 with: client-id: ${{ secrets.AZUREAPPSERVICE_CLIENTID_5672547ED09F46D59DD431ACF5A29F28 }} tenant-id: ${{ secrets.AZUREAPPSERVICE_TENANTID_0059913572C8467882D3999D0E0DD5B8 }} subscription-id: ${{ secrets.AZUREAPPSERVICE_SUBSCRIPTIONID_7C42E3352C5D47F084CB0CD14F549D27 }} - name: 'Deploy to Azure Web App' uses: azure/webapps-deploy@v3 id: deploy-to-webapp with: app-name: 'cosmic-food-rag' slot-name: 'Production'

8. Select Actions to review the workflow run status.

Test the website before and after adding the data

In this step, you test the application before adding the data, add the data, and test again.

1. Select the workflow name to open it and get the website URL.
2. Select any of the suggested messages or type your own and it should respond with No results found.
3. Navigate to your Azure App Service resource page and select SSH then select Go to open a new SSH page.
4. In the SSH terminal, run these commands:
   uv sync --active
   uv run --active ./scripts/add_data.py --file="./data/food_items.json"
5. Navigate back to the live website and type in the chat message Do you have any vegan food dishes? and it should now respond with the correct answer.

Congratulations! You successfully built the full application.

Clean Up

Once you finish experimenting on Microsoft Azure, you might want to delete the resources so your subscription doesn't keep incurring charges. You can delete the resource group, which removes everything inside it, or delete the resources one by one, whichever you prefer.

Conclusion

Congratulations! You've learned how to create an Azure DocumentDB (with MongoDB compatibility) cluster, how to create a Microsoft Foundry - Azure OpenAI resource, how to deploy an embedding model and a chat model from the Foundry portal, how to create an Azure App Service and configure continuous deployment with GitHub, and how to modify application settings to enable communication across Azure resources. By using these technologies, you can build a RAG chat application, with the option to perform vector search over your own data, and provide grounded (relevant) responses.

Next steps

Documentation
- Azure OpenAI in Microsoft Foundry models
- Understand embeddings in Azure OpenAI in Microsoft Foundry Models (classic)
- Azure DocumentDB (with MongoDB compatibility) documentation
- Integrated vector store in Azure DocumentDB
- LangChain Python documentation

Training Content
- Develop generative AI apps in Azure

Found this useful? Share it with others and follow me to get updates on:
- Twitter (twitter.com/john00isaac)
- LinkedIn (linkedin.com/in/john0isaac)

Feel free to share your comments and/or inquiries in the comment section below. See you in future demos!

Claim your IQ Series: Foundry IQ badge
The IQ Series kicked off with three Foundry IQ episodes, each paired with a hands-on cookbook. If you've worked through all three or you're planning to, there's now a digital badge waiting for you to claim!

What the badge represents

The IQ Series: Foundry IQ badge recognizes developers who've completed the full Foundry IQ curriculum end-to-end: not just watched the episodes, but deployed the Azure resources, run every notebook, and built working knowledge bases against live data. Earners have:

- Deployed AI Search, Azure OpenAI, a Foundry project, and Azure Blob Storage with seeded sample data
- Connected structured and unstructured sources into Foundry IQ
- Built and queried multi-source AI knowledge bases
- Grounded agent responses in permission-aware enterprise knowledge

Badges are issued by the Global AI Community, so you'll want an account there before you submit.

What the three episodes cover

Episode 1 — Unlocking Knowledge for Your Agents. Introduces Foundry IQ and the core ideas behind it. The episode explains how AI agents work with knowledge and walks through the main components of Foundry IQ that support knowledge-driven applications.

Episode 2 — Building the Data Pipeline with Knowledge Sources. Focuses on Knowledge Sources and how different types of content flow into Foundry IQ across SharePoint, Fabric, OneLake, Azure Blob Storage, Azure AI Search, and the web.

Episode 3 — Querying the Multi-Source AI Knowledge Bases. Dives into Knowledge Bases and how multiple knowledge sources can be organized behind a single endpoint. The episode demonstrates how AI systems query across these sources and synthesize information to answer complex questions.

Each episode is paired with a cookbook for hands-on learning, and each one reuses the same Azure deployment, so you set up once and build across all three.

How to claim the badge

Four steps, in order:

1. Fork the IQ Series repo and work through all three episode cookbooks in your fork. Commit your notebooks with cell outputs saved! That's the proof of completion.
2. Capture a final output screenshot for each episode. Your GitHub username or Azure resource name needs to be visible in the screenshot.
3. Submit a badge request issue. The template walks you through fork URLs, screenshots, and one brief technical takeaway per episode.
4. Complete the badge form. This step is required. Without the form, we can't issue the badge.

Why this badge is worth your time

The IQ Series recognizes your hands-on learning with real infrastructure, real indexed data, real agents and queries. If you're working on enterprise AI (grounding, retrieval, knowledge-aware agents), this is a concrete artifact that says: I've built this, end to end, on the actual platform.

Work IQ and Fabric IQ are coming next, and each phase will have its own badge. Foundry IQ is your head start on the full IQ Series.

👉 Start with the episodes or jump straight to the cookbooks if you prefer to learn by doing. Questions along the way? Create an issue in the repo or drop into our Discord. The Foundry IQ team and community are there to help.

Building an Enterprise HR Chatbot with Multi-Strategy RAG and Live Agent Handoff on Azure
HR teams deal with thousands of employee questions every day — policy lookups, leave balances, case escalations, and sensitive topics like harassment or misconduct. AI chatbots can handle the routine stuff and free up HR advisors for the hard cases. But most chatbot projects get stuck at basic Q&A. They can't handle multi-country policies, employee slang, or smooth handoffs to a real person.

This post covers how we built Eva, a production HR chatbot using Microsoft Bot Framework and Semantic Kernel on Azure. I'll focus on three problems and how we solved them:

- Getting accurate answers when employees and policy documents use different words
- Handing off to a live human advisor in real time
- Catching answer quality regressions automatically

Why basic RAG isn't enough for HR

Retrieval-Augmented Generation (RAG) — fetching relevant documents and feeding them to an LLM — is the standard approach. But plain RAG breaks down in HR for a few reasons:

- Vocabulary mismatch. An employee asks "How does misconduct affect my ACB?" but the policy document says "Annual Cash Bonus eligibility criteria." The search doesn't connect the two.
- Multi-country ambiguity. The same question can have different answers depending on the employee's country, grade, or role.
- Sensitive topics. Questions about harassment, disability, or whistleblowing should go to a human, not get an AI-generated answer.
- Ranking noise. Search results often include globally relevant but locally irrelevant documents.

Eva handles these with a layered pipeline: query augmentation → multi-index search → LLM reranking → answer generation → citation handling.

Architecture at a glance

| Layer | Technology |
| --- | --- |
| Bot framework | Microsoft Agents SDK (aiohttp) |
| LLM orchestration | Semantic Kernel |
| Primary LLM | Azure OpenAI Service (GPT-4.1 / GPT-4o) |
| Knowledge search | Azure AI Search (hybrid + vector) |
| Live agent chat | Salesforce MIAW via server-sent events |
| Evaluation | Azure AI Evaluation SDK + custom LLM judge |
| Config | Pydantic-settings + Azure App Configuration + Key Vault |

Four retrieval strategies, controlled by feature flags

Instead of one search approach, Eva supports four — toggled by feature flags so we can A/B test per country without code changes. They run in a priority cascade:

1. HyDE (Hypothetical Document Embeddings). Instead of searching with the employee's question, the LLM first generates a hypothetical policy document that would answer it. We embed that synthetic document and use it as the search query. Since a hypothetical answer is closer in embedding space to the real answer than the original question is, this bridges vocabulary gaps effectively.
2. Step-back prompting. The LLM broadens the question. "How does misconduct affect my ACB?" becomes "What is the Annual Cash Bonus policy and what factors affect eligibility?" This works well when answers live in broader policy sections.
3. Query rewrite. The LLM expands abbreviations and adds HR domain context, then runs a hybrid (text + vector) search.
4. Standard search (fallback). Basic intent classification with hybrid search. No augmentation.

All four strategies return the same Pydantic model, so the rest of the pipeline doesn't care which one ran. The team can enable HyDE globally, roll out step-back to specific countries, or revert instantly if something underperforms.

LLM reranking

After pulling results from both a country-specific index and a global index, Eva optionally reranks them using a RankGPT-style approach — the LLM scores document relevance with a bias toward local content.
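As a concrete illustration, here is a minimal sketch of that reranking step. It calls Azure OpenAI directly through the openai package for brevity (Eva routes model calls through Semantic Kernel), and the deployment name, prompt wording, and document shape are placeholders rather than Eva's actual implementation.

```python
import json
from openai import AzureOpenAI

# Endpoint is a placeholder; the API key is read from the AZURE_OPENAI_API_KEY environment variable.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_version="2024-10-21",
)

def rerank(question: str, docs: list[dict], top_k: int = 5) -> list[dict]:
    """Ask the LLM to score each candidate chunk, biasing toward country-specific content."""
    listing = "\n".join(
        f"[{i}] ({d['scope']}) {d['text'][:300]}" for i, d in enumerate(docs)
    )
    prompt = (
        "Score each document 0-10 for how well it answers the question. "
        "Prefer 'local' (country-specific) documents over 'global' ones when both apply. "
        'Return JSON: {"scores": [{"id": 0, "score": 7}, ...]}\n\n'
        f"Question: {question}\n\nDocuments:\n{listing}"
    )
    response = client.chat.completions.create(
        model="gpt-4.1",  # deployment name is a placeholder
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    scores = json.loads(response.choices[0].message.content)["scores"]
    ranked = sorted(scores, key=lambda s: s["score"], reverse=True)
    return [docs[s["id"]] for s in ranked[:top_k]]
```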
If reranking fails for any reason, it falls back to the original ordering so the pipeline keeps moving.

Answer generation with local vs. global context

The answer stage separates retrieved documents into local context (country-specific) and global context (company-wide), injected as distinct prompt sections. The LLM returns a structured response with reasoning, the actual answer, citations, and a coverage classification (full, partial, or denial).

Prompts are stored as version-controlled .txt files with per-model variants (e.g., gpt-4o.txt, gpt-4.1.txt), resolved at runtime. This makes prompts reviewable in PRs and deployable without code changes.

Live agent handoff with Salesforce

When Eva determines a question needs a human — sensitive topic, complex case, or the employee simply asks — it hands off to a Salesforce advisor in real time.

- SSE streaming. Eva keeps a persistent HTTP connection to Salesforce for real-time messages, typing indicators, and session end signals.
- Session resilience. Session state persists across three layers — in-memory cache, Azure Cosmos DB, and Bot Framework turn state — to survive restarts and failovers.
- Message delivery workers. Each session has a dedicated async worker with exponential backoff retry. Overflow messages go to a failed messages list rather than being silently dropped.
- Queue position updates. While employees wait, Eva queries Salesforce for queue position and sends rate-limited updates.
- Context handoff. On session start, Eva sends the full conversation transcript so advisors don't ask employees to repeat themselves.

Automated evaluation

Eva includes an evaluation framework that runs as a separate process, testing against ground-truth Q&A pairs from CSV files. Factual questions are scored using Azure AI's SimilarityEvaluator on a 1–5 scale, with optional relevance and groundedness checks. Sensitive questions (harassment, disability, whistleblowing) use a custom LLM judge that checks whether the response acknowledges sensitivity and directs the employee to create a case or speak with an advisor.

A deviation detector flags score drops between runs. SQLite stores results for trending, and Application Insights powers dashboards. Long evaluation runs support resume — the framework skips already-completed test cases on restart.

Key takeaways

- Make retrieval strategies swappable. Feature flags let you A/B test without redeploying.
- Separate local and global knowledge explicitly. Don't rely on the LLM to figure out which country's policy applies.
- Invest in evaluation early. Ground-truth datasets with factual and behavioral scoring catch regressions that manual testing misses.
- Build resilience into live agent handoff. Multi-tier session recovery and retry logic prevent dropped conversations.
- Treat prompts as code. File-based, model-variant-aware prompts are easier to maintain than inline strings.
- Use Pydantic for structured LLM outputs. Typed models catch bad output at the validation boundary instead of letting it propagate.

Get started

- Semantic Kernel documentation — LLM orchestration with plugins and structured outputs
- Azure OpenAI Service quickstart — Deploy GPT-4o or GPT-4.1
- Azure AI Search vector search tutorial — Hybrid and vector search indices
- Microsoft Bot Framework SDK — Build bots for Teams and web
- Azure AI Evaluation SDK — Score for similarity, relevance, and groundedness

Why Data Platforms Must Become Intelligence Platforms for AI Agents to Work
The promise and the gap Your organization has invested in an AI agent. You ask it: "Prepare a summary of Q3 revenue by region, including year-over-year trends and top product lines." The agent finds revenue numbers in a SQL warehouse, product metadata in Dataverse, regional mappings in SharePoint, historical data in Azure Blob Storage, and organizational context in Microsoft Graph. Five data sources. Five schemas. No shared definitions. The result? The agent hallucinates, returns incomplete data, or asks a dozen clarifying questions that defeat its purpose. This isn't a model limitation — modern AI models are highly capable. The real constraint is that enterprise data is not structured for reasoning. Traditional data platforms were built for humans to query. Intelligence platforms must be built for agents to _reason_ over. That distinction is the subject of this post. What you'll understand Why fragmented enterprise data blocks effective AI agents What distinguishes a storage platform from an intelligence platform How Microsoft Fabric and Azure AI Foundry work together to enable trustworthy, agent-ready data access The enterprise pain: Fragmented data breaks AI agents Enterprise data is spread across relational databases, data lakes, business applications, collaboration platforms, third-party APIs, and Microsoft Graph — each with its own schema and security model. Humans navigate this fragmentation through institutional knowledge and years of muscle memory. A seasoned analyst knows that "revenue" in the data warehouse means net revenue after returns, while "revenue" in the CRM means gross bookings. An AI agent does not. The cost of this fragmentation isn't hypothetical. Each new AI agent deployment can trigger another round of bespoke data preparation — custom integrations and transformation pipelines just to make data usable, let alone agent-ready. This approach doesn't scale. Why agents struggle without a semantic layer To produce a trustworthy answer, an AI agent needs: (1) **data access** to reach relevant sources, (2) **semantic context** to understand what the data _means_ (business definitions, relationships, hierarchies), and (3) **trust signals** like lineage, permissions, and freshness metadata. Traditional platforms provide the first but rarely the second or third — leaving agents to infer meaning from column names and table structures. This is fragile at best and misleading at worst. Figure 1: Without a shared semantic layer, AI agents must interpret raw, disconnected data across multiple systems — often leading to inconsistent or incomplete results. From storage to intelligence: What must change The fix isn't another ETL pipeline or another data integration tool. The fix is a fundamental shift in what we expect from a data platform. A storage platform asks: "Where is the data, and how do I access it?" An intelligence platform asks: "What does the data mean, who can use it, and how can an agent reason over it?" This shift requires four foundational pillars: Pillar 1: Unified data access OneLake, the data lake built into Microsoft Fabric, provides a single logical namespace across an organization. Whether data originates in a Fabric lakehouse, a warehouse, or an external storage account, OneLake makes it accessible through one interface — using shortcuts and mirroring rather than requiring data migration. This respects existing investments while reducing fragmentation. 
Pillar 2: Shared semantic layer Semantic models in Microsoft Fabric define business measures, table relationships, human-readable field descriptions, and row-level security. When an agent queries a semantic model instead of raw tables, it gets _answers_ — like `Total Revenue = $42.3M for North America in Q3` — not raw result sets requiring interpretation and aggregation. Before vs After: What changes for an agent? Without semantic layer: Queries raw tables Infers business meaning Risk of incorrect aggregation With semantic layer: Queries `[Total Revenue]` Uses business-defined logic Gets consistent, governed results Pillar 3: Context enrichment Microsoft Graph adds organizational signals — people and roles, activity patterns, and permissions — helping agents produce responses that are not just accurate, but _relevant_ and _appropriately scoped_ to the person asking. Pillar 4: Agent-ready APIs Data Agents in Microsoft Fabric (currently in preview) provide a natural-language interface to semantic models and lakehouses. Instead of generating SQL, an AI agent can ask: "What was Q3 revenue by region?" and receive a structured, sourced response. This is the critical difference: the platform provides structured context and business logic, helping reduce the reasoning burden on the agent. Figure 2: An intelligence platform adds semantic context, trust signals, and agent-ready APIs on top of unified data access — enabling AI agents to combine structured data, business definitions, and relationships to produce more consistent responses. Microsoft Fabric as the intelligence layer Microsoft Fabric is often described as a unified analytics platform. That description is accurate but incomplete. In the context of AI agents, Fabric's role is better understood as an **intelligence layer** — a platform that doesn't just store and process data, but _makes data understandable_ to autonomous systems. Let's look at each capability through the lens of agent readiness. OneLake: One namespace, many sources OneLake provides a single logical namespace backed by Azure Data Lake Storage Gen2. For AI agents, this means one authentication context, one discovery mechanism, and one governance surface. Key capabilities: **shortcuts** (reference external data without copying), **mirroring** (replicate from Azure SQL, Cosmos DB, or Snowflake), and a **unified security model**. For more on OneLake architecture, see [OneLake documentation on Microsoft Learn](https://learn.microsoft.com/fabric/onelake/onelake-overview). 
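To make the "one namespace" point concrete, here is a minimal sketch of reading a file from OneLake through its ADLS Gen2-compatible endpoint. It assumes the azure-identity and azure-storage-file-datalake packages, and the workspace, lakehouse, and file names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# OneLake exposes an ADLS Gen2-compatible endpoint; the workspace, lakehouse,
# and file path below are placeholders for your own items.
service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

# File system = Fabric workspace; path = <item>.Lakehouse/Files/<relative path>
filesystem = service.get_file_system_client(file_system="Contoso Analytics Workspace")
file_client = filesystem.get_file_client("Sales.Lakehouse/Files/raw/q3_orders.csv")

data = file_client.download_file().readall()
print(f"Downloaded {len(data)} bytes from OneLake")
```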
Semantic models: Business logic that agents can understand Semantic models (built on the Analysis Services engine) transform raw tables into business concepts: Raw Table Column Semantic Model Measure `fact_sales.amount` `[Total Revenue]` — Sum of net sales after returns `fact_sales.amount / dim_product.cost` `[Gross Margin %]` — Revenue minus COGS as a percentage `fact_sales.qty` YoY comparison `[YoY Growth %]` — Year-over-year quantity growth Code Snippet 1 — Querying a Fabric Semantic Model with Semantic Link (Python) import sempy.fabric as fabric # Query business-defined measures — no need to know underlying table schemas dax_query = """ EVALUATE SUMMARIZECOLUMNS( 'Geography'[Region], 'Calendar'[FiscalQuarter], "Total Revenue", [Total Revenue], "YoY Growth %", [YoY Growth %] ) """ result_df = fabric.evaluate_dax( dataset="Contoso Sales Analytics", workspace="Contoso Analytics Workspace", dax_string=dax_query ) print(result_df.head()) # NOTE: Output shown is illustrative and based on the semantic model definition # Output (illustrative): # Region FiscalQuarter Total Revenue YoY Growth % # North America Q3 FY2026 42300000 8.2 # Europe Q3 FY2026 31500000 5.7 Key takeaway: The agent doesn’t need to know that revenue is in `fact_sales.amount` or that fiscal quarters don’t align with calendar quarters. The semantic model handles all of this. Code Snippet 2 — Discovering Available Models and Measures (Python) Before an agent can query, it needs to _discover_ what data is available. Semantic Link provides programmatic access to model metadata — enabling agents to find relevant measures without hardcoded knowledge. import sempy.fabric as fabric # Discover available semantic models in the workspace datasets = fabric.list_datasets(workspace="Contoso Analytics Workspace") print(datasets[["Dataset Name", "Description"]]) # NOTE: Output shown is illustrative and based on the semantic model definition # Output (illustrative): # Dataset Name Description # Contoso Sales Analytics Revenue, margins, and growth metrics # Contoso HR Analytics Headcount, attrition, and hiring pipeline # Contoso Supply Chain Inventory, logistics, and supplier data # Inspect available measures — these are the business-defined metrics an agent can query measures = fabric.list_measures( dataset="Contoso Sales Analytics", workspace="Contoso Analytics Workspace" ) print(measures[["Table Name", "Measure Name", "Description"]]) # Output (illustrative): # Table Name Measure Name Description # Sales Total Revenue Sum of net sales after returns # Sales Gross Margin % Revenue minus COGS as a percentage # Sales YoY Growth % Year-over-year quantity growth Key takeaway: An agent can programmatically discover which semantic models exist and what measures they expose — turning the platform into a self-describing data catalog that agents can navigate autonomously. For more on Semantic Link, see the Semantic Link documentation on Microsoft Learn. Data Agents: Natural-language access for AI (preview) Note: Fabric Data Agents are currently in preview. See [Microsoft preview terms](https://learn.microsoft.com/legal/microsoft-fabric-preview) for details. A Data Agent wraps a semantic model and exposes it as a natural-language-queryable endpoint. An AI Foundry agent can register a Fabric Data Agent as a tool — when it needs data, it calls the Data Agent like any other tool. Important: In production scenarios, use managed identities or Microsoft Entra ID authentication. 
Always follow the [principle of least privilege](https://learn.microsoft.com/entra/identity-platform/secure-least-privileged-access) when configuring agent access.

Microsoft Graph: Organizational context

Microsoft Graph adds the final layer: who is asking (role-appropriate detail), what’s relevant (trending datasets), and who should review (data stewards). Fabric’s integration with Graph brings these signals into the data platform so agents produce contextually appropriate responses.

Tying it together: Azure AI Foundry + Microsoft Fabric

The real power of the intelligence platform concept emerges when you see how Azure AI Foundry and Microsoft Fabric are designed to work together.

The integration pattern

Azure AI Foundry provides the orchestration layer (conversations, tool selection, safety, response generation). Microsoft Fabric provides the data intelligence layer (data access, semantic context, structured query resolution). The integration follows a tool-calling pattern:

1. User prompt → End user asks a question through an AI Foundry-powered application.
2. Tool call → The agent selects the appropriate Fabric Data Agent and sends a natural-language query.
3. Semantic resolution → The Data Agent translates the query into DAX against the semantic model and executes it via OneLake.
4. Structured response → Results flow back through the stack, with each layer adding context (business definitions, permissions verification, data lineage).
5. User response → The AI Foundry agent presents a grounded, sourced answer to the user.

Why this matters

- No custom ETL for agents — Agents query the intelligence platform directly
- No prompt-stuffing — The semantic model provides business context at query time
- No trust gap — Governed semantic models enforce row-level security and lineage
- No one-off integrations — Multiple agents reuse the same Data Agents

Code Snippet 3 — Azure AI Foundry Agent with Fabric Data Agent Tool (Python)

The following example shows how an Azure AI Foundry agent registers a Fabric Data Agent as a tool and uses it to answer a business question. The agent handles tool selection, query routing, and response grounding automatically.

```python
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import FabricTool
from azure.identity import DefaultAzureCredential

# Connect to Azure AI Foundry project
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<your-ai-foundry-connection-string>"
)

# Register a Fabric Data Agent as a grounding tool
# The connection references a Fabric workspace with semantic models
fabric_tool = FabricTool(connection_id="<fabric-connection-id>")

# Create an agent that uses the Fabric Data Agent for data queries
agent = project_client.agents.create_agent(
    model="gpt-4o",
    name="Contoso Revenue Analyst",
    instructions="""You are a business analytics assistant for Contoso.
    Use the Fabric Data Agent tool to answer questions about revenue, margins, and growth.
    Always cite the source semantic model.""",
    tools=fabric_tool.definitions
)

# Start a conversation
thread = project_client.agents.create_thread()
message = project_client.agents.create_message(
    thread_id=thread.id,
    role="user",
    content="What was Q3 revenue by region, and which region grew fastest?"
)

# The agent automatically calls the Fabric Data Agent tool,
# queries the semantic model, and returns a grounded response
run = project_client.agents.create_and_process_run(
    thread_id=thread.id,
    agent_id=agent.id
)

# Retrieve the agent's response
messages = project_client.agents.list_messages(thread_id=thread.id)
print(messages.data[0].content[0].text.value)

# NOTE: Output shown is illustrative and based on the semantic model definition
# Output (illustrative):
# "Based on the Contoso Sales Analytics model, Q3 FY2026 revenue by region:
#  - North America: $42.3M (+8.2% YoY)
#  - Europe: $31.5M (+5.7% YoY)
#  - Asia Pacific: $18.9M (+12.1% YoY) — fastest growing
#  Source: Contoso Sales Analytics semantic model, OneLake"
```

Key takeaway: The AI Foundry agent never writes SQL or DAX. It calls the Fabric Data Agent as a tool, which resolves the query against the semantic model. The response comes back grounded with source attribution — matching the five-step integration pattern described above.

Figure 3: Each layer adds context — semantic models provide business definitions, Graph adds permissions awareness, and Data Agents provide the natural-language interface.

Getting started: Practical next steps

You don't need to redesign your entire data platform to begin this shift. Start with one high-value domain and expand incrementally.

Step 1: Consolidate data access through OneLake
Create OneLake shortcuts to your most critical data sources — core business metrics, customer data, financial records. No migration needed. [Create OneLake shortcuts](https://learn.microsoft.com/fabric/onelake/create-onelake-shortcut)

Step 2: Build semantic models with business definitions
For each major domain (sales, finance, operations), create a semantic model with key measures, table relationships, human-readable descriptions, and row-level security. [Create semantic models in Microsoft Fabric](https://learn.microsoft.com/fabric/data-warehouse/semantic-models)

Step 3: Enable Data Agents (preview)
Expose your semantic models as natural-language endpoints. Start with a single domain to validate the pattern. Note: Review the [preview terms](https://learn.microsoft.com/legal/microsoft-fabric-preview) and plan for API changes. [Fabric Data Agents overview](https://learn.microsoft.com/fabric/data-science/concept-data-agent)

Step 4: Connect Azure AI Foundry agents
Register Data Agents as tools in your AI Foundry agent configuration. Azure AI Foundry documentation

Conclusion: The bottleneck isn't the model — it's the platform

Models can reason, plan, and hold multi-turn conversations. But in the enterprise, the bottleneck for effective AI agents is the data platform underneath. Agents can’t reason over data they can’t find, apply business logic that isn’t encoded, respect permissions that aren’t enforced, or cite sources without lineage.

The shift from storage to intelligence requires unified data access, a shared semantic layer, organizational context, and agent-ready APIs. Microsoft Fabric provides these capabilities, and its integration with Azure AI Foundry makes this intelligence layer accessible to AI agents.

Disclaimer: Some features described in this post, including Fabric Data Agents, are currently in preview. Preview features may change before general availability, and their availability, functionality, and pricing may differ from the final release. See [Microsoft preview terms](https://learn.microsoft.com/legal/microsoft-fabric-preview) for details.

Building Your First Local RAG Application with Foundry Local
A developer's guide to building an offline, mobile-responsive AI support agent using Retrieval-Augmented Generation, the Foundry Local SDK, and JavaScript. Imagine you are a gas field engineer standing beside a pipeline in a remote location. There is no Wi-Fi, no mobile signal, and you need a safety procedure right now. What do you do? This is the exact problem that inspired this project: a fully offline RAG-powered support agent that runs entirely on your machine. No cloud. No API keys. No outbound network calls. Just a local language model, a local vector store, and your own documents, all accessible from a browser on any device. In this post, you will learn how it works, how to build your own, and the key architectural decisions behind it. If you have ever wanted to build an AI application that runs locally and answers questions grounded in your own data, this is the place to start. The finished application: a browser-based AI support agent that runs entirely on your machine. What Is Retrieval-Augmented Generation? Retrieval-Augmented Generation (RAG) is a pattern that makes AI models genuinely useful for domain-specific tasks. Rather than hoping the model "knows" the answer from its training data, you: Retrieve relevant chunks from your own documents using a vector store Augment the model's prompt with those chunks as context Generate a response grounded in your actual data The result is fewer hallucinations, traceable answers with source attribution, and an AI that works with your content rather than relying on general knowledge. If you are building internal tools, customer support bots, field manuals, or knowledge bases, RAG is the pattern you want. RAG vs CAG: Understanding the Trade-offs If you have explored AI application patterns before, you have likely encountered Context-Augmented Generation (CAG). Both RAG and CAG solve the same core problem: grounding an AI model's answers in your own content. They take different approaches, and each has genuine strengths and limitations. RAG (Retrieval-Augmented Generation) How it works: Documents are split into chunks, vectorised, and stored in a database. At query time, the most relevant chunks are retrieved and injected into the prompt. Strengths: Scales to thousands or millions of documents Fine-grained retrieval at chunk level with source attribution Documents can be added or updated dynamically without restarting Token-efficient: only relevant chunks are sent to the model Supports runtime document upload via the web UI Limitations: More complex architecture: requires a vector store and chunking strategy Retrieval quality depends on chunking parameters and scoring method May miss relevant content if the retrieval step does not surface it CAG (Context-Augmented Generation) How it works: All documents are loaded at startup. The most relevant ones are selected per query using keyword scoring and injected into the prompt. Strengths: Drastically simpler architecture with no vector database or embeddings All information is always available to the model Minimal dependencies and easy to set up Near-instant document selection Limitations: Constrained by the model's context window size Best suited to small, curated document sets (tens of documents) Adding documents requires an application restart Want to compare these patterns hands-on? There is a CAG-based implementation of the same gas field scenario using whole-document context injection. Clone both repositories, run them side by side, and see how the architectures differ in practice. 
When Should You Choose Which? Consideration Choose RAG Choose CAG Document count Hundreds or thousands Tens of documents Document updates Frequent or dynamic (runtime upload) Infrequent (restart to reload) Source attribution Per-chunk with relevance scores Per-document Setup complexity Moderate (ingestion step required) Minimal Query precision Better for large or diverse collections Good for keyword-matchable content Infrastructure SQLite vector store (single file) None beyond the runtime For the sample application in this post (20 gas engineering procedure documents with runtime upload), RAG is the clear winner. If your document set is small and static, CAG may be simpler. Both patterns run fully offline using Foundry Local. Foundry Local: Your On-Device AI Runtime Foundry Local is a lightweight runtime from Microsoft that downloads, manages, and serves language models entirely on your device. No cloud account, no API keys, no outbound network calls (after the initial model download). What makes it particularly useful for developers: No GPU required: runs on CPU or NPU, making it accessible on standard laptops and desktops Native SDK bindings: in-process inference via the foundry-local-sdk npm package, with no HTTP round-trips to a local server Automatic model management: downloads, caches, and loads models automatically Hardware-optimised variant selection: the SDK picks the best variant for your hardware (GPU, NPU, or CPU) Real-time progress callbacks: ideal for building loading UIs that show download and initialisation progress The integration code is refreshingly minimal: import { FoundryLocalManager } from "foundry-local-sdk"; // Create a manager and discover models via the catalogue const manager = FoundryLocalManager.create({ appName: "gas-field-local-rag" }); const model = await manager.catalog.getModel("phi-3.5-mini"); // Download if not cached, then load into memory if (!model.isCached) { await model.download((progress) => { console.log(`Download: ${Math.round(progress * 100)}%`); }); } await model.load(); // Create a chat client for direct in-process inference const chatClient = model.createChatClient(); const response = await chatClient.completeChat([ { role: "system", content: "You are a helpful assistant." }, { role: "user", content: "How do I detect a gas leak?" } ]); That is it. No server configuration, no authentication tokens, no cloud provisioning. The model runs in the same process as your application. The Technology Stack The sample application is deliberately simple. No frameworks, no build steps, no Docker: Layer Technology Purpose AI Model Foundry Local + Phi-3.5 Mini Runs locally via native SDK bindings, no GPU required Back end Node.js + Express Lightweight HTTP server, everyone knows it Vector Store SQLite (via better-sqlite3 ) Zero infrastructure, single file on disc Retrieval TF-IDF + cosine similarity No embedding model required, fully offline Front end Single HTML file with inline CSS No build step, mobile-responsive, field-ready The total dependency footprint is three npm packages: express , foundry-local-sdk , and better-sqlite3 . Architecture Overview The five-layer architecture, all running on a single machine. 
The system has five layers, all running on a single machine: Client layer: a single HTML file served by Express, with quick-action buttons and a responsive chat interface Server layer: Express.js starts immediately and serves the UI plus SSE status and chat endpoints RAG pipeline: the chat engine orchestrates retrieval and generation; the chunker handles TF-IDF vectorisation; the prompts module provides safety-first system instructions Data layer: SQLite stores document chunks and their TF-IDF vectors; documents live as .md files in the docs/ folder AI layer: Foundry Local runs Phi-3.5 Mini on CPU or NPU via native SDK bindings Building the Solution Step by Step Prerequisites You need two things installed on your machine: Node.js 20 or later: download from nodejs.org Foundry Local: Microsoft's on-device AI runtime: winget install Microsoft.FoundryLocal The SDK will automatically download the Phi-3.5 Mini model (approximately 2 GB) the first time you run the application. Getting the Code Running # Clone the repository git clone https://github.com/leestott/local-rag.git cd local-rag # Install dependencies npm install # Ingest the 20 gas engineering documents into the vector store npm run ingest # Start the server npm start Open http://127.0.0.1:3000 in your browser. You will see the status indicator whilst the model loads. Once the model is ready, the status changes to "Offline Ready" and you can start chatting. Desktop view Mobile view How the RAG Pipeline Works Let us trace what happens when a user asks: "How do I detect a gas leak?" The query flow from browser to model and back. 1 Documents are ingested and indexed When you run npm run ingest , every .md file in the docs/ folder is read, parsed (with optional YAML front-matter for title, category, and ID), split into overlapping chunks of approximately 200 tokens, and stored in SQLite with TF-IDF vectors. 2 Model is loaded via the SDK The Foundry Local SDK discovers the model in the local catalogue and loads it into memory. If the model is not already cached, it downloads it first (with progress streamed to the browser via SSE). 3 User sends a question The question arrives at the Express server. The chat engine converts it into a TF-IDF vector, uses an inverted index to find candidate chunks, and scores them using cosine similarity. The top 3 chunks are returned in under 1 ms. 4 Prompt is constructed The engine builds a messages array containing: the system prompt (with safety-first instructions), the retrieved chunks as context, the conversation history, and the user's question. 5 Model generates a grounded response The prompt is sent to the locally loaded model via the Foundry Local SDK's native chat client. The response streams back token by token through Server-Sent Events to the browser. Source references with relevance scores are included. A response with safety warnings and step-by-step guidance The sources panel shows which chunks were used and their relevance Key Code Walkthrough The Vector Store (TF-IDF + SQLite) The vector store uses SQLite to persist document chunks alongside their TF-IDF vectors. 
At query time, an inverted index finds candidate chunks that share terms with the query, then cosine similarity ranks them: // src/vectorStore.js search(query, topK = 5) { const queryTf = termFrequency(query); this._ensureCache(); // Build in-memory cache on first access // Use inverted index to find candidates sharing at least one term const candidateIndices = new Set(); for (const term of queryTf.keys()) { const indices = this._invertedIndex.get(term); if (indices) { for (const idx of indices) candidateIndices.add(idx); } } // Score only candidates, not all rows const scored = []; for (const idx of candidateIndices) { const row = this._rowCache[idx]; const score = cosineSimilarity(queryTf, row.tf); if (score > 0) scored.push({ ...row, score }); } scored.sort((a, b) => b.score - a.score); return scored.slice(0, topK); } The inverted index, in-memory row cache, and prepared SQL statements bring retrieval time to sub-millisecond for typical query loads. Why TF-IDF Instead of Embeddings? Most RAG tutorials use embedding models for retrieval. This project uses TF-IDF because: Fully offline: no embedding model to download or run Zero latency: vectorisation is instantaneous (it is just maths on word frequencies) Good enough: for 20 domain-specific documents, TF-IDF retrieves the right chunks reliably Transparent: you can inspect the vocabulary and weights, unlike neural embeddings For larger collections or when semantic similarity matters more than keyword overlap, you would swap in an embedding model. For this use case, TF-IDF keeps the stack simple and dependency-free. The System Prompt For safety-critical domains, the system prompt is engineered to prioritise safety, prevent hallucination, and enforce structured responses: // src/prompts.js export const SYSTEM_PROMPT = `You are a local, offline support agent for gas field inspection and maintenance engineers. Behaviour Rules: - Always prioritise safety. If a procedure involves risk, explicitly call it out. - Do not hallucinate procedures, measurements, or tolerances. - If the answer is not in the provided context, say: "This information is not available in the local knowledge base." Response Format: - Summary (1-2 lines) - Safety Warnings (if applicable) - Step-by-step Guidance - Reference (document name + section)`; This pattern is transferable to any safety-critical domain: medical devices, electrical work, aviation maintenance, or chemical handling. Runtime Document Upload Unlike the CAG approach, RAG supports adding documents without restarting the server. Click the upload button to add new .md or .txt files. They are chunked, vectorised, and indexed immediately. The upload modal with the complete list of indexed documents. Adapting This for Your Own Domain The sample project is designed to be forked and adapted. Here is how to make it yours in four steps: 1. Replace the documents Delete the gas engineering documents in docs/ and add your own markdown files. The ingestion pipeline handles any markdown content with optional YAML front-matter: --- title: Troubleshooting Widget Errors category: Support id: KB-001 --- # Troubleshooting Widget Errors ...your content here... 2. Edit the system prompt Open src/prompts.js and rewrite the system prompt for your domain. Keep the structure (summary, safety, steps, reference) and update the language to match your users' expectations. 3. 
Tune the retrieval In src/config.js : chunkSize: 200 : smaller chunks give more precise retrieval, less context per chunk chunkOverlap: 25 : prevents information falling between chunks topK: 3 : how many chunks to retrieve per query (more gives more context but slower generation) 4. Swap the model Change config.model in src/config.js to any model available in the Foundry Local catalogue. Smaller models give faster responses on constrained devices; larger models give better quality. Building a Field-Ready UI The front end is a single HTML file with inline CSS. No React, no build tooling, no bundler. This keeps the project accessible to beginners and easy to deploy. Design decisions that matter for field use: Dark, high-contrast theme with 18px base font size for readability in bright sunlight Large touch targets (minimum 44px) for operation with gloves or PPE Quick-action buttons that wrap on mobile so all options are visible without scrolling Responsive layout that works from 320px to 1920px+ screen widths Streaming responses via SSE, so the user sees tokens arriving in real time The mobile chat experience, optimised for field use. Testing The project includes unit tests using the built-in Node.js test runner, with no extra test framework needed: # Run all tests npm test Tests cover the chunker, vector store, configuration, and server endpoints. Use them as a starting point when you adapt the project for your own domain. Ideas for Extending the Project Once you have the basics running, there are plenty of directions to explore: Embedding-based retrieval: use a local embedding model for better semantic matching on diverse queries Conversation memory: persist chat history across sessions using local storage or a lightweight database Multi-modal support: add image-based queries (photographing a fault code, for example) PWA packaging: make it installable as a standalone offline application on mobile devices Hybrid retrieval: combine TF-IDF keyword search with semantic embeddings for best results Try the CAG approach: compare with the local-cag sample to see which pattern suits your use case Ready to Build Your Own? Clone the RAG sample, swap in your own documents, and have an offline AI agent running in minutes. Or compare it with the CAG approach to see which pattern suits your use case best. Get the RAG Sample Get the CAG Sample Summary Building a local RAG application does not require a PhD in machine learning or a cloud budget. With Foundry Local, Node.js, and SQLite, you can create a fully offline, mobile-responsive AI agent that answers questions grounded in your own documents. The key takeaways: RAG is ideal for scalable, dynamic document sets where you need fine-grained retrieval with source attribution. Documents can be added at runtime without restarting. CAG is simpler when you have a small, stable set of documents that fit in the context window. See the local-cag sample to compare. Foundry Local makes on-device AI accessible: native SDK bindings, in-process inference, automatic model selection, and no GPU required. TF-IDF + SQLite is a viable vector store for small-to-medium collections, with sub-millisecond retrieval thanks to inverted indexing and in-memory caching. Start simple, iterate outwards. Begin with RAG and a handful of documents. If your needs are simpler, try CAG. Both patterns run entirely offline. Clone the repository, swap in your own documents, and start building. The best way to learn is to get your hands on the code. 
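If you do adapt the project, the tuning options described under "Adapting This for Your Own Domain" reduce to a handful of values. An illustrative src/config.js using the defaults mentioned above; the chunkSize, chunkOverlap, topK, and model names come from this post, while docsDir and the overall shape are assumptions about the repository layout.
// src/config.js - illustrative values; tune for your own documents.
export const config = {
  model: "phi-3.5-mini", // any alias from the Foundry Local catalogue
  chunkSize: 200,        // smaller chunks: more precise retrieval, less context each
  chunkOverlap: 25,      // stops information falling between chunk boundaries
  topK: 3,               // chunks per query: more context, but slower generation
  docsDir: "./docs",     // assumed location of your markdown knowledge base
};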
This project is open source under the MIT licence. It is a scenario sample for learning and experimentation, not production medical or safety advice. local-rag on GitHub · local-cag on GitHub · Foundry Local
Getting Started with Foundry Local: A Student Guide to the Microsoft Foundry Local Lab
If you want to start building AI applications on your own machine, the Microsoft Foundry Local Lab is one of the most useful places to begin. It is a practical workshop that takes you from first-time setup through to agents, retrieval, evaluation, speech transcription, tool calling, and a browser-based interface. The material is hands-on, cross-language, and designed to show how modern AI apps can run locally rather than depending on a cloud service for every step. This blog post is aimed at students, self-taught developers, and anyone learning how AI applications are put together in practice. Instead of treating large language models as a black box, the lab shows you how to install and manage local models, connect to them with code, structure tasks into workflows, and test whether the results are actually good enough. If you have been looking for a learning path that feels more like building real software and less like copying isolated snippets, this workshop is a strong starting point. What Is Foundry Local? Foundry Local is a local runtime for downloading, managing, and serving AI models on your own hardware. It exposes an OpenAI-compatible interface, which means you can work with familiar SDK patterns while keeping execution on your device. For learners, that matters for three reasons. First, it lowers the barrier to experimentation because you can run projects without setting up a cloud account for every test. Second, it helps you understand the moving parts behind AI applications, including model lifecycle, local inference, and application architecture. Third, it encourages privacy-aware development because the examples are designed to keep data on the machine wherever possible. The Foundry Local Lab uses that local-first approach to teach the full journey from simple prompts to multi-agent systems. It includes examples in Python, JavaScript, and C#, so you can follow the language that fits your course, your existing skills, or the platform you want to build on. Why This Lab Works Well for Learners A lot of AI tutorials stop at the moment a model replies to a prompt. That is useful for a first demo, but it does not teach you how to build a proper application. The Foundry Local Lab goes further. It is organised as a sequence of parts, each one adding a new idea and giving you working code to explore. You do not just ask a model to respond. You learn how to manage the service, choose a language SDK, construct retrieval pipelines, build agents, evaluate outputs, and expose the result through a usable interface. That sequence is especially helpful for students because the parts build on each other. Early labs focus on confidence and setup. Middle labs focus on architecture and patterns. Later labs move into more advanced ideas that are common in real projects, such as tool calling, evaluation, and custom model packaging. By the end, you have seen not just what a local AI app looks like, but how its different layers fit together. Before You Start The workshop expects a reasonably modern machine and at least one programming language environment. The core prerequisites are straightforward: install Foundry Local, clone the repository, and choose whether you want to work in Python, JavaScript, or C#. You do not need to master all three. In fact, most learners will get more value by picking one language first, completing the full path in that language, and only then comparing how the same patterns look elsewhere. If you are new to AI development, do not be put off by the number of parts. 
The early sections are accessible, and the later ones become much easier once you have completed the foundations. Think of the lab as a structured course rather than a single tutorial. What You Learn in Each Lab https://github.com/microsoft-foundry/foundry-local-lab Part 1: Getting Started with Foundry Local The first part introduces the basics of Foundry Local and gets you up and running. You learn how to install the CLI, inspect the model catalogue, download a model, and run it locally. This part also introduces practical details such as model aliases and dynamic service ports, which are small but important pieces of real development work. For students, the value of this part is confidence. You prove that local inference works on your machine, you see how the service behaves, and you learn the operational basics before writing any application code. By the end of Part 1, you should understand what Foundry Local does, how to start it, and how local model serving fits into an application workflow. Part 2: Foundry Local SDK Deep Dive Once the CLI makes sense, the workshop moves into the SDK. This part explains why application developers often use the SDK instead of relying only on terminal commands. You learn how to manage the service programmatically, browse available models, control model download and loading, and understand model metadata such as aliases and hardware-aware selection. This is where learners start to move from using a tool to building with a platform. You begin to see the difference between running a model manually and integrating it into software. By the end of this section, you should understand the API surface you will use in your own projects and know how to bootstrap the SDK in Python, JavaScript, or C#. Part 3: SDKs and APIs Part 3 turns the SDK concepts into a working chat application. You connect code to the local inference server and use the OpenAI-compatible API for streaming chat completions. The lab includes examples in all three supported languages, which makes it especially useful if you are comparing ecosystems or learning how the same idea is expressed through different syntax and libraries. The key learning outcome here is not just that you can get a response from a model. It is that you understand the boundary between your application and the local model service. You learn how messages are structured, how streaming works, and how to write the sort of integration code that becomes the foundation for every later lab. Part 4: Retrieval-Augmented Generation This is where the workshop starts to feel like modern AI engineering rather than basic prompting. In the retrieval-augmented generation lab, you build a simple RAG pipeline that grounds answers in supplied data. You work with an in-memory knowledge base, apply retrieval logic, score matches, and compose prompts that include grounded context. For learners, this part is important because it demonstrates a core truth of AI app development: a model on its own is often not enough. Useful applications usually need access to documents, notes, or structured information. By the end of Part 4, you understand why retrieval matters, how to pass retrieved context into a prompt, and how a pipeline can make answers more relevant and reliable. Part 5: Building AI Agents Part 5 introduces the concept of an agent. Instead of a one-off prompt and response, you begin to define behaviour through system instructions, roles, and conversation state. 
The lab uses the ChatAgent pattern and the Microsoft Agent Framework to show how an agent can maintain a purpose, respond with a persona, and return structured output such as JSON. This part helps learners understand the difference between a raw model call and a reusable application component. You learn how to design instructions that shape behaviour, how multi-turn interaction differs from single prompts, and why structured output matters when an AI component has to work inside a broader system. Part 6: Multi-Agent Workflows Once a single agent makes sense, the workshop expands the idea into a multi-agent workflow. The example pipeline uses roles such as researcher, writer, and editor, with outputs passed from one stage to the next. You explore sequential orchestration, shared configuration, and feedback loops between specialised components. For students, this lab is a very clear introduction to decomposition. Instead of asking one model to do everything at once, you break a task into smaller responsibilities. That pattern is useful well beyond AI. By the end of Part 6, you should understand why teams build multi-agent systems, how hand-offs are structured, and what trade-offs appear when more components are added to a workflow. Part 7: Zava Creative Writer Capstone Application The Zava Creative Writer is the capstone project that brings the earlier ideas together into a more production-style application. It uses multiple specialised agents, structured JSON hand-offs, product catalogue search, streaming output, and evaluation-style feedback loops. Rather than showing an isolated feature, this part shows how separate patterns combine into a complete system. This is one of the most valuable parts of the workshop for learner developers because it narrows the gap between tutorial code and real application design. You can see how orchestration, agent roles, and practical interfaces fit together. By the end of Part 7, you should be able to recognise the architecture of a serious local AI app and understand how the earlier labs support it. Part 8: Evaluation-Led Development Many beginner AI projects stop once the output looks good once or twice. This lab teaches a much stronger habit: evaluation-led development. You work with golden datasets, rule-based checks, and LLM-as-judge scoring to compare prompt or agent variants systematically. The goal is to move from anecdotal testing to repeatable assessment. This matters enormously for students because evaluation is one of the clearest differences between a classroom demo and dependable software. By the end of Part 8, you should understand how to define success criteria, compare outputs at scale, and use evidence rather than intuition when improving an AI component. Part 9: Voice Transcription with Whisper Part 9 broadens the workshop beyond text generation by introducing speech-to-text with Whisper running locally. You use the Foundry Local SDK to download and load the model, then transcribe local audio files through the compatible API surface. The emphasis is on privacy-first processing, with audio kept on-device. This section is a useful reminder that local AI development is not limited to chatbots. Learners see how a different modality fits into the same ecosystem and how local execution supports sensitive workloads. By the end of this lab, you should understand the transcription flow, the relevant client methods, and how speech features can be integrated into broader applications. 
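As a rough sketch of how that transcription flow can look in JavaScript: the manager pattern is the same one used elsewhere in the lab, but the Whisper alias and the modelInfo.id property are assumptions here, and the audio call simply targets the OpenAI-compatible transcription surface the lab describes.
import fs from "node:fs";
import { FoundryLocalManager } from "foundry-local-sdk";
import { OpenAI } from "openai";

// Download (if needed) and load a local Whisper model.
// "whisper-tiny" is an illustrative alias; check your local catalogue.
const manager = new FoundryLocalManager();
const modelInfo = await manager.init("whisper-tiny");

// Reuse the OpenAI-compatible surface exposed by the local service.
const openai = new OpenAI({ baseURL: manager.endpoint, apiKey: manager.apiKey });

// Transcribe a local audio file; nothing leaves the machine.
const result = await openai.audio.transcriptions.create({
  model: modelInfo.id, // assumed property; adjust to your SDK version
  file: fs.createReadStream("./audio/lecture-notes.wav"),
});

console.log(result.text);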
Part 10: Using Custom or Hugging Face Models After learning the standard path, the workshop shows how to work with custom or Hugging Face models. This includes compiling models into optimised ONNX format with ONNX Runtime GenAI, choosing hardware-specific options, applying quantisation strategies, creating configuration files, and adding compiled models to the Foundry Local cache. For learner developers, this part opens the door to model engineering rather than simple model consumption. You begin to understand that model choice, optimisation, and packaging affect performance and usability. By the end of Part 10, you should have a clearer picture of how models move from an external source into a runnable local setup and why deployment format matters. Part 11: Tool Calling with Local Models Tool calling is one of the most practical patterns in current AI development, and this lab covers it directly. You define tool schemas, allow the model to request function calls, handle the multi-turn interaction loop, execute the tools locally, and return results back to the model. The examples include practical scenarios such as weather and population tools. This lab teaches learners how to move beyond generation into action. A model is no longer limited to producing text. It can decide when external data or a function is needed and incorporate that result into a useful answer. By the end of Part 11, you should understand the tool-calling flow and how AI systems connect reasoning with deterministic software behaviour. Part 12: Building a Web UI for the Zava Creative Writer Part 12 adds a browser-based front end to the capstone application. You learn how to serve a shared interface from Python, JavaScript, or C#, stream updates to the browser, consume NDJSON with the Fetch API and ReadableStream, and show live agent status as content is produced in real time. This part is especially good for students who want to build portfolio projects. It turns backend orchestration into something visible and interactive. By the end of Part 12, you should understand how to connect a local AI backend to a web interface and how streaming changes the user experience compared with waiting for one final response. Part 13: Workshop Complete The final part is a summary and extension point. It reviews what you have built across the previous sections and suggests ways to continue. Although it is not a new technical lab in the same way as the earlier parts, it plays an important role in learning. It helps you consolidate the architecture, the terminology, and the development patterns you have encountered. For learners, reflection matters. By the end of Part 13, you should be able to describe the full stack of a local AI application, from model management to user interface, and identify which area you want to deepen next. What Students Gain from the Full Workshop Taken together, these labs do more than teach Foundry Local itself. They teach how AI applications are built. You learn operational basics such as model setup and service management. You learn application integration through SDKs and APIs. You learn system design through RAG, agents, multi-agent orchestration, and web interfaces. You learn engineering discipline through evaluation. You also see how text, speech, custom models, and tool calling all fit into one local-first development workflow. That breadth makes the workshop useful in several settings. A student can use it as a self-study path. A lecturer can use it as source material for practical sessions. 
A learner developer can use it to build portfolio pieces and to understand which AI patterns are worth learning next. Because the repository includes Python, JavaScript, and C#, it also works well for comparing how architectural ideas transfer across languages. How to Approach the Lab as a Beginner If you are starting from scratch, the best route is simple. Complete Parts 1 to 3 in your preferred language first. That gives you the essential setup and integration skills. Then move into Parts 4 to 6 to understand how AI application patterns are composed. After that, use Parts 7 and 8 to learn how larger systems and evaluation fit together. Finally, explore Parts 9 to 12 based on your interests, whether that is speech, tooling, model customisation, or front-end work. It is also worth keeping notes as you go. Record what each part adds to your understanding, what code files matter, and what assumptions each example makes. That habit will help you move from following the labs to adapting the patterns in your own projects. Final Thoughts The Microsoft Foundry Local Lab is a strong introduction to local AI development because it treats learners like developers rather than spectators. You install, run, connect, orchestrate, evaluate, and present working systems. That makes it far more valuable than a short demo that only proves a model can answer a question. If you are a student or learner developer who wants to understand how AI applications are really built, this lab gives you a clear path. Start with the basics, pick one language, and work through the parts in order. By the time you finish, you will not just have used Foundry Local. You will have a practical foundation for building local AI applications with far more confidence and much better judgement.
Building an Offline AI Interview Coach with Foundry Local, RAG, and SQLite
How to build a 100% offline, AI-powered interview preparation tool using Microsoft Foundry Local, Retrieval-Augmented Generation, and nothing but JavaScript. Foundry Local 100% Offline RAG + TF-IDF JavaScript / Node.js Contents Introduction What is RAG and Why Offline? Architecture Overview Setting Up Foundry Local Building the RAG Pipeline The Chat Engine Dual Interfaces: Web & CLI Testing Adapting for Your Own Use Case What I Learned Getting Started Introduction Imagine preparing for a job interview with an AI assistant that knows your CV inside and out, understands the job you're applying for, and generates tailored questions, all without ever sending your data to the cloud. That's exactly what Interview Doctor does. Interview Doctor's web UI, a polished, dark-themed interface running entirely on your local machine. In this post, I'll walk you through how I built an interview prep tool as a fully offline JavaScript application using: Foundry Local — Microsoft's on-device AI runtime SQLite — for storing document chunks and TF-IDF vectors RAG (Retrieval-Augmented Generation) — to ground the AI in your actual documents Express.js — for the web server Node.js built-in test runner — for testing with zero extra dependencies No cloud. No API keys. No internet required. Everything runs on your machine. What is RAG and Why Does It Matter? Retrieval-Augmented Generation (RAG) is a pattern that makes AI models dramatically more useful for domain-specific tasks. Instead of relying solely on what a model learned during training (which can be outdated or generic), RAG: Retrieves relevant chunks from your own documents Augments the model's prompt with those chunks as context Generates a response grounded in your actual data For Interview Doctor, this means the AI doesn't just ask generic interview questions, it asks questions specific to your CV, your experience, and the specific job you're applying for. Why Offline RAG? Privacy is the obvious benefit, your CV and job applications never leave your device. But there's more: No API costs — run as many queries as you want No rate limits — iterate rapidly during your prep Works anywhere — on a plane, in a café with bad Wi-Fi, anywhere Consistent performance — no cold starts, no API latency Architecture Overview Complete architecture showing all components and data flow. The application has two interfaces (CLI and Web) that share the same core engine: Document Ingestion — PDFs and markdown files are chunked and indexed Vector Store — SQLite stores chunks with TF-IDF vectors Retrieval — queries are matched against stored chunks using cosine similarity Generation — relevant chunks are injected into the prompt sent to the local LLM Step 1: Setting Up Foundry Local First, install Foundry Local: # Windows winget install Microsoft.FoundryLocal # macOS brew install microsoft/foundrylocal/foundrylocal The JavaScript SDK handles everything else — starting the service, downloading the model, and connecting: import { FoundryLocalManager } from "foundry-local-sdk"; import { OpenAI } from "openai"; const manager = new FoundryLocalManager(); const modelInfo = await manager.init("phi-3.5-mini"); // Foundry Local exposes an OpenAI-compatible API const openai = new OpenAI({ baseURL: manager.endpoint, // Dynamic port, discovered by SDK apiKey: manager.apiKey, }); ⚠️ Key Insight Foundry Local uses a dynamic port never hardcode localhost:5272 . Always use manager.endpoint which is discovered by the SDK at runtime. 
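With the manager and client from Step 1 in place, a quick smoke test confirms the local model responds before any retrieval is wired in. This sketch assumes modelInfo.id is the identifier to pass as the model name; adjust if your SDK version exposes it differently, and treat the prompt as illustrative.
// Smoke test: ask the local model something before adding RAG on top.
const completion = await openai.chat.completions.create({
  model: modelInfo.id, // assumed: the id returned by manager.init("phi-3.5-mini")
  messages: [
    { role: "system", content: "You are a concise interview coach." },
    { role: "user", content: "Give me one warm-up interview question." },
  ],
  temperature: 0.3,
});

console.log(completion.choices[0].message.content);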
Step 2: Building the RAG Pipeline Document Chunking Documents are split into overlapping chunks of ~200 tokens. The overlap ensures important context isn't lost at chunk boundaries: export function chunkText(text, maxTokens = 200, overlapTokens = 25) { const words = text.split(/\s+/).filter(Boolean); if (words.length <= maxTokens) return [text.trim()]; const chunks = []; let start = 0; while (start < words.length) { const end = Math.min(start + maxTokens, words.length); chunks.push(words.slice(start, end).join(" ")); if (end >= words.length) break; start = end - overlapTokens; } return chunks; } Why 200 tokens with 25-token overlap? Small chunks keep retrieved context compact for the model's limited context window. Overlap prevents information loss at boundaries. And it's all pure string operations, no dependencies needed. TF-IDF Vectors Instead of using a separate embedding model (which would consume precious memory alongside the LLM), we use TF-IDF, a classic information retrieval technique: export function termFrequency(text) { const tf = new Map(); const tokens = text .toLowerCase() .replace(/[^a-z0-9\-']/g, " ") .split(/\s+/) .filter((t) => t.length > 1); for (const t of tokens) { tf.set(t, (tf.get(t) || 0) + 1); } return tf; } export function cosineSimilarity(a, b) { let dot = 0, normA = 0, normB = 0; for (const [term, freq] of a) { normA += freq * freq; if (b.has(term)) dot += freq * b.get(term); } for (const [, freq] of b) normB += freq * freq; if (normA === 0 || normB === 0) return 0; return dot / (Math.sqrt(normA) * Math.sqrt(normB)); } Each document chunk becomes a sparse vector of word frequencies. At query time, we compute cosine similarity between the query vector and all stored chunk vectors to find the most relevant matches. SQLite as a Vector Store Chunks and their TF-IDF vectors are stored in SQLite using sql.js (pure JavaScript — no native compilation needed): export class VectorStore { // Created via: const store = await VectorStore.create(dbPath) insert(docId, title, category, chunkIndex, content) { const tf = termFrequency(content); const tfJson = JSON.stringify([...tf]); this.db.run( "INSERT INTO chunks (...) VALUES (?, ?, ?, ?, ?, ?)", [docId, title, category, chunkIndex, content, tfJson] ); this.save(); } search(query, topK = 5) { const queryTf = termFrequency(query); // Score each chunk by cosine similarity, return top-K } } 💡 Why SQLite for Vectors? For a CV plus a few job descriptions (dozens of chunks), brute-force cosine similarity over SQLite rows is near-instant (~1ms). No need for Pinecone, Qdrant, or Chroma — just a single .db file on disk. Step 3: The RAG Chat Engine The chat engine ties retrieval and generation together: async *queryStream(userMessage, history = []) { // 1. Retrieve relevant CV/JD chunks const chunks = this.retrieve(userMessage); const context = this._buildContext(chunks); // 2. Build the prompt with retrieved context const messages = [ { role: "system", content: SYSTEM_PROMPT }, { role: "system", content: `Retrieved context:\n\n${context}` }, ...history, { role: "user", content: userMessage }, ]; // 3. Stream from the local model const stream = await this.openai.chat.completions.create({ model: this.modelId, messages, temperature: 0.3, stream: true, }); // 4. 
Yield chunks as they arrive for await (const chunk of stream) { const content = chunk.choices[0]?.delta?.content; if (content) yield { type: "text", data: content }; } } The flow is straightforward: vectorize the query, retrieve with cosine similarity, build a prompt with context, and stream from the local LLM. The temperature: 0.3 keeps responses focused — important for interview preparation where consistency matters. Step 4: Dual Interfaces — Web & CLI Web UI The web frontend is a single HTML file with inline CSS and JavaScript — no build step, no framework, no React or Vue. It communicates with the Express backend via REST and SSE: File upload via multipart/form-data Streaming chat via Server-Sent Events (SSE) Quick-action buttons for common follow-up queries (coaching tips, gap analysis, mock interview) The setup form with job title, seniority level, and a pasted job description — ready to generate tailored interview questions. CLI The CLI provides the same experience in the terminal with ANSI-coloured output: npm run cli It walks you through uploading your CV, entering the job details, and then generates streaming questions. Follow-up questions work interactively. Both interfaces share the same ChatEngine class, they're thin layers over identical logic. Edge Mode For constrained devices, toggle Edge mode to use a compact system prompt that fits within smaller context windows: Edge mode activated, uses a minimal prompt for devices with limited resources. Step 5: Testing Tests use the Node.js built-in test runner, no Jest, no Mocha, no extra dependencies: import { describe, it } from "node:test"; import assert from "node:assert/strict"; describe("chunkText", () => { it("returns single chunk for short text", () => { const chunks = chunkText("short text", 200, 25); assert.equal(chunks.length, 1); }); it("maintains overlap between chunks", () => { // Verifies overlapping tokens between consecutive chunks }); }); npm test Tests cover the chunker, vector store, config, prompts, and server API contract, all without needing Foundry Local running. Adapting for Your Own Use Case Interview Doctor is a pattern, not just a product. You can adapt it for any domain: What to Change How Domain documents Replace files in docs/ with your content System prompt Edit src/prompts.js Chunk sizes Adjust config.chunkSize and config.chunkOverlap Model Change config.model — run foundry model list UI Modify public/index.html — it's a single file Ideas for Adaptation Customer support bot — ingest your product docs and FAQs Code review assistant — ingest coding standards and best practices Study guide — ingest textbooks and lecture notes Compliance checker — ingest regulatory documents Onboarding assistant — ingest company handbooks and processes What I Learned Offline AI is production-ready. Foundry Local + small models like Phi-3.5 Mini are genuinely useful for focused tasks. You don't need vector databases for small collections. SQLite + TF-IDF is fast, simple, and has zero infrastructure overhead. RAG quality depends on chunking. Getting chunk sizes right for your use case is more impactful than the retrieval algorithm. The OpenAI-compatible API is a game-changer. Switching from cloud to local was mostly just changing the baseURL . Dual interfaces are easy when you share the engine. The CLI and Web UI are thin layers over the same ChatEngine class. ⚡ Performance Notes On a typical laptop (no GPU): ingestion takes under 1 second for ~20 documents, retrieval is ~1ms, and the first LLM token arrives in 2-5 seconds. 
Foundry Local automatically selects the best model variant for your hardware (CUDA GPU, NPU, or CPU). Getting Started git clone https://github.com/leestott/interview-doctor-js.git cd interview-doctor-js npm install npm run ingest npm start # Web UI at http://127.0.0.1:3000 # or npm run cli # Interactive terminal The full source code is on GitHub. Star it, fork it, adapt it — and good luck with your interviews! Resources Foundry Local — Microsoft's on-device AI runtime Foundry Local SDK (npm) — JavaScript SDK Foundry Local GitHub — Source, samples, and documentation Local RAG Reference — Reference RAG implementation Interview Doctor (JavaScript) — This project's source code
Build an Offline Hybrid RAG Stack with ONNX and Foundry Local
If you are building local AI applications, basic retrieval augmented generation is often only the starting point. This sample shows a more practical pattern: combine lexical retrieval, ONNX based semantic embeddings, and a Foundry Local chat model so the assistant stays grounded, remains offline, and degrades cleanly when the semantic path is unavailable. Why this sample is worth studying Many local RAG samples rely on a single retrieval strategy. That is usually enough for a proof of concept, but it breaks down quickly in production. Exact keywords, acronyms, and document codes behave differently from natural language questions and paraphrased requests. This repository keeps the original lexical retrieval path, adds local ONNX embeddings for semantic search, and fuses both signals in a hybrid ranking mode. The generation step runs through Foundry Local, so the entire assistant can remain on device. Lexical mode handles exact terms and structured vocabulary. Semantic mode handles paraphrases and more natural language phrasing. Hybrid mode combines both and is usually the best default. Lexical fallback protects the user experience if the embedding pipeline cannot start. Architectural overview The sample has two main flows: an offline ingestion pipeline and a local query pipeline. The architecture splits cleanly into offline ingestion at the top and runtime query handling at the bottom. Offline ingestion pipeline Read Markdown files from docs/ . Parse front matter and split each document into overlapping chunks. Generate dense embeddings when the ONNX model is available. Store chunks in SQLite with both sparse lexical features and optional dense vectors. Local query pipeline The browser posts a question to the Express API. ChatEngine resolves the requested retrieval mode. VectorStore retrieves lexical, semantic, or hybrid results. The prompt is assembled with the retrieved context and sent to a Foundry Local chat model. The answer is returned with source references and retrieval metadata. The sequence diagram shows the difference between lexical retrieval and hybrid retrieval. In hybrid mode, the query is embedded first, then lexical and semantic scores are fused before prompt assembly. Repository structure and core components The implementation is compact and readable. The main files to understand are listed below. src/config.js : retrieval defaults, paths, and model settings. src/embeddingEngine.js : local ONNX embedding generation through Transformers.js. src/vectorStore.js : SQLite storage plus lexical, semantic, and hybrid ranking. src/chatEngine.js : retrieval mode resolution, prompt assembly, and Foundry Local model execution. src/ingest.js : document ingestion and embedding generation during indexing. src/server.js : REST endpoints, streaming endpoints, upload support, and health reporting. Getting started To run the sample, you need Node.js 20 or newer, Foundry Local, and a local ONNX embedding model. The default model path is models/embeddings/bge-small-en-v1.5 . cd c:\Users\leestott\local-hybrid-retrival-onnx npm install huggingface-cli download BAAI/bge-small-en-v1.5 --local-dir models/embeddings/bge-small-en-v1.5 npm run ingest npm start Ingestion writes the local SQLite database to data/rag.db . If the embedding model is available, each chunk gets a dense vector as well as lexical features. If the embedding model is missing, ingestion still succeeds and the application remains usable in lexical mode. 
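Before the repository's own code walkthrough below, it helps to see how small the semantic leg really is: with normalised embeddings, ranking is just cosine similarity between the query vector and each stored chunk vector. The following is an illustrative sketch, not the repository's exact implementation.
// Cosine similarity between two dense vectors. With normalised embeddings
// (mean-pooled and normalised at generation time), this is effectively a dot product.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return normA && normB ? dot / (Math.sqrt(normA) * Math.sqrt(normB)) : 0;
}

// Rank stored chunks against a query embedding and keep the top K.
function searchSemantic(queryEmbedding, chunks, topK = 3) {
  return chunks
    .map((chunk) => ({ ...chunk, score: cosineSimilarity(queryEmbedding, chunk.embedding) }))
    .filter((chunk) => chunk.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}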
Best practice: local AI applications should treat model files, SQLite data, and native runtime compatibility as part of the deployable system, not as optional developer conveniences. Code walkthrough 1. Retrieval configuration The sample makes its retrieval behaviour explicit in configuration. That is useful for testing and for operator visibility. export const config = { model: "phi-3.5-mini", docsDir: path.join(ROOT, "docs"), dbPath: path.join(ROOT, "data", "rag.db"), chunkSize: 200, chunkOverlap: 25, topK: 3, retrievalMode: process.env.RETRIEVAL_MODE || "hybrid", retrievalModes: ["lexical", "semantic", "hybrid"], fallbackRetrievalMode: "lexical", retrievalWeights: { lexical: 0.45, semantic: 0.55, }, }; Those defaults tell you a lot about the intended operating profile. Chunks are small, the number of returned chunks is low, and the fallback path is explicit. 2. Local ONNX embeddings The embedding engine disables remote model loading and only uses local files. That matters for privacy, repeatability, and air gapped operation. env.allowLocalModels = true; env.allowRemoteModels = false; this.extractor = await pipeline("feature-extraction", resolvedPath, { local_files_only: true, }); const output = await this.extractor(text, { pooling: "mean", normalize: true, }); The mean pooling and normalisation step make the vectors suitable for cosine similarity based ranking. 3. Hybrid storage and ranking in SQLite Instead of adding a separate vector database, the sample stores lexical and semantic representations in the same SQLite table. That keeps the local footprint low and the implementation easy to debug. searchHybrid(query, queryEmbedding, topK = 5, weights = { lexical: 0.45, semantic: 0.55 }) { const lexicalResults = this.searchLexical(query, topK * 3); const semanticResults = this.searchSemantic(queryEmbedding, topK * 3); if (semanticResults.length === 0) { return lexicalResults.slice(0, topK).map((row) => ({ ...row, retrievalMode: "lexical", })); } const fused = [...combined.values()].map((row) => ({ ...row, score: (row.lexicalScore * lexicalWeight) + (row.semanticScore * semanticWeight), })); fused.sort((a, b) => b.score - a.score); return fused.slice(0, topK); } The important point is not just the weighted fusion. It is the fallback behaviour. If semantic retrieval cannot provide results, the user still gets lexical grounding instead of an empty context window. 4. Retrieval mode resolution in ChatEngine ChatEngine keeps the runtime behaviour predictable. It validates the requested mode and falls back to lexical search when semantic retrieval is unavailable. resolveRetrievalMode(requestedMode) { const desiredMode = config.retrievalModes.includes(requestedMode) ? requestedMode : config.retrievalMode; if ((desiredMode === "semantic" || desiredMode === "hybrid") && !this.semanticAvailable) { return config.fallbackRetrievalMode; } return desiredMode; } This is a sensible production design because local runtime failures are common. Missing model files or native dependency mismatches should reduce quality, not crash the entire assistant. 5. Foundry Local model management The sample uses FoundryLocalManager to discover, download, cache, and load the configured chat model. 
const manager = FoundryLocalManager.create({ appName: "gas-field-local-rag" }); const catalog = manager.catalog; this.model = await catalog.getModel(config.model); if (!this.model.isCached) { await this.model.download((progress) => { const pct = Math.round(progress * 100); this._emitStatus("download", `Downloading ${this.modelAlias}... ${pct}%`, progress); }); } await this.model.load(); this.chatClient = this.model.createChatClient(); this.chatClient.settings.temperature = 0.1; This gives the app a better local startup experience. The server can expose a status stream while the model initialises in the background. User experience and screenshots The client is intentionally simple, which makes it useful during evaluation. You can switch retrieval mode, test questions quickly, and inspect the retrieved sources. The landing page exposes retrieval mode directly in the UI. That makes it easy to compare lexical, semantic, and hybrid behaviour during testing. The sources panel shows grounding evidence and retrieval scores, which is useful when validating whether better answers are coming from better retrieval or just model phrasing. Best practices for ONNX RAG and Foundry Local Keep lexical fallback alive. Exact identifiers and runtime failures both make this necessary. Persist sparse and dense features together where possible. It simplifies debugging and operational reasoning. Use small chunks and conservative topK values for local context budgets. Expose health and status endpoints so users can see when the model is still loading or embeddings are unavailable. Test retrieval quality separately from generation quality. Pin and validate native runtime dependencies, especially ONNX Runtime, before tuning prompts. Practical warning: this repository already shows why runtime validation matters. A local app can ingest documents successfully and still fail at model initialisation if the native runtime stack is misaligned. How this compares with RAG and CAG The strongest value in this sample comes from where it sits between a basic local RAG baseline and a curated CAG design. Dimension Classic local RAG This hybrid ONNX RAG sample CAG Context assembly Retrieve chunks at query time, often lexically, then inject them into the prompt. Retrieve chunks at query time with lexical, semantic, or fused scoring, then inject the strongest results into the prompt. Use a prepared or cached context pack instead of fresh retrieval for every request. Main strength Easy to implement and easy to explain. Better recall for paraphrases without giving up exact match behaviour or offline execution. Predictable prompts and low query time overhead. Main weakness Misses synonyms and natural language reformulations. More moving parts, larger local asset footprint, and native runtime compatibility to manage. Coverage depends on curation quality and goes stale more easily. Failure behaviour Weak retrieval leads to weak grounding. Semantic failure can degrade to lexical retrieval if designed properly, which this sample does. Prepared context can be too narrow for new or unexpected questions. Best fit Simple local assistants and proof of concept systems. Offline copilots and technical assistants that need stronger recall across varied phrasing. Stable workflows with tightly bounded, curated knowledge. 
Samples Related samples: - Foundry Local RAG - https://github.com/leestott/local-rag - Foundry Local CAG - https://github.com/leestott/local-cag - Foundry Local hybrid-retrival-onnx - https://github.com/leestott/local-hybrid-retrival-onnx Specific benefits of this hybrid approach over classic RAG It captures paraphrased questions that lexical search would often miss. It still preserves exact match performance for codes, terms, and product names. It gives operators a controlled degradation path when the semantic stack is unavailable. It stays local and inspectable without introducing a separate hosted vector service. Specific differences from CAG CAG shifts effort into context curation before the request. This sample retrieves evidence dynamically at runtime. CAG can be faster for fixed workflows, but it is usually less flexible when the document set changes. This hybrid RAG design is better suited to open-ended knowledge search and growing document collections. What to validate before shipping Measure retrieval quality in each mode using exact term, acronym, and paraphrase queries. Check that sources shown in the UI reflect genuinely distinct evidence, not repeated chunks. Confirm the application remains usable when semantic retrieval is unavailable. Verify ONNX Runtime compatibility on the real target machines, not only on the development laptop. Test model download, cache, and startup behaviour with a clean environment. Final take For developers getting started with ONNX RAG and Foundry Local, this sample is a good technical reference because it demonstrates a realistic local architecture rather than a minimal demo. It shows how to build a grounded assistant that remains offline, supports multiple retrieval modes, and fails gracefully. Compared with classic local RAG, the hybrid design provides better recall and better resilience. Compared with CAG, it remains more flexible for changing document sets and less dependent on pre-curated context packs. If you want a practical starting point for offline grounded AI on developer workstations or edge devices, this is the most balanced pattern in the repository set.
Vectorless Reasoning-Based RAG: A New Approach to Retrieval-Augmented Generation
Introduction Retrieval-Augmented Generation (RAG) has become a widely adopted architecture for building AI applications that combine Large Language Models (LLMs) with external knowledge sources. Traditional RAG pipelines rely heavily on vector embeddings and similarity search to retrieve relevant documents. While this works well for many scenarios, it introduces challenges such as: Requires chunking documents into small segments Important context can be split across chunks Embedding generation and vector databases add infrastructure complexity A new paradigm called Vectorless Reasoning-Based RAG is emerging to address these challenges. One framework enabling this approach is PageIndex, an open-source document indexing system that organizes documents into a hierarchical tree structure and allows Large Language Models (LLMs) to perform reasoning-based retrieval over that structure. Vectorless Reasoning-Based RAG Instead of vectors, this approach uses structured document navigation. User Query ->Document Tree Structure ->LLM Reasoning ->Relevant Nodes Retrieved ->LLM Generates Answer This mimics how humans read documents: Look at the table of contents Identify relevant sections Read the relevant content Answer the question Core features No Vector Database: It relies on document structure and LLM reasoning for retrieval. It does not depend on vector similarity search. No Chunking: Documents are not split into artificial chunks. Instead, they are organized using their natural structure, such as pages and sections. Human-like Retrieval: The system mimics how human experts read documents. It navigates through sections and extracts information from relevant parts. Better Explainability and Traceability: Retrieval is based on reasoning. The results can be traced back to specific pages and sections. This makes the process easier to interpret. 
It avoids opaque and approximate vector search, often called "vibe retrieval." When to Use Vectorless RAG Vectorless RAG works best when: Data is structured or semi-structured Documents have clear metadata Knowledge sources are well organized Queries require reasoning rather than semantic similarity Examples: enterprise knowledge bases internal documentation systems compliance and policy search healthcare documentation financial reporting Implementing Vectorless RAG with Azure AI Foundry Step 1: Install the pageindex package with pip, then create a client: from pageindex import PageIndexClient import pageindex.utils as utils # Get your PageIndex API key from https://dash.pageindex.ai/api-keys PAGEINDEX_API_KEY = "YOUR_PAGEINDEX_API_KEY" pi_client = PageIndexClient(api_key=PAGEINDEX_API_KEY) Step 2: Set up your LLM. Example using Azure OpenAI: from openai import AsyncAzureOpenAI client = AsyncAzureOpenAI( api_key=AZURE_OPENAI_API_KEY, azure_endpoint=AZURE_OPENAI_ENDPOINT, api_version=AZURE_OPENAI_API_VERSION ) async def call_llm(prompt, temperature=0): response = await client.chat.completions.create( model=AZURE_DEPLOYMENT_NAME, messages=[{"role": "user", "content": prompt}], temperature=temperature ) return response.choices[0].message.content.strip() Step 3: Page Tree Generation import os, requests pdf_url = "https://arxiv.org/pdf/2501.12948.pdf" # example PDF URL for tree generation; replace with your own document pdf_path = os.path.join("../data", pdf_url.split('/')[-1]) os.makedirs(os.path.dirname(pdf_path), exist_ok=True) response = requests.get(pdf_url) with open(pdf_path, "wb") as f: f.write(response.content) print(f"Downloaded {pdf_url}") doc_id = pi_client.submit_document(pdf_path)["doc_id"] print('Document Submitted:', doc_id) Step 4: Print the generated PageIndex tree structure if pi_client.is_retrieval_ready(doc_id): tree = pi_client.get_tree(doc_id, node_summary=True)['result'] print('Simplified Tree Structure of the Document:') utils.print_tree(tree) else: print("Processing document, please try again later...") Step 5: Use the LLM for tree search to identify nodes that might contain relevant context import json query = "What are the conclusions in this document?" tree_without_text = utils.remove_fields(tree.copy(), fields=['text']) search_prompt = f""" You are given a question and a tree structure of a document. Each node contains a node id, node title, and a corresponding summary. Your task is to find all nodes that are likely to contain the answer to the question. Question: {query} Document tree structure: {json.dumps(tree_without_text, indent=2)} Please reply in the following JSON format: {{ "thinking": "<Your thinking process on which nodes are relevant to the question>", "node_list": ["node_id_1", "node_id_2", ..., "node_id_n"] }} Directly return the final JSON structure. Do not output anything else.
""" tree_search_result = await call_llm(search_prompt) Step 6: Print retrieved nodes and reasoning process node_map = utils.create_node_mapping(tree) tree_search_result_json = json.loads(tree_search_result) print('Reasoning Process:') utils.print_wrapped(tree_search_result_json['thinking']) print('\nRetrieved Nodes:') for node_id in tree_search_result_json["node_list"]: node = node_map[node_id] print(f"Node ID: {node['node_id']}\t Page: {node['page_index']}\t Title: {node['title']}") Step 7: Answer generation node_list = json.loads(tree_search_result)["node_list"] relevant_content = "\n\n".join(node_map[node_id]["text"] for node_id in node_list) print('Retrieved Context:\n') utils.print_wrapped(relevant_content[:1000] + '...') answer_prompt = f""" Answer the question based on the context: Question: {query} Context: {relevant_content} Provide a clear, concise answer based only on the context provided. """ print('Generated Answer:\n') answer = await call_llm(answer_prompt) utils.print_wrapped(answer) When to Use Each Approach Both vector-based RAG and vectorless RAG have their strengths. Choosing the right approach depends on the nature of the documents and the type of retrieval required. When to Use Vector Database–Based RAG Vector-based retrieval works best when dealing with large collections of unrelated or loosely structured documents. In such cases, semantic similarity is often sufficient to identify relevant information quickly. Use vector RAG when: Searching across many independent documents Semantic similarity is sufficient to locate relevant content Real-time retrieval is required over very large datasets Common use cases include: Customer support knowledge bases Conversational chatbots Product and content search systems When to Use Vectorless RAG Vectorless approaches such as PageIndex are better suited for long, structured documents where understanding the logical organization of the content is important. Use vectorless RAG when: Documents contain clear hierarchical structure Logical reasoning across sections is required High retrieval accuracy is critical Typical examples include: Financial filings and regulatory reports Legal documents and contracts Technical manuals and documentation Academic and research papers In these scenarios, navigating the document structure allows the system to identify the exact section that logically contains the answer, rather than relying only on semantic similarity. Conclusion Vector databases significantly advanced RAG architectures by enabling scalable semantic search across large datasets. However, they are not the optimal solution for every type of document. Vectorless approaches such as PageIndex introduce a different philosophy: instead of retrieving text that is merely semantically similar, they retrieve text that is logically relevant by reasoning over the structure of the document. As RAG architectures continue to evolve, the future will likely combine the strengths of both approaches. Hybrid systems that integrate vector search for broad retrieval and reasoning-based navigation for precision may offer the best balance of scalability and accuracy for enterprise AI applications.