## Building an Enterprise Knowledge Copilot with Foundry IQ and Agentic Retrieval on Azure AI
Every enterprise has the same problem: knowledge scattered across SharePoint, file shares, wikis, and email. This article walks through building a knowledge copilot that unifies that data behind a single conversational interface — using Microsoft's Foundry IQ knowledge bases and the agentic retrieval engine in Azure AI Search.

### The Problem: Fragmented Knowledge, Fragmented Answers

Enterprise AI projects today share a common pain point. Each new agent or copilot that needs to answer questions from company data must rebuild its own retrieval pipeline from scratch — data connections, chunking logic, embeddings, routing, permissions — all duplicated project after project. The result is a tangle of fragmented, siloed pipelines that are expensive to maintain and inconsistent in quality.

Consider a field technician troubleshooting equipment. The answer might span a vendor manual stored in OneLake, a company repair policy on SharePoint, and a public electrical standard on the web. Traditional single-index RAG cannot orchestrate across those sources in one pass. The technician waits, the issue escalates, and productivity drops.

Foundry IQ, announced in public preview in November 2025, addresses this directly. It provides a unified knowledge layer for agents — a single endpoint that replaces per-project RAG pipelines with a reusable, topic-centric knowledge base that any number of agents can consume.

### What Is Foundry IQ?

Foundry IQ introduces four capabilities built on top of Azure AI Search:

- **Knowledge Bases** — Reusable, topic-centric collections (e.g., "employee policies," "product documentation") available directly in the Foundry portal. Rather than wiring retrieval logic into every agent, you define a knowledge base once and ground multiple agents through a single API.
- **Indexed and Federated Knowledge Sources** — A knowledge base can draw from Azure Blob Storage, OneLake, SharePoint, Azure AI Search indexes, the web, and MCP servers (MCP in private preview). Developers do not need to manage different retrieval strategies per source; the knowledge base presents a unified endpoint.
- **Agentic Retrieval Engine** — A self-reflective query engine that uses AI to plan, search, and synthesize answers with configurable retrieval reasoning effort.
- **Enterprise-Grade Security** — Document-level access control and alignment with existing permissions models. Microsoft Purview sensitivity labels are respected through the indexing and retrieval pipeline, so classified content remains governed as it flows into knowledge bases.

For indexed sources, Foundry IQ automatically manages the full indexing pipeline: content is ingested, chunked, vectorized, and prepared for hybrid retrieval. When Azure Content Understanding is enabled, complex documents gain layout-aware enrichment — tables, figures, and headers are extracted and structured without extra engineering work.

### How Agentic Retrieval Works

Single-shot RAG — one query, one index, one pass — breaks down when questions are ambiguous, multi-hop, or span several data silos. Foundry IQ's agentic retrieval engine treats retrieval as a multi-step reasoning task rather than a keyword lookup:

1. **Plan** — The engine analyzes the conversation and decomposes the query into focused sub-queries, deciding which knowledge sources to consult.
2. **Search** — Sub-queries run concurrently against selected sources using keyword, vector, or hybrid techniques.
3. **Rank** — Semantic reranking identifies the most relevant results.
4. **Reflect** — If the information gathered is insufficient, the engine iterates, issuing follow-up queries autonomously.
5. **Synthesize** — Results are unified into a natural-language answer with source references.

Developers control this behaviour through a high-level retrieval reasoning effort setting. Lower effort suits fast, lightweight lookups; higher effort enables iterative search and richer planning across the entire data estate.

Real-world impact: AT&T integrated Azure AI Search and retrieval-augmented generation into its multi-agent framework, reducing customer resolution times by 33 percent, cutting average handle time by nearly 10 percent, and scaling 71 AI solutions to 100,000 employees. Ontario Power Generation used agentic retrieval to sift through over 40 years of nuclear operating experience, enabling data-driven decision-making and helping new staff learn from decades of institutional knowledge.

### Architecture Overview

*(Architecture diagram; see the original post.)*

### Step-by-Step: Setting Up the Knowledge Copilot

#### Provision Resources

You need an Azure AI Search service (Basic tier or above), a Microsoft Foundry project, an embedding model deployment (e.g., text-embedding-3-large), and an LLM deployment (e.g., gpt-4.1) for query planning and answer generation. .NET 8 or later is required for the C# SDK.

#### Create a Knowledge Base in Azure AI Search

Using the Azure.Search.Documents preview SDK, define an index, a knowledge source pointing to your data, and a knowledge base with OutputMode set to AnswerSynthesis for natural-language answers with citations. The following C# snippet (adapted from the official Azure AI Search quickstart) shows the knowledge base creation:

```csharp
using Azure;
using Azure.Identity;
using Azure.Search.Documents.Indexes;

var searchEndpoint = "https://<your-service>.search.windows.net";
var aoaiEndpoint = "https://<your-resource>.openai.azure.com/";

var indexClient = new SearchIndexClient(
    new Uri(searchEndpoint), new DefaultAzureCredential());

// Configure the LLM for query planning and answer synthesis
var openAiParameters = new AzureOpenAIVectorizerParameters
{
    ResourceUri = new Uri(aoaiEndpoint),
    DeploymentName = "gpt-4.1",
    ModelName = "gpt-4.1"
};
var model = new KnowledgeBaseAzureOpenAIModel(openAiParameters);

// Create the knowledge base with answer synthesis enabled
var knowledgeBase = new KnowledgeBase("<knowledge-base-name>")
{
    OutputMode = KnowledgeBaseOutputMode.AnswerSynthesis,
    AnswerInstructions = "Provide a concise answer based on the retrieved documents.",
    Models = { model }
};

await indexClient.CreateOrUpdateKnowledgeBaseAsync(knowledgeBase);
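The snippet above creates the knowledge base itself; the quickstart also registers a knowledge source that points the base at your data. A minimal sketch of that step follows — the type names (`SearchIndexKnowledgeSource`, `SearchIndexKnowledgeSourceParameters`) and the `CreateOrUpdateKnowledgeSourceAsync` call reflect my reading of the preview SDK surface and should be verified against the current Azure.Search.Documents preview package before use.

```csharp
// Hedged sketch: register an existing search index as a knowledge source.
// Type and method names follow the preview SDK as I understand it; verify
// them against the released Azure.Search.Documents preview package.
var knowledgeSource = new SearchIndexKnowledgeSource("<knowledge-source-name>")
{
    SearchIndexParameters = new SearchIndexKnowledgeSourceParameters("<index-name>")
};

await indexClient.CreateOrUpdateKnowledgeSourceAsync(knowledgeSource);
```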
#### Connect an Agent to the Knowledge Base via MCP

Each knowledge base exposes a Model Context Protocol (MCP) endpoint that MCP-compatible agents can call. The Foundry IQ-specific agent SDK currently offers full code samples for Python and REST API, but you can use the general-purpose MCP tooling in C# to achieve the same connection. The following pattern is drawn from the official Microsoft Learn documentation on MCP tools with Foundry Agents:

```csharp
using Azure.AI.Projects;
using Azure.Identity;

var endpoint = "https://<your-resource>.services.ai.azure.com/api/projects/<your-project>";
var model = "gpt-4.1-mini";

// Point the MCP tool at the knowledge base's MCP endpoint
var mcpTool = new MCPToolDefinition(
    serverLabel: "enterprise_kb",
    serverUrl: "https://<search-service>.search.windows.net"
        + "/knowledgebases/<kb-name>/mcp?api-version=2025-11-01-preview");
mcpTool.AllowedTools.Add("knowledge_base_retrieve");

// Create the agent with the MCP tool attached
var projectClient = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential());
var agentVersion = await projectClient.AgentAdministrationClient
    .CreateAgentVersionAsync(
        "enterprise-copilot",
        new ProjectsAgentVersionCreationOptions(
            new DeclarativeAgentDefinition(model)
            {
                Instructions = "You are a company knowledge assistant. "
                    + "Always search the knowledge base before answering. "
                    + "If the knowledge base has no answer, say so clearly.",
                Tools = { mcpTool }
            }));
```

The agent instructions are critical — explicitly requiring the agent to use the knowledge base prevents it from answering purely from the LLM's training data.

#### Query the Copilot

Once the agent is published, your application layer simply sends user questions via the Azure AI Projects SDK or REST API. The agent autonomously invokes the knowledge base tool, retrieves grounded context, and returns an answer with citations referencing the original documents.
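As a concrete illustration of that application layer, here is a hedged sketch. The `GetAIAgentAsync` call is hypothetical shorthand for "resolve the published agent by name" — the exact method depends on the Azure AI Projects / Agent Framework SDK version you are using, so check the current SDK surface.

```csharp
// Hedged sketch of the application layer. GetAIAgentAsync is hypothetical
// shorthand for resolving the published agent by name; the exact call
// depends on the Azure AI Projects / Agent Framework SDK version in use.
using Azure.AI.Projects;
using Azure.Identity;

var projectClient = new AIProjectClient(
    new Uri("https://<your-resource>.services.ai.azure.com/api/projects/<your-project>"),
    new DefaultAzureCredential());

var copilot = await projectClient.GetAIAgentAsync("enterprise-copilot"); // hypothetical helper

var answer = await copilot.RunAsync(
    "What is our repair policy when a vendor manual and company policy disagree?");
Console.WriteLine(answer); // grounded answer with citations
```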
### Trade-offs and Considerations

| Dimension | Detail |
| --- | --- |
| Maturity | Foundry IQ is in public preview — not recommended for production workloads without accepting preview SLA terms. |
| Cost | Agentic retrieval has two billing streams: token-based billing from Azure AI Search for retrieval, and billing from Azure OpenAI for query planning and answer synthesis. |
| Latency vs. quality | Higher retrieval reasoning effort produces better answers but adds latency due to iterative search. For sub-second lookups, use minimal effort; for complex multi-hop questions, use medium. |
| C# SDK coverage | The Foundry IQ-specific agent connection SDK currently supports Python and REST API. C# support is available for the underlying agentic retrieval queries and for general MCP tool integration. |
| Security | Document-level ACLs from SharePoint are enforced at query time. For per-user authorization in Foundry Agent Service, the current preview does not support per-request MCP headers; use the Azure OpenAI Responses API as an alternative. |

### Key Takeaways

Foundry IQ transforms enterprise RAG from a bespoke, per-project exercise into a managed, reusable knowledge layer. You define a knowledge base once, connect it to your data sources, and any number of agents or apps can consume it. The agentic retrieval engine handles query planning, multi-source search, semantic reranking, and iterative refinement — capabilities that previously required significant custom engineering. For .NET developers, the Azure AI Search C# SDK and the MCP tooling in the Agent Framework provide the building blocks to integrate this into your applications today.

References:
- What is Foundry IQ?
- Create a knowledge base in Azure AI Search
- Foundry IQ: Unlocking ubiquitous knowledge for agents

## Partner perspective: How Breakthru uses App Advisor and AI-listing optimization to drive growth

Optimizing a Marketplace listing isn't just a marketing exercise — it directly impacts discoverability, demand, and revenue. But knowing what to change (and when) can be challenging for software development companies. In this partner-written blog post, Marketplace software development company Breakthru shares firsthand experience using AI-powered listing recommendations in App Advisor to move from guesswork to confident, data-driven optimization, without risking listing performance. Dan Langille also reflects on how App Advisor became a core part of their business, what's working in practice, and how AI is changing how teams iterate on their Marketplace presence.

👉 Read the partner story here: Improve Marketplace outcomes with AI-powered listing recommendations in App Advisor

Discussion prompts for the community:
- Would AI-driven recommendations change how often you iterate on your listing?
- Have you used App Advisor for selling and growing app and AI agent sales?

Curious to hear how other Marketplace partners are approaching listing optimization today!

## Unable to remove old sign-in account from Teams on macOS
I cannot remove an old sign-in account from Teams on macOS. I have deleted the keychain entries, cleared the temp folder, and removed and reinstalled Teams, and the old account is still there. I have a case open with the Microsoft team, and they cannot figure it out. Where is the login information stored, and why has Microsoft not provided a way for users to remove accounts as easily as they can add them? Please help!

## Access fixes released in Version 2604 (Build 16.0.19929.20090)
| Bug | Issue fixed |
| --- | --- |
| Values display in the wrong control when using a form as a sublist | When a form was used as a sublist (subdatasheet), field values could display in the wrong control, showing data in incorrect positions. Values now display in the correct controls. |
| Applications that use the Access Database Engine (ACEOLEDB) terminate unexpectedly on exit | Third-party applications using the Access Database Engine (ACEOLEDB) provider could terminate unexpectedly when closing. The shutdown sequence has been corrected. |
| Long Text field corrupted when a query updates a record while a user is editing it | When a query updated a Long Text field on a record that was simultaneously open for editing, the field data could become corrupted. The record update now correctly handles concurrent access to Long Text fields. |
| Rendering errors with Aptos (Detail) font | Controls using the Aptos (Detail) font variant could render incorrectly, with characters appearing misaligned or garbled. The font rendering has been corrected. |
| Standard colors in Access didn't match other Office apps | The standard color palette in Access used different color values than other Office applications like Word and Excel. The color palette has been updated to match the rest of Office. |
| Option Group with Vertical Anchor Bottom: option buttons show incorrect visual state after clicking | When an option group control had its Vertical Anchor property set to Bottom, clicking an option button would not correctly update the visual state of the buttons. The visual state now updates correctly regardless of the anchor setting. |
| Query Design: Insert/Delete Columns don't work when ribbon is set to Show Tabs Only | In Query Design view, the Insert Columns and Delete Columns commands on the ribbon did not work when the ribbon display option was set to "Show Tabs Only." The commands now work correctly regardless of ribbon display mode. |
| SQL View: Ctrl+K should toggle pretty formatting off/on | In the Monaco SQL editor, the Ctrl+K keyboard shortcut did not toggle SQL formatting. Ctrl+K now correctly toggles pretty formatting on and off. |
| Monaco editor incorrectly converts Unicode characters in SQL view | When switching between Design View and SQL View, the Monaco SQL editor could incorrectly convert certain Unicode characters, corrupting the SQL text. Unicode characters are now preserved correctly. |
| Importing text files with Unicode characters in the filename fails | Attempting to import a text file whose filename contained certain Unicode characters would fail. File imports now handle Unicode filenames correctly. |
| Added VarP and StDevP to the Totals query aggregate dropdown | The VarP (population variance) and StDevP (population standard deviation) aggregate functions were missing from the Totals row dropdown in Query Design view. They have been added alongside the existing Var and StDev options. |
| Added VarP and StDevP to the datasheet totals row dropdown | The VarP and StDevP aggregate functions were missing from the Totals row dropdown in Datasheet view. They have been added to match the options available in Query Design view. |
| Access hangs at shutdown when VBA holds temporary DAO field references | Access could hang during shutdown when VBA code created temporary DAO field references. The shutdown process now correctly cleans up temporary field references. |
| Full Screen Mode ribbon display option does nothing in Access | Selecting "Full Screen Mode" from the ribbon display options had no effect in Access. This option now works correctly, hiding the ribbon to maximize the available workspace. |

## The Future of Agentic AI: Inside Microsoft Agent Framework 1.0
Agentic AI is rapidly moving beyond demos and chatbots toward long-running, autonomous systems that reason, call tools, collaborate with other agents, and operate reliably in production. On April 3, 2026, Microsoft marked a major milestone with the General Availability (GA) release of Microsoft Agent Framework 1.0, a production-ready, open-source framework for building agents and multi-agent workflows in .NET and Python.

In this post, we'll deep-dive into:
- What Microsoft Agent Framework actually is
- Its core architecture and design principles
- What's new in version 1.0
- How it differs from other agent frameworks
- When and how to use it, with real code examples

### What Is Microsoft Agent Framework?

According to the official announcement, Microsoft Agent Framework is an open-source SDK and runtime for building AI agents and multi-agent workflows with strong enterprise foundations. Agent Framework provides two primary capability categories:

#### 1. Agents

Agents are long-lived runtime components that:
- Use LLMs to interpret inputs
- Call tools and MCP servers
- Maintain session state
- Generate responses

They are not just prompt wrappers, but stateful execution units.

#### 2. Workflows

Workflows are graph-based orchestration engines that:
- Connect agents and functions
- Enforce execution order
- Support checkpointing and human-in-the-loop scenarios

This leads to a clean separation of responsibilities:

| Concern | Handled by |
| --- | --- |
| Reasoning & interpretation | Agent |
| Execution policy & control flow | Workflow |

This separation is a foundational design decision.

### High-Level Architecture

From the official overview, Agent Framework is composed of several core building blocks:
- Model clients (chat completions & responses)
- Agent sessions (state & conversation management)
- Context providers (memory and retrieval)
- Middleware pipeline (interception, filtering, telemetry)
- MCP clients (tool discovery and invocation)
- Workflow engine (graph-based orchestration)

#### Conceptual Flow

*(Diagram; see the original post.)*

### 🌟 What's New in Version 1.0

Version 1.0 marks the transition from "Release Candidate" to "General Availability" (GA).

- **Production-Ready Stability:** Unlike the earlier experimental packages, 1.0 offers stable APIs, versioned releases, and a commitment to long-term support (LTS).
- **A2A Protocol (Agent-to-Agent):** A new structured messaging protocol that allows agents to communicate across different runtimes. For example, an agent built in Python can seamlessly coordinate with an agent running in a .NET environment.
- **MCP (Model Context Protocol) Support:** Full integration with the Model Context Protocol, enabling agents to dynamically discover and invoke external tools and data sources without manual integration code.
- **Multi-Agent Orchestration Patterns:** Stable implementations of complex patterns, including:
  - **Sequential:** Linear handoffs between specialized agents.
  - **Group Chat:** Collaborative reasoning where agents discuss and solve problems.
  - **Magentic-One:** A sophisticated pattern for task-oriented reasoning and planning.
- **Middleware Pipeline:** The new middleware architecture lets you inject logic into the agent's execution loop without modifying the core prompts. This is essential for Responsible AI (RAI), allowing you to add content safety filters, logging, and compliance checks globally (see the sketch after this list).
- **DevUI Debugger:** A browser-based local debugger that provides a real-time visual representation of agent message flows, tool calls, and state changes.
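To make the middleware idea concrete, here is a minimal sketch that wraps an agent with an audit-logging layer. The `AsBuilder()`/`Use(...)` pattern and the delegate shapes follow the Agent Framework middleware docs as I recall them; treat the exact signatures as an assumption and verify them against the released Microsoft.Agents.AI package.

```csharp
// Hedged sketch: run-level middleware around an existing AIAgent.
// The AsBuilder()/Use(...) pattern and delegate shapes are assumptions
// based on the Agent Framework middleware docs; verify against the
// released Microsoft.Agents.AI API before relying on them.
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

AIAgent WithAuditLogging(AIAgent inner) =>
    inner.AsBuilder()
        .Use(async (messages, thread, options, next, cancellationToken) =>
        {
            // Pre-processing: inspect (or filter) the incoming messages.
            foreach (ChatMessage message in messages)
                Console.WriteLine($"[audit] {message.Role}: {message.Text}");

            // Hand off to the next stage (ultimately the wrapped agent).
            AgentRunResponse response =
                await next(messages, thread, options, cancellationToken);

            // Post-processing: a content-safety or compliance check could run here.
            Console.WriteLine($"[audit] agent returned {response.Messages.Count} message(s)");
            return response;
        }, null) // second argument: optional streaming-run middleware
        .Build();
```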
### Code Examples

#### Creating a Simple Agent (C#)

From Microsoft Learn:

```csharp
using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;

AIAgent agent = new AIProjectClient(
        new Uri("https://your-foundry-service.services.ai.azure.com/api/projects/your-project"),
        new AzureCliCredential())
    .AsAIAgent(
        model: "gpt-5.4-mini",
        instructions: "You are a friendly assistant. Keep your answers brief.");

Console.WriteLine(await agent.RunAsync("What is the largest city in France?"));
```

This shows:
- Provider-agnostic model access
- Session-aware agent execution
- Minimal setup for production agents

#### Creating a Simple Agent (Python)

```python
from agent_framework.foundry import FoundryChatClient
from azure.identity import AzureCliCredential

client = FoundryChatClient(
    project_endpoint="https://your-foundry-service.services.ai.azure.com/api/projects/your-project",
    model="gpt-5.4-mini",
    credential=AzureCliCredential(),
)

agent = client.as_agent(
    name="HelloAgent",
    instructions="You are a friendly assistant. Keep your answers brief.",
)

result = await agent.run("What is the largest city in France?")
print(result)
```

The same agent abstraction applies across languages.

### When to Use Agents vs Workflows

Microsoft provides clear guidance:

| Use an agent when… | Use a workflow when… |
| --- | --- |
| The task is open-ended | Steps are well-defined |
| Autonomous tool use is needed | Execution order matters |
| There is a single decision point | Multiple agents/functions collaborate |

Key principle: if you can solve the task with deterministic code, do that instead of using an AI agent.
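To ground the distinction, here is a minimal sequential-workflow sketch in C#. It follows the `WorkflowBuilder` surface from the Agent Framework workflow samples (`AddEdge`, `Build`, `InProcessExecution`); treat the exact type and method names as illustrative rather than authoritative, and note that the endpoint and model names are placeholders.

```csharp
// Hedged sketch: a deterministic two-step workflow (writer -> reviewer).
// WorkflowBuilder / AddEdge / InProcessExecution follow the Agent Framework
// workflow samples; verify the exact names against the released packages.
using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Workflows;
using Microsoft.Extensions.AI;

var projectClient = new AIProjectClient(
    new Uri("https://your-foundry-service.services.ai.azure.com/api/projects/your-project"),
    new AzureCliCredential());

AIAgent writer = projectClient.AsAIAgent(
    model: "gpt-5.4-mini",
    instructions: "Draft a short answer to the user's question.");
AIAgent reviewer = projectClient.AsAIAgent(
    model: "gpt-5.4-mini",
    instructions: "Tighten the draft and flag any unsupported claims.");

// The graph fixes the execution order; the model does not decide the flow.
var workflow = new WorkflowBuilder(writer)
    .AddEdge(writer, reviewer)
    .Build();

StreamingRun run = await InProcessExecution.StreamAsync(
    workflow, new ChatMessage(ChatRole.User, "Summarize what the A2A protocol does."));

await foreach (WorkflowEvent evt in run.WatchStreamAsync())
{
    if (evt is WorkflowOutputEvent output)
        Console.WriteLine(output.Data);
}
```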
### 🔄 How It Differs from Other Frameworks

Microsoft Agent Framework 1.0 distinguishes itself by focusing on enterprise readiness and interoperability.

| Feature | Microsoft Agent Framework 1.0 | Semantic Kernel / AutoGen | LangChain / CrewAI |
| --- | --- | --- | --- |
| Philosophy | Unified, production-ready SDK | Research-focused or tool-specific | High-level, developer-friendly abstractions |
| Integration | Deeply integrated with Microsoft Foundry and Azure | Varied; often requires more glue code | Generally cloud-agnostic |
| Interoperability | Native A2A and MCP for cross-framework tasks | Limited to internal ecosystem | Uses proprietary connectors |
| Runtime | Identical API parity for .NET and Python | Primarily Python-first (SK has C#) | Primarily Python |
| Control | Graph-based deterministic workflows | More non-deterministic/experimental | Mixture of role-based and agentic |

### 🛠️ Key Technical Components

- **Agent Harness:** The execution layer that provides agents with controlled access to the shell, file system, and messaging loops.
- **Agent Skills:** A portable, file-based or code-defined format for packaging domain expertise.

Implementation tip: if you are coming from Semantic Kernel, Microsoft provides migration assistants that analyze your existing code and generate step-by-step plans to upgrade to the new Agent Framework 1.0 standards.

References:
- Microsoft Agent Framework Version 1.0 | Microsoft Agent Framework
- Agent Framework documentation

### 🎯 Summary

Microsoft Agent Framework 1.0 is the "grown-up" version of AI orchestration. By standardizing the way agents talk to each other (A2A), discover tools (MCP), and process information (middleware), Microsoft has provided a clear path for taking AI experiments into production. For more detailed guides, check out the official Microsoft Agent Framework documentation.

## Microsoft Agent Framework - .NET AI Community Standup

## Board and executive meeting management - Pervasent - SharePoint Partner Showcase

We are excited to share a new episode in our partner showcase series focused on SharePoint in Microsoft 365. In this post, we focus on Pervasent, which provides a solution targeted at executive and board meetings, used to plan, manage, and distribute meeting content. The content management and planning features in this solution are built with the SharePoint Framework (SPFx).

## Building a modern digital workplace on Microsoft 365 - WebVine - SharePoint Partner Showcase
We are excited to share a new episode in our partner showcase series focused on SharePoint in Microsoft 365. In this post, we focus on WebVine, which provides an intranet accelerator solution with streamlined document management capabilities and a user interface implemented using the SharePoint Framework (SPFx).

## Unlocking hard data estates: How Cloudera on Microsoft Marketplace brings AI to regulated industries
In this guest blog post, Alex Wagman, Global Cloud Alliance Manager at Cloudera, considers the data challenges of regulated industries and how Cloudera enables governed hybrid data and AI.

## SharePoint Video Playback Quality Defaults to 480p – Horrendous
I manage multimedia training for over 600 locations and two distribution centers, supporting nearly 10,000 team members. We have made a significant investment in producing high-quality training content through professional equipment, structured scripting, and dedicated production and editing time. However, once videos are uploaded to SharePoint, the default playback quality often degrades the experience to what appears to be approximately 480p. As a result, professionally produced content can look noticeably poor on initial playback.

Many of our store-level team members are not in a position to manually adjust playback settings, so the default experience matters. The source files are high resolution, but the default playback does not reflect that quality. This also creates issues when leadership reviews training content, because the playback quality can reflect poorly on the production even though the original video is clear.

We moved away from third-party hosting due to ad exposure during onboarding and the need for a controlled internal platform. We are intentionally using SharePoint and Stream as part of our existing Microsoft 365 ecosystem for scalability, governance, and centralized access. Introducing additional paid hosting platforms or external streaming solutions is not a direction we are pursuing. From an enterprise training standpoint, defaulting to low-resolution playback undermines engagement and credibility.

- Are there plans to allow administrators to define a default playback resolution for SharePoint or Stream videos?
- Are there recommended encoding settings that influence the initial playback quality more reliably?
- Are there roadmap updates around improving adaptive streaming behavior or default resolution selection?

Any guidance or insight would be appreciated.

## Dev Containers for .NET in VS Code: A Beginner-Friendly Guide That Actually Works
### What Dev Containers are really about

At a high level, Dev Containers let you use a Docker container as your development environment inside VS Code. But the real idea is not "Docker for development". The real idea is this: move all environment complexity out of your laptop and into version-controlled configuration.

With Dev Containers:
- Your laptop becomes just a VS Code client
- Your tools, SDKs, runtimes, and dependencies live inside the container
- Your project defines its own development environment, not your machine

This means:
- You can switch projects without breaking anything
- You can delete and recreate your environment safely
- New developers get the same setup without tribal knowledge

### Why Dev Containers are so useful for .NET projects

.NET development often looks simple at first, until it doesn't. Common pain points:
- Different developers using different .NET SDK versions
- One project needs .NET 6, another needs .NET 8
- Native dependencies work on one machine but not another
- CI runs on Linux but developers run on Windows

Dev Containers solve this by:
- Locking the SDK version and OS used for development
- Running everything in a Linux container (close to CI/production)
- Keeping developer machines clean and stable
- Making onboarding almost instant: clone → reopen in container → run

Once the .devcontainer folder is committed to the repo, the environment becomes part of the codebase, not a wiki page.

### How Dev Containers work in VS Code

You don't need deep Docker knowledge to use Dev Containers. Here's the mental model that helped me:
1. Your repository contains a .devcontainer folder
2. Inside it, devcontainer.json describes the development environment
3. VS Code reads that file and starts a container
4. VS Code connects to the container and runs extensions inside it

Your source code stays on your machine, but:
- the terminal runs inside the container
- the debugger runs inside the container
- the SDKs live inside the container

If something breaks, you rebuild the container, not your laptop.

### When Dev Containers are a great choice (and when they're not)

Dev Containers are a great fit when:
- You work on multiple projects with different requirements
- Your team struggles with environment consistency
- You want Linux parity for CI and containerized deployments
- You value reproducibility over ad-hoc local setup

They may not be ideal when:
- You're working on very small throwaway scripts
- You rely heavily on Windows-only tooling
- You cannot use Docker at all in your environment

For most professional .NET teams, the benefits far outweigh the cost.

### Docker on Windows: a choice you must make early

When starting with Dev Containers on Windows, one of the first decisions you must make is how Docker runs on your machine. Both Docker Desktop and Docker Engine inside WSL work well with Dev Containers, but they serve slightly different needs.

#### Using Docker Desktop

Docker Desktop is the easiest and most beginner-friendly way to get started with Dev Containers.
**Pros**
- Very quick setup with minimal configuration
- Comes with a graphical dashboard for containers, images, and logs
- Integrates smoothly with VS Code and WSL2
- Easier to troubleshoot when you're learning

**Cons**
- Uses more system resources in the background
- Runs additional services even when you're not actively developing
- May be restricted or licensed differently in some enterprise environments

**When to use Docker Desktop**
- You are new to Docker or Dev Containers
- You want the simplest and fastest setup
- You value ease of use over fine-grained control
- You are working on personal projects or in environments where Docker Desktop is allowed

For most developers starting out with Dev Containers, Docker Desktop is the recommended entry point.

#### Using Docker Engine inside WSL

This approach installs Docker Engine directly inside a Linux distribution (like Ubuntu) running on WSL2, without Docker Desktop.

**Pros**
- Lower resource usage compared to Docker Desktop
- Linux-native behavior (closer to CI and production)
- No dependency on Docker Desktop
- Often preferred in enterprise or restricted environments

**Cons**
- Requires manual installation and configuration
- Needs basic Linux and WSL knowledge
- No graphical UI; everything is CLI-based

**When to use Docker Engine in WSL**
- Docker Desktop is not allowed or restricted
- You want a leaner, Linux-first workflow
- You already work mostly inside WSL
- You want tighter control over your Docker setup

This approach is ideal once you are comfortable with Docker and WSL.

Note: do not mix Docker Desktop and Docker Engine inside WSL. Pick one approach and stick with it. Running both at the same time often leads to Docker context confusion and Dev Containers failing in unpredictable ways, even when your configuration looks correct.

### A performance tip that makes a huge difference

If you're using Linux containers with WSL, store your code inside the WSL filesystem.

- Recommended: /home/<user>/projects/your-repo
- Avoid: /mnt/c/Users/<user>/your-repo

Linux containers accessing Windows files are slower and cause file-watching issues. Moving the repo into WSL made my Dev Containers feel almost native.

### First-time setup: the simplest way to start

If you're trying Dev Containers for the first time, follow this exact order:
1. Install Visual Studio Code
2. Install the Dev Containers extension
3. Install Docker Desktop (or Docker Engine in WSL)
4. Clone your repo inside the WSL filesystem
5. Open the folder in VS Code
6. Run "Dev Containers: Reopen in Container"

That's it. VS Code handles the rest.
### Your first .NET Dev Container (hands-on example)

**Tech stack**
- .NET 8 Web API
- PostgreSQL 16
- Entity Framework Core + Npgsql
- VS Code Dev Containers
- Docker Compose

**Project structure**

```
my-blog-api/
├─ .devcontainer/
│  └─ devcontainer.json
├─ docker-compose.yml
└─ src/
   └─ BlogApi/
      ├─ Program.cs
      ├─ BlogApi.csproj
      ├─ appsettings.json
      ├─ Models/
      └─ Data/
```

#### Step 1: Create the Web API

```bash
mkdir my-blog-api
cd my-blog-api
mkdir src && cd src
dotnet new webapi -n BlogApi
cd BlogApi
```

#### Step 2: Add EF Core + PostgreSQL packages

```bash
dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL
dotnet add package Microsoft.EntityFrameworkCore.Design
```

#### Step 3: Docker Compose (API + PostgreSQL)

Create docker-compose.yml at the repo root:

```yaml
version: "3.8"
services:
  app:
    image: mcr.microsoft.com/devcontainers/dotnet:1-8.0
    volumes:
      - .:/workspace:cached
    working_dir: /workspace
    command: sleep infinity
    ports:
      - "5000:5000"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpwd
      POSTGRES_DB: devdb
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"
    depends_on:
      - db
volumes:
  pgdata:
```

#### Step 4: Dev Container configuration

Create .devcontainer/devcontainer.json:

```json
{
  "name": "dotnet-postgres-devcontainer",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace",
  "shutdownAction": "stopCompose",
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-dotnettools.csdevkit",
        "ms-dotnettools.csharp",
        "ms-azuretools.vscode-docker"
      ]
    }
  },
  "postCreateCommand": "dotnet restore"
}
```

Open the folder in VS Code and run: Dev Containers: Reopen in Container

#### Step 5: Connection string (container-to-container)

Update appsettings.json:

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Host=db;Port=5432;Database=devdb;Username=devuser;Password=devpwd"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*"
}
```

Host=db works because Docker Compose provides internal DNS between services.
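If you want to sanity-check that container-to-container DNS before wiring up EF Core, a minimal Npgsql smoke test like the following can be run inside the dev container. This is an illustrative extra step, not part of the original guide; it assumes the Npgsql package from Step 2 and the credentials defined in docker-compose.yml.

```csharp
// Minimal smoke test: verifies that the "db" hostname resolves via Docker
// Compose's internal DNS and that PostgreSQL accepts the dev credentials.
using Npgsql;

await using var conn = new NpgsqlConnection(
    "Host=db;Port=5432;Database=devdb;Username=devuser;Password=devpwd");
await conn.OpenAsync();

Console.WriteLine($"Connected to PostgreSQL {conn.PostgresVersion}");
```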
#### Step 6: EF Core model & DbContext

Post entity (Models/Post.cs):

```csharp
namespace BlogApi.Models;

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; } = string.Empty;
    public string Content { get; set; } = string.Empty;
    public DateTime CreatedUtc { get; set; } = DateTime.UtcNow;
}
```

DbContext (Data/BlogDbContext.cs):

```csharp
using BlogApi.Models;
using Microsoft.EntityFrameworkCore;

namespace BlogApi.Data;

public class BlogDbContext : DbContext
{
    public BlogDbContext(DbContextOptions<BlogDbContext> options) : base(options) { }

    public DbSet<Post> Posts => Set<Post>();
}
```

#### Step 7: Program.cs (minimal CRUD)

Replace Program.cs with:

```csharp
using BlogApi.Data;
using BlogApi.Models;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDbContext<BlogDbContext>(options =>
    options.UseNpgsql(builder.Configuration.GetConnectionString("DefaultConnection")));

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Apply migrations on startup (dev-only convenience)
using (var scope = app.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<BlogDbContext>();
    db.Database.Migrate();
}

app.UseSwagger();
app.UseSwaggerUI();

app.MapGet("/posts", async (BlogDbContext db) =>
    await db.Posts.OrderByDescending(p => p.CreatedUtc).ToListAsync());

app.MapPost("/posts", async (Post post, BlogDbContext db) =>
{
    db.Posts.Add(post);
    await db.SaveChangesAsync();
    return Results.Created($"/posts/{post.Id}", post);
});

app.MapPut("/posts/{id:int}", async (int id, Post input, BlogDbContext db) =>
{
    var post = await db.Posts.FindAsync(id);
    if (post is null) return Results.NotFound();
    post.Title = input.Title;
    post.Content = input.Content;
    await db.SaveChangesAsync();
    return Results.Ok(post);
});

app.MapDelete("/posts/{id:int}", async (int id, BlogDbContext db) =>
{
    var post = await db.Posts.FindAsync(id);
    if (post is null) return Results.NotFound();
    db.Posts.Remove(post);
    await db.SaveChangesAsync();
    return Results.NoContent();
});

app.Run("http://0.0.0.0:5000");
```

#### Step 8: Run migrations (inside the Dev Container)

```bash
cd src/BlogApi
dotnet tool install --global dotnet-ef
export PATH="$PATH:/home/vscode/.dotnet/tools"
dotnet ef migrations add InitialCreate
dotnet ef database update
```

#### Step 9: Run the API

```bash
dotnet run
```

🔗 Open:
- Swagger → http://localhost:5000/swagger
- Posts API → http://localhost:5000/posts
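Once the API is up, you can exercise the CRUD endpoints from any client. Here is a small illustrative example using HttpClient from a .NET console app on the host; it assumes the API is listening on localhost:5000 as configured above, and the PostDto record is a local mirror of the API's Post shape, not part of the guide.

```csharp
// Exercises the BlogApi CRUD endpoints. Minimal API JSON binding is
// case-insensitive, so these anonymous-object property names map to
// the Post entity's Title/Content properties.
using System.Net.Http.Json;

using var http = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

// Create a post.
var create = await http.PostAsJsonAsync("/posts",
    new { title = "Hello from the dev container", content = "First post" });
Console.WriteLine($"POST /posts -> {(int)create.StatusCode}");

// List posts.
var posts = await http.GetFromJsonAsync<List<PostDto>>("/posts");
Console.WriteLine($"GET /posts -> {posts?.Count} post(s)");

// Lightweight local mirror of the API's Post shape (illustrative).
record PostDto(int Id, string Title, string Content, DateTime CreatedUtc);
```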
### Common mistakes and quick fixes

| Mistake | Symptom | Fix |
| --- | --- | --- |
| Mixing Docker models | Random failures | Use only one Docker approach |
| Code under /mnt/c | Slow builds | Move the repo to the WSL filesystem |
| Docker not running | Container won't start | Check `docker info` |
| Pruning first | Issues return | Fix the daemon/context first |

### Common challenges faced

- **Multiple Docker engines active simultaneously:** Docker Desktop and Docker Engine inside WSL were both present, causing conflicts.
- **Unstable Docker CLI context:** The Docker CLI intermittently pointed to different or broken Docker endpoints.
- **Docker daemon appeared running but was unusable:** Docker commands failed with API errors despite the daemon seeming active.
- **systemd dependency issues inside WSL:** Docker Engine depended on systemd, which was not consistently active after WSL restarts.
- **Dev Containers failing during setup:** VS Code Dev Containers surfaced failures during feature installation and builds.
- **Misleading Docker error messages:** Errors pointed to API or version issues, masking the real root cause.
- **Cache cleanup ineffective:** Pruning images and containers did not resolve underlying daemon issues.
- **Container observability confusion:** PostgreSQL and pgAdmin worked, but container health, volumes, and data locations were unclear.

### Solutions & maintainable settings

1. **Enforce a single Docker model.** Use either Docker Desktop or native Docker Engine inside WSL, never both.

```bash
docker version
docker info
```

Verify that only one server is shown, with no references to dockerDesktopLinuxEngine when using native WSL Docker.

2. **Explicitly lock the Docker CLI context.** Always verify and set the Docker context before running Compose or Dev Containers.

```bash
docker context ls
docker context show
docker context use default
```

The context must point to the intended daemon (WSL or Desktop).

3. **Validate Docker daemon health before starting the project.** Confirm Docker is reachable before Dev Containers or Compose.

```bash
docker info
docker ps
```

Both must return without API / 500 / version errors. Do not proceed if they fail.

4. **Ensure systemd is enabled in WSL.** Docker Engine inside WSL depends on systemd.

```bash
cat /etc/wsl.conf
```

Expected:

```
[boot]
systemd=true
```

If not, apply the change, restart WSL, and re-verify:

```bash
wsl --shutdown
systemctl status docker
```

5. **Start Docker explicitly after a WSL restart.** WSL restarts silently stop services.

```bash
sudo systemctl start docker
sudo systemctl enable docker
docker ps
```

6. **Use only the WSL-native filesystem for projects.** Keep projects under /home/<user>/... and avoid /mnt/c/... paths. The path should start with /home/.

7. **Treat Dev Containers as a consumer, not the fix.** Fix Docker issues outside Dev Containers first. Pre-check:

```bash
docker compose config
docker compose up -d
```

Compose must work before opening the Dev Container.

8. **Keep Dev Container features minimal on first run.** Start with the base image and required services only; add features after baseline stability.

```bash
docker images
docker ps
```

9. **Verify container observability explicitly.** Confirm containers are healthy, ports are mapped, and volumes are mounted.

```bash
docker ps
docker inspect <container_name>
docker logs <container_name>
ss -lntp | grep <port>
```

10. **Avoid cache cleanup as a first fix.** Do not rely on prune to fix daemon issues; run it only after the daemon is healthy.

```bash
docker system prune -f
docker volume prune -f
```

11. **Establish a "known-good" baseline checklist.** Validate this sequence before development starts:

```bash
wsl --shutdown
# reopen WSL
sudo systemctl start docker
docker context show
docker info
docker compose up -d
code .   # only then run "Reopen in Container"
```

If something breaks after running the dev container, stop and remove the containers, then rebuild:

```bash
docker ps
docker stop $(docker ps -q)
docker rm -f $(docker ps -aq)
docker ps
```

### Closing thoughts

Dev Containers shift local development from fragile, machine-specific setups to reproducible, version-controlled environments. With Dev Containers, Docker Compose, PostgreSQL, and pgAdmin, your entire .NET development stack lives inside containers, not on your laptop. SDKs, databases, and tools are isolated, consistent, and easy to rebuild. When something breaks, you rebuild containers, not machines. This approach removes onboarding friction, improves Linux parity with CI, and eliminates the classic "works on my machine" problem. Once Docker is stable, Dev Containers become one of the most reliable ways to build modern .NET applications.

### Key takeaways

- Dev Containers treat the development environment as code
- .NET, PostgreSQL, and pgAdmin run fully isolated in containers
- pgAdmin provides clear visibility into database state and migrations
- Docker stability is a prerequisite; Dev Containers are not a Docker fix
- Onboarding becomes simple: clone → reopen in container → run
- Rebuild containers, not laptops