
Microsoft Developer Community Blog

Dev Containers for .NET in VS Code: A Beginner‑Friendly Guide That Actually Works

Debapriya (Microsoft)
Apr 25, 2026

Setting up a local development environment should not be the hardest part of building software, yet for many developers it often is. Different .NET SDK versions, missing tools, OS‑specific dependencies, and database setup issues frequently lead to the familiar frustration of “it works on my machine”.

Dev Containers aim to solve this problem by changing how we think about development environments. Instead of installing and maintaining tools directly on your laptop, Dev Containers let you define your entire development environment as code and run it inside a container, seamlessly integrated with Visual Studio Code. Every developer on a team works in the same, consistent environment, regardless of operating system or local setup. Onboarding becomes faster, environment drift disappears, and development feels predictable again.

In this article, we’ll explore what Dev Containers are, why they are especially useful for .NET developers, and how to start using them step by step. We’ll also look at the two common ways to run Dev Containers on Windows:

  1. Docker Desktop
  2. Docker Engine inside WSL

and help you understand when to use each one, along with their pros and cons. Whether you’re completely new to Dev Containers or looking to adopt them more confidently in real projects, this guide will help you get there with clarity and practical examples.

What Dev Containers are really about

At a high level, Dev Containers let you use a Docker container as your development environment inside VS Code.

But the real idea is not “Docker for development”.

The real idea is this: Move all environment complexity out of your laptop and into version‑controlled configuration.

With Dev Containers:

  • Your laptop becomes just a VS Code client
  • Your tools, SDKs, runtimes, and dependencies live inside the container
  • Your project defines its own development environment, not your machine

This means:

  • You can switch projects without breaking anything
  • You can delete and recreate your environment safely
  • New developers get the same setup without tribal knowledge

Why Dev Containers are so useful for .NET projects

.NET development often looks simple at first, until it doesn’t.

Common pain points:

  • Different developers using different .NET SDK versions
  • One project needs .NET 6, another needs .NET 8
  • Native dependencies work on one machine but not another
  • CI runs on Linux but developers run on Windows

Dev Containers solve this by:

  • Locking the SDK version and OS used for development
  • Running everything in a Linux container (close to CI/production)
  • Keeping developer machines clean and stable
  • Making onboarding almost instant: clone → reopen in container → run

Once the .devcontainer folder is committed to the repo, the environment becomes part of the codebase, not a wiki page.

How Dev Containers work in VS Code

You don’t need deep Docker knowledge to use Dev Containers.

Here’s the mental model that helped me:

  1. Your repository contains a .devcontainer folder
  2. Inside it, devcontainer.json describes the development environment
  3. VS Code reads that file and starts a container
  4. VS Code connects to the container and runs extensions inside it

Your source code stays on your machine, but:

  • the terminal runs inside the container
  • the debugger runs inside the container
  • the SDKs live inside the container

If something breaks, you rebuild the container, not your laptop.
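To make the mental model concrete, a minimal devcontainer.json can be as small as the sketch below. It uses Microsoft's prebuilt .NET Dev Container image; the hands-on example later in this article uses a richer, Compose-based variant, but this alone is enough for VS Code to build and connect to a working environment.

```json
{
  "name": "minimal-dotnet",
  "image": "mcr.microsoft.com/devcontainers/dotnet:1-8.0",
  "customizations": {
    "vscode": {
      "extensions": ["ms-dotnettools.csdevkit"]
    }
  }
}
```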

When Dev Containers are a great choice (and when they’re not)

Dev Containers are a great fit when:

  • You work on multiple projects with different requirements
  • Your team struggles with environment consistency
  • You want Linux parity for CI and containerized deployments
  • You value reproducibility over ad‑hoc local setup

They may not be ideal when:

  • You’re working on very small throwaway scripts
  • You rely heavily on Windows‑only tooling
  • You cannot use Docker at all in your environment

For most professional .NET teams, the benefits far outweigh the cost.

Docker on Windows: a choice you must make early

When starting with Dev Containers on Windows, one of the first decisions you must make is how Docker runs on your machine. Both Docker Desktop and Docker Engine inside WSL work well with Dev Containers, but they serve slightly different needs.

Using Docker Desktop

Docker Desktop is the easiest and most beginner‑friendly way to get started with Dev Containers.

Pros

  • Very quick setup with minimal configuration
  • Comes with a graphical dashboard for containers, images, and logs
  • Integrates smoothly with VS Code and WSL2
  • Easier to troubleshoot when you’re learning

Cons

  • Uses more system resources in the background
  • Runs additional services even when you’re not actively developing
  • May be restricted or licensed differently in some enterprise environments

When to use Docker Desktop

  • You are new to Docker or Dev Containers
  • You want the simplest and fastest setup
  • You value ease of use over fine‑grained control
  • You are working on personal projects or in environments where Docker Desktop is allowed

For most developers starting out with Dev Containers, Docker Desktop is the recommended entry point.

Using Docker Engine inside WSL

This approach installs Docker Engine directly inside a Linux distribution (like Ubuntu) running on WSL2, without Docker Desktop.

Pros

  • Lower resource usage compared to Docker Desktop
  • Linux‑native behavior (closer to CI and production)
  • No dependency on Docker Desktop
  • Often preferred in enterprise or restricted environments

Cons

  • Requires manual installation and configuration
  • Needs basic Linux and WSL knowledge
  • No graphical UI; everything is CLI‑based

When to use Docker Engine in WSL

  • Docker Desktop is not allowed or restricted
  • You want a leaner, Linux‑first workflow
  • You already work mostly inside WSL
  • You want tighter control over your Docker setup

This approach is ideal once you are comfortable with Docker and WSL.

Note: Do not mix Docker Desktop and Docker Engine inside WSL. Pick one approach and stick with it.

Running both at the same time often leads to Docker context confusion and Dev Containers failing in unpredictable ways, even when your configuration looks correct.

A performance tip that makes a huge difference

If you’re using Linux containers with WSL, store your code inside the WSL filesystem.

Recommended: /home/<user>/projects/your-repo

Avoid: /mnt/c/Users/<user>/your-repo

Linux containers accessing Windows files are slower and cause file‑watching issues. Moving the repo into WSL made my Dev Containers feel almost native.
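A quick way to sanity‑check where a repo lives is to look at the path prefix. The helper below is a small illustrative sketch (the paths are hypothetical examples, not from the article's project):

```shell
# Classify a path as WSL-native (fast for Linux containers) or a mounted
# Windows drive under /mnt (slow, and prone to file-watching issues).
check_path() {
  case "$1" in
    /mnt/*)  echo "slow: mounted Windows drive" ;;
    /home/*) echo "fast: WSL-native filesystem" ;;
    *)       echo "unknown" ;;
  esac
}

check_path /home/dev/projects/my-blog-api   # WSL-native location
check_path /mnt/c/Users/dev/my-blog-api     # Windows mount
```

Running `pwd` inside your repo and checking it against these prefixes takes seconds and saves a lot of slow-build debugging later.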

First‑time setup: the simplest way to start

If you’re trying Dev Containers for the first time, follow this exact order:

  1. Install Visual Studio Code
  2. Install the Dev Containers extension
  3. Install Docker Desktop (or Docker Engine in WSL)
  4. Clone your repo inside the WSL filesystem
  5. Open the folder in VS Code
  6. Run “Dev Containers: Reopen in Container”

That’s it. VS Code handles the rest.

Your first .NET Dev Container (hands‑on example)

Tech Stack

  • .NET 8 Web API
  • PostgreSQL 16
  • Entity Framework Core + Npgsql
  • VS Code Dev Containers
  • Docker Compose

Project Structure

my-blog-api/
├─ .devcontainer/
│   └─ devcontainer.json
├─ docker-compose.yml
└─ src/
    └─ BlogApi/
        ├─ Program.cs
        ├─ BlogApi.csproj
        ├─ appsettings.json
        ├─ Models/
        └─ Data/

Step 1: Create the Web API

mkdir my-blog-api 
cd my-blog-api 
mkdir src && cd src 
dotnet new webapi -n BlogApi 
cd BlogApi

Step 2: Add EF Core + PostgreSQL packages

dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL
dotnet add package Microsoft.EntityFrameworkCore.Design

Step 3: Docker Compose (API + PostgreSQL)

Create docker-compose.yml at the repo root:

version: "3.8"

services:
  app:
    image: mcr.microsoft.com/devcontainers/dotnet:1-8.0
    volumes:
      - .:/workspace:cached
    working_dir: /workspace
    command: sleep infinity
    ports:
      - "5000:5000"
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpwd
      POSTGRES_DB: devdb
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"
    depends_on:
      - db

volumes:
  pgdata:
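One caveat: depends_on as written only waits for the db container to start, not for Postgres to accept connections. An optional hardening, sketched below, is to add a healthcheck to the db service (pg_isready ships inside the postgres image) and make app wait on it. Only the new keys are shown; merge them into the services defined above:

```yaml
services:
  db:
    # ...existing image/environment/ports/volumes keys stay as-is...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U devuser -d devdb"]
      interval: 5s
      timeout: 3s
      retries: 10

  app:
    # ...existing keys stay as-is...
    depends_on:
      db:
        condition: service_healthy
```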

Step 4: Dev Container configuration

Create .devcontainer/devcontainer.json:

{
  "name": "dotnet-postgres-devcontainer",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace",
  "shutdownAction": "stopCompose",
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-dotnettools.csdevkit",
        "ms-dotnettools.csharp",
        "ms-azuretools.vscode-docker"
      ]
    }
  },
  "postCreateCommand": "dotnet restore"
}
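Optionally, you can also forward the ports explicitly so VS Code always exposes the API, Postgres, and pgAdmin. forwardPorts is a standard devcontainer.json property; the numbers below match the Compose file from Step 3. Add this key to the file above:

```json
{
  "forwardPorts": [5000, 5432, 5050]
}
```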

Open the folder in VS Code and run:

Dev Containers: Reopen in Container

Step 5: Connection string (container‑to‑container)

Update appsettings.json:

{
  "ConnectionStrings": {
    "DefaultConnection": "Host=db;Port=5432;Database=devdb;Username=devuser;Password=devpwd"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*"
}

Host=db works because Docker Compose provides internal DNS between services.
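To illustrate the container‑to‑container addressing, the Host segment of the connection string is nothing more than the Compose service name. A small illustrative snippet that splits the string into key=value pairs and pulls out the host:

```shell
# Split an Npgsql-style connection string on ';' and extract the Host value.
# Inside the Compose network this is the service name "db", not localhost.
conn='Host=db;Port=5432;Database=devdb;Username=devuser;Password=devpwd'
host=$(printf '%s' "$conn" | tr ';' '\n' | sed -n 's/^Host=//p')
echo "$host"
```

If you ran the API on your host machine instead of inside the Dev Container, this value would have to be localhost, because Compose DNS names only resolve inside the Compose network.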

Step 6: EF Core Model & DbContext

Post entity – Models/Post.cs

namespace BlogApi.Models;

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; } = string.Empty;
    public string Content { get; set; } = string.Empty;
    public DateTime CreatedUtc { get; set; } = DateTime.UtcNow;
}

DbContext – Data/BlogDbContext.cs

using BlogApi.Models;
using Microsoft.EntityFrameworkCore;

namespace BlogApi.Data;

public class BlogDbContext : DbContext
{
    public BlogDbContext(DbContextOptions<BlogDbContext> options) : base(options) { }

    public DbSet<Post> Posts => Set<Post>();
}

Step 7: Program.cs (Minimal CRUD)

Replace Program.cs with:

using BlogApi.Data;
using BlogApi.Models;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDbContext<BlogDbContext>(options =>
    options.UseNpgsql(builder.Configuration.GetConnectionString("DefaultConnection")));

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Apply migrations on startup (dev-only convenience)
using (var scope = app.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<BlogDbContext>();
    db.Database.Migrate();
}

app.UseSwagger();
app.UseSwaggerUI();

app.MapGet("/posts", async (BlogDbContext db) =>
    await db.Posts.OrderByDescending(p => p.CreatedUtc).ToListAsync());

app.MapPost("/posts", async (Post post, BlogDbContext db) =>
{
    db.Posts.Add(post);
    await db.SaveChangesAsync();
    return Results.Created($"/posts/{post.Id}", post);
});

app.MapPut("/posts/{id:int}", async (int id, Post input, BlogDbContext db) =>
{
    var post = await db.Posts.FindAsync(id);
    if (post is null) return Results.NotFound();

    post.Title = input.Title;
    post.Content = input.Content;
    await db.SaveChangesAsync();

    return Results.Ok(post);
});

app.MapDelete("/posts/{id:int}", async (int id, BlogDbContext db) =>
{
    var post = await db.Posts.FindAsync(id);
    if (post is null) return Results.NotFound();

    db.Posts.Remove(post);
    await db.SaveChangesAsync();
    return Results.NoContent();
});

app.Run("http://0.0.0.0:5000");

Step 8: Run migrations (inside Dev Container)

cd src/BlogApi
dotnet tool install --global dotnet-ef
export PATH="$PATH:/home/vscode/.dotnet/tools"

dotnet ef migrations add InitialCreate
dotnet ef database update

Step 9: Run the API

dotnet run

🔗 Open: http://localhost:5000/swagger

Common mistakes and quick fixes

Mistake                  Symptom                  Fix
Mixing Docker models     Random failures          Use only one Docker approach
Code under /mnt/c        Slow builds              Move repo to WSL filesystem
Docker not running       Container won’t start    Check docker info
Pruning first            Issues return            Fix daemon/context first

Common Challenges Faced

  • Multiple Docker engines active simultaneously
    Docker Desktop and Docker Engine inside WSL were both present, causing conflicts.
  • Unstable Docker CLI context
    Docker CLI intermittently pointed to different or broken Docker endpoints.
  • Docker daemon appeared running but was unusable
    Docker commands failed with API errors despite the daemon seeming active.
  • systemd dependency issues inside WSL
    Docker Engine depended on systemd, which was not consistently active after WSL restarts.
  • Dev Containers failing during setup
    VS Code Dev Containers surfaced failures during feature installation and builds.
  • Misleading Docker error messages
    Errors pointed to API or version issues, masking the real root cause.
  • Cache cleanup ineffective
    Pruning images and containers did not resolve underlying daemon issues.
  • Container observability confusion
    PostgreSQL and pgAdmin worked, but container health, volumes, and data locations were unclear.

Solutions & Maintainable Settings

  • Enforce a Single Docker Model. Use either Docker Desktop or native Docker Engine inside WSL, never both.

docker version
docker info

Verify that only one server is shown and, when using native WSL Docker, that there are no references to dockerDesktopLinuxEngine.

  • Explicitly Lock Docker CLI Context. Always verify and set the Docker context before running Compose or Dev Containers.
docker context ls
docker context show
docker context use default

Context must point to the intended daemon (WSL or Desktop)

  • Validate Docker Daemon Health Before Project Start. Confirm Docker is reachable before running Dev Containers or Compose.
docker info
docker ps

Both commands must return without API / 500 / version errors. Do not proceed if they fail.

  • Ensure systemd Is Enabled in WSL. Docker Engine inside WSL depends on systemd.
cat /etc/wsl.conf

Expected:
[boot]
systemd=true
  • If it is not, add those lines, then restart WSL and re‑verify:

wsl --shutdown
systemctl status docker
  • Start Docker Explicitly After WSL Restart. WSL restarts silently stop services.
sudo systemctl start docker
sudo systemctl enable docker

Verify:

docker ps
  • Use Only the WSL‑Native Filesystem for Projects. Keep projects under /home/<user>/... and avoid /mnt/c/... paths; the path should start with /home/.

  • Treat Dev Containers as a Consumer, Not the Fix. Fix Docker issues outside Dev Containers first.

Pre‑check commands (confirm Compose works before opening the Dev Container):

docker compose config
docker compose up -d

  • Keep Dev Container Features Minimal on First Run. Start with the base image and required services only; add features after baseline stability. Verify with:

docker images
docker ps
  • Verify Container Observability Explicitly. Confirm containers are healthy, ports are mapped, and volumes are mounted:

docker ps
docker inspect <container_name>
docker logs <container_name>

Port check:

ss -lntp | grep <port>
  • Avoid Cache Cleanup as a First Fix. Do not rely on prune to fix daemon issues; run it only after the daemon is healthy:

docker system prune -f
docker volume prune -f
  • Establish a “Known‑Good” Baseline Checklist. Validate this sequence before development starts:

wsl --shutdown

# reopen WSL

sudo systemctl start docker
docker context show
docker info
docker compose up -d
code .

# Only then run “Dev Containers: Reopen in Container”


  • If something breaks after opening the Dev Container, stop and remove the containers, then rebuild:
docker ps
docker stop $(docker ps -q)
docker rm -f $(docker ps -aq)
docker ps

Closing Thoughts

Dev Containers shift local development from fragile, machine‑specific setups to reproducible, version‑controlled environments.

With Dev Containers, Docker Compose, PostgreSQL, and pgAdmin, your entire .NET development stack lives inside containers, not on your laptop. SDKs, databases, and tools are isolated, consistent, and easy to rebuild.

When something breaks, you rebuild containers, not machines.

This approach removes onboarding friction, improves Linux parity with CI, and eliminates the classic “works on my machine” problem. Once Docker is stable, Dev Containers become one of the most reliable ways to build modern .NET applications.

Key Takeaways

  • Dev Containers treat the development environment as code
  • .NET, PostgreSQL, and pgAdmin run fully isolated in containers
  • pgAdmin provides clear visibility into database state and migrations
  • Docker stability is a prerequisite; Dev Containers are not a Docker fix
  • Onboarding becomes simple: clone → reopen in container → run
  • Rebuild containers, not laptops