Educator Developer Blog
3 MIN READ

Edge AI for Beginners: Getting Started with Foundry Local

Sharda_Kaur
Iron Contributor
Oct 30, 2025

Edge AI lets you run AI models directly on your device instead of depending on cloud servers. This means faster response times, full privacy, and no internet required.

In Module 08 of the EdgeAI for Beginners course, Microsoft introduces Foundry Local, a toolkit that helps you deploy and test Small Language Models (SLMs) completely offline. In this blog, I’ll share how I installed Foundry Local, ran the Phi-3.5-mini model on my Windows laptop, and what I learned along the way.

What Is Foundry Local?

Foundry Local allows developers to run AI models locally on their own hardware. It supports text generation, summarization, and code completion — all without sending data to the cloud. Unlike cloud-based systems, everything happens on your computer, so your data never leaves your device.

Prerequisites

Before starting, make sure you have:

  • Windows 10 or 11
  • Python 3.10 or newer
  • Git
  • Internet connection (for the first-time model download)
  • Foundry Local installed

Step 1 — Verify Installation

After installing Foundry Local, open Command Prompt and type:

foundry --version

If you see a version number, Foundry Local is installed correctly.

Step 2 — Start the Service

Start the Foundry Local service using:

foundry service start

You should see a confirmation message that the service is running.

Step 3 — List Available Models

To view the models supported by your system, run:

foundry model list

You’ll get a list of locally available SLMs.

Note: Model availability depends on your device’s hardware.
For most laptops, phi-3.5-mini works smoothly on CPU.
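Foundry Local also exposes an OpenAI-compatible REST API, so you can query the same model list from code. The sketch below is an assumption-laden example, not the official SDK: the port in `BASE_URL` varies per machine, so substitute the endpoint printed by `foundry service status`.

```python
import json
import urllib.request

# ASSUMPTION: the service URL differs per machine; replace it with the
# endpoint shown by `foundry service status`.
BASE_URL = "http://localhost:5273/v1"

def model_ids(models_response: dict) -> list[str]:
    """Pull the model ids out of an OpenAI-style /v1/models response."""
    return [entry["id"] for entry in models_response.get("data", [])]

def list_local_models() -> list[str]:
    """Ask the running Foundry Local service which models it offers."""
    with urllib.request.urlopen(f"{BASE_URL}/models") as resp:
        return model_ids(json.load(resp))
```

Calling `list_local_models()` requires the service from Step 2 to be running; `model_ids` just parses the JSON response.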

Step 4 — Run the Phi-3.5 Model

Now let’s start chatting with the model:

foundry model run phi-3.5-mini-instruct-generic-cpu:1

Once it loads, you’ll enter an interactive chat mode.

Try a simple prompt: Hello! What can you do?

The model replies directly from your laptop — no cloud round-trip needed.

To exit, type:

/exit

How It Works

Foundry Local loads the model weights from your device and performs inference locally. This means text generation happens on your CPU (or GPU, if available). The result: complete privacy, no internet dependency, and low-latency responses.
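Because the local service speaks the OpenAI chat-completions protocol, you can also send prompts programmatically instead of using the interactive CLI. This is a minimal sketch, not official sample code: the URL is an assumption (use the endpoint from `foundry service status`), and `ask` is a helper name I made up.

```python
import json
import urllib.request

# ASSUMPTION: replace with the endpoint shown by `foundry service status`.
CHAT_URL = "http://localhost:5273/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for the local service."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str, model: str = "phi-3.5-mini-instruct-generic-cpu:1") -> str:
    """Send one prompt to the locally running model and return its reply."""
    data = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        CHAT_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

`ask("Hello! What can you do?")` only works while the service and model from the steps above are running; `build_chat_request` shows the payload shape on its own.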

Benefits for Students

For students beginning their journey in AI, Foundry Local offers several key advantages:

  • No need for high-end GPUs or expensive cloud subscriptions.
  • Easy setup for experimenting with multiple models.
  • Perfect for class assignments, AI workshops, and offline learning sessions.
  • Promotes a deeper understanding of model behavior by allowing step-by-step local interaction.

These factors make Foundry Local a practical choice for learning environments, especially in universities and research institutions where accessibility and affordability are important.

Why Use Foundry Local?

Running models locally offers several practical benefits compared to using AI Foundry in the cloud.

With Foundry Local, you do not need an internet connection, and all computation happens on your personal machine. This makes it faster for small models and more private, since your data never leaves your device. In contrast, AI Foundry runs entirely in the cloud, requiring internet access and charging based on usage.

For students and developers, Foundry Local is ideal for quick experiments, offline testing, and understanding how models behave in real-time. On the other hand, AI Foundry is better suited for large-scale or production-level scenarios where models need to be deployed at scale.

In summary, Foundry Local provides a flexible and affordable environment for hands-on learning, especially when working with smaller models such as Phi-3, Qwen2.5, or TinyLlama. It allows you to experiment freely, learn efficiently, and better understand the fundamentals of Edge AI development.

Optional: Restart Later

Next time you open your laptop, you don’t have to reinstall anything.
Just run these two commands again:

foundry service start 
foundry model run phi-3.5-mini-instruct-generic-cpu:1
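If you restart often, a tiny launcher script can chain the two commands. This is just a convenience sketch of my own, assuming the `foundry` CLI is on your PATH; `restart_commands` only builds the argument lists, and `restart` runs them in order.

```python
import subprocess

MODEL = "phi-3.5-mini-instruct-generic-cpu:1"

def restart_commands(model: str) -> list[list[str]]:
    """Build the two CLI invocations from the steps above."""
    return [
        ["foundry", "service", "start"],
        ["foundry", "model", "run", model],
    ]

def restart(model: str = MODEL) -> None:
    """Start the service, then open a chat with the chosen model.

    ASSUMPTION: the `foundry` executable is on your PATH.
    """
    for cmd in restart_commands(model):
        subprocess.run(cmd, check=True)
```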

What I Learned

Following the EdgeAI for Beginners Study Guide helped me understand:

  • How edge AI applications work
  • How small models like Phi-3.5 can run on a local machine
  • How to test prompts and build chat apps with zero cloud usage

Conclusion

Running the Phi-3.5-mini model locally with Foundry Local gave me hands-on insight into edge AI. It’s an easy, private, and cost-free way to explore generative AI development. If you’re new to Edge AI, start with the EdgeAI for Beginners course and follow its Study Guide to get comfortable with local inference and small language models.

Updated Oct 29, 2025
Version 1.0