Foundry Local

Edge AI for Student Developers: Learn to Run AI Locally
AI isn’t just for the cloud anymore. With the rise of Small Language Models (SLMs) and powerful local inference tools, developers can now run intelligent applications directly on laptops, phones, and edge devices, no internet required. If you're a student developer curious about building AI that works offline, privately, and fast, Microsoft’s Edge AI for Beginners course is your perfect starting point.

What Is Edge AI?

Edge AI refers to running AI models directly on local hardware, like your laptop, mobile device, or embedded system, without relying on cloud servers. This approach offers:

⚡ Real-time performance
🔒 Enhanced privacy (no data leaves your device)
🌐 Offline functionality
💸 Reduced cloud costs

Whether you're building a chatbot that works without Wi-Fi or optimizing AI for low-power devices, Edge AI is the future of intelligent, responsive apps.

About the Course

Edge AI for Beginners is a free, open-source curriculum designed to help you:

- Understand the fundamentals of Edge AI and local inference
- Explore Small Language Models like Phi-2, Mistral-7B, and Gemma
- Deploy models using tools like Llama.cpp, Olive, MLX, and OpenVINO
- Build cross-platform apps that run AI locally on Windows, macOS, Linux, and mobile

The course is hosted on GitHub and includes hands-on labs, quizzes, and real-world examples. You can fork it, remix it, and contribute to the community.

What You’ll Learn

| Module | Focus |
| --- | --- |
| 01. Introduction | What Edge AI is and why it matters |
| 02. SLMs | Overview of small language models |
| 03. Deployment | Running models locally with various tools |
| 04. Optimization | Speeding up inference and reducing memory |
| 05. Applications | Building real-world Edge AI apps |

Each module is beginner-friendly and includes practical exercises to help you build and deploy your own local AI solutions.

Who Should Join?
- Student developers curious about AI beyond the cloud
- Hackathon participants looking to build offline-capable apps
- Makers and builders interested in privacy-first AI
- Anyone who wants to explore the future of on-device intelligence

No prior AI experience is required, just a willingness to learn and experiment.

Why It Matters

Edge AI is a game-changer for developers. It enables smarter, faster, and more private applications that work anywhere. By learning how to deploy AI locally, you’ll gain skills that are increasingly in demand across industries, from healthcare to robotics to consumer tech. Plus, the course is:

💯 Free and open-source
🧠 Backed by Microsoft’s best practices
🧪 Hands-on and project-based
🌐 Continuously updated

Ready to Start?

Head to aka.ms/edgeai-for-beginners and dive into the modules. Whether you're coding in your dorm room or presenting at your next hackathon, this course will help you build smarter AI apps that run right where you need them: on the edge.

Install and run Azure Foundry Local LLM server & Open WebUI on Windows Server 2025
Foundry Local is an on-device AI inference solution offering performance, privacy, customization, and cost advantages. It integrates seamlessly into your existing workflows and applications through an intuitive CLI, SDK, and REST API. Foundry Local has the following benefits:

- On-Device Inference: Run models locally on your own hardware, reducing your costs while keeping all your data on your device.
- Model Customization: Select from preset models or use your own to meet specific requirements and use cases.
- Cost Efficiency: Eliminate recurring cloud service costs by using your existing hardware, making AI more accessible.
- Seamless Integration: Connect with your applications through an SDK, API endpoints, or the CLI, with easy scaling to Azure AI Foundry as your needs grow.

Foundry Local is ideal for scenarios where:

- You want to keep sensitive data on your device.
- You need to operate in environments with limited or no internet connectivity.
- You want to reduce cloud inference costs.
- You need low-latency AI responses for real-time applications.
- You want to experiment with AI models before deploying to a cloud environment.

You can install Foundry Local by running the following command:

    winget install Microsoft.FoundryLocal

Once Foundry Local is installed, you can download and interact with a model from the command line by using a command like:

    foundry model run phi-4

This downloads the phi-4 model and provides a text-based chat interface.

If you want to interact with Foundry Local through a web chat interface, you can use the open-source Open WebUI project. You can install Open WebUI on Windows Server by performing the following steps:

- Download OpenWebUIInstaller.exe from https://github.com/BrainDriveAI/OpenWebUI_CondaInstaller/releases. You'll get warning messages from Windows Defender SmartScreen.
- Copy OpenWebUIInstaller.exe into C:\Temp.
- In an elevated PowerShell window (the $env:Path assignments below are PowerShell syntax, so use PowerShell rather than cmd.exe), run the following commands:

    winget install -e --id Anaconda.Miniconda3 --scope machine
    $env:Path = 'C:\ProgramData\miniconda3;' + $env:Path
    $env:Path = 'C:\ProgramData\miniconda3\Scripts;' + $env:Path
    $env:Path = 'C:\ProgramData\miniconda3\Library\bin;' + $env:Path
    conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main
    conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r
    conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/msys2
    C:\Temp\OpenWebUIInstaller.exe

- From the installer dialog, choose to install and run Open WebUI.

You then need to take several extra steps to configure Open WebUI to connect to the Foundry Local endpoint.

Enable Direct Connections in Open WebUI:

- Select Settings and Admin Settings in the profile menu.
- Select Connections in the navigation menu.
- Enable Direct Connections by turning on the toggle. This allows users to connect to their own OpenAI-compatible API endpoints.

Connect Open WebUI to Foundry Local:

- Select Settings in the profile menu.
- Select Connections in the navigation menu.
- Select + by Manage Direct Connections.
- For the URL, enter http://localhost:PORT/v1, where PORT is the Foundry Local endpoint port (use the CLI command foundry service status to find it). Note that Foundry Local dynamically assigns a port, so it isn't always the same.
- For the Auth, select None.
- Select Save.

➡️ What is Foundry Local: https://learn.microsoft.com/en-us/azure/ai-foundry/foundry-local/what-is-foundry-local
➡️ Open WebUI: https://docs.openwebui.com/
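Open WebUI isn't the only client that can use that endpoint: since Foundry Local exposes an OpenAI-compatible API, any OpenAI-style client can talk to it directly. The sketch below builds a chat-completion request against the local endpoint using only the Python standard library. The port 5273 is a placeholder assumption, not a fixed value: look up your actual port with foundry service status, just as you did for Open WebUI.

```python
import json
import urllib.request

# Foundry Local assigns its port dynamically; 5273 is an assumed example.
# Run `foundry service status` to find the real port on your machine.
BASE_URL = "http://localhost:5273/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for the local endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        # No Authorization header: as with Open WebUI above, Auth is None.
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("phi-4", "Say hello in one sentence.")
# With the Foundry Local service running, urllib.request.urlopen(req)
# would return the completion as an OpenAI-style JSON response.
```

The same base URL also works with the official openai Python package by pointing its base_url at the local endpoint, which is convenient if you later scale the same code up to Azure AI Foundry.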