From Cloud to Chip: Building Smarter AI at the Edge with Windows AI PCs
As AI engineers, we've spent years optimizing models for the cloud: scaling inference, wrangling latency, and chasing compute across clusters. But the frontier is shifting. With the rise of Windows AI PCs and powerful local accelerators, the edge is no longer a constraint; it's a canvas. Whether you're deploying vision models to industrial cameras, optimizing speech interfaces for offline assistants, or building privacy-preserving apps for healthcare, Edge AI is where real-world intelligence meets real-time performance.

Why Edge AI, Why Now?

Edge AI isn't just about running models locally; it's about rethinking the entire lifecycle:
- Latency: Decisions in milliseconds, not round-trips to the cloud.
- Privacy: Sensitive data stays on-device, enabling HIPAA/GDPR compliance.
- Resilience: Offline-first apps that don't break when the network does.
- Cost: Reduced cloud compute and bandwidth overhead.

With Windows AI PCs powered by Intel and Qualcomm NPUs, and tools like ONNX Runtime, DirectML, and Olive, developers can now optimize and deploy models with unprecedented efficiency.

What You'll Learn in Edge AI for Beginners

The Edge AI for Beginners curriculum is a hands-on, open-source guide designed for engineers ready to move from theory to deployment.

Multi-Language Support: the content is available in over 48 languages, so you can read and study in your native language.

What You'll Master: this course takes you from fundamental concepts to production-ready implementations, covering:
- Small Language Models (SLMs) optimized for edge deployment
- Hardware-aware optimization across diverse platforms
- Real-time inference with privacy-preserving capabilities
- Production deployment strategies for enterprise applications

Why EdgeAI Matters: Edge AI represents a paradigm shift that addresses critical modern challenges:
- Privacy & Security: Process sensitive data locally without cloud exposure
- Real-time Performance: Eliminate network latency for time-critical applications
- Cost Efficiency: Reduce bandwidth and cloud computing expenses
- Resilient Operations: Maintain functionality during network outages
- Regulatory Compliance: Meet data sovereignty requirements

Edge AI

Edge AI refers to running AI algorithms and language models locally on hardware, close to where data is generated, without relying on cloud resources for inference. It reduces latency, enhances privacy, and enables real-time decision-making.

Core principles:
- On-device inference: AI models run on edge devices (phones, routers, microcontrollers, industrial PCs)
- Offline capability: Functions without persistent internet connectivity
- Low latency: Immediate responses suited for real-time systems
- Data sovereignty: Keeps sensitive data local, improving security and compliance

Small Language Models (SLMs)

SLMs like Phi-4, Mistral-7B, Qwen, and Gemma are compact models, trained or distilled from larger LLMs, offering:
- Reduced memory footprint: Efficient use of limited edge-device memory
- Lower compute demand: Optimized for CPU and edge-GPU performance
- Faster startup times: Quick initialization for responsive applications

They unlock powerful NLP capabilities while meeting the constraints of:
- Embedded systems: IoT devices and industrial controllers
- Mobile devices: Smartphones and tablets with offline capabilities
- IoT devices: Sensors and smart devices with limited resources
- Edge servers: Local processing units with limited GPU resources
- Personal computers: Desktop and laptop deployment scenarios
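To make "reduced memory footprint" concrete, here is back-of-the-envelope arithmetic (the parameter count and bit widths are illustrative; real memory use also includes activations and the KV cache):

```python
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight-storage size in GB, ignoring activations and KV cache."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{model_size_gb(7, bits):.1f} GB")
# 16-bit: ~14.0 GB, 8-bit: ~7.0 GB, 4-bit: ~3.5 GB. Quantization is the
# difference between needing a datacenter GPU and fitting in laptop RAM.
```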
Course Modules & Navigation

Course duration: 10 hours of content.

| Module | Topic | Focus Area | Key Content | Level | Duration |
|---|---|---|---|---|---|
| 📖 00 | Introduction to EdgeAI | Foundation & Context | EdgeAI Overview • Industry Applications • SLM Introduction • Learning Objectives | Beginner | 1-2 hrs |
| 📚 01 | EdgeAI Fundamentals | Cloud vs Edge AI comparison | EdgeAI Fundamentals • Real World Case Studies • Implementation Guide • Edge Deployment | Beginner | 3-4 hrs |
| 🧠 02 | SLM Model Foundations | Model families & architecture | Phi Family • Qwen Family • Gemma Family • BitNET • μModel • Phi-Silica | Beginner | 4-5 hrs |
| 🚀 03 | SLM Deployment Practice | Local & cloud deployment | Advanced Learning • Local Environment • Cloud Deployment | Intermediate | 4-5 hrs |
| ⚙️ 04 | Model Optimization Toolkit | Cross-platform optimization | Introduction • Llama.cpp • Microsoft Olive • OpenVINO • Apple MLX • Workflow Synthesis | Intermediate | 5-6 hrs |
| 🔧 05 | SLMOps Production | Production operations | SLMOps Introduction • Model Distillation • Fine-tuning • Production Deployment | Advanced | 5-6 hrs |
| 🤖 06 | AI Agents & Function Calling | Agent frameworks & MCP | Agent Introduction • Function Calling • Model Context Protocol | Advanced | 4-5 hrs |
| 💻 07 | Platform Implementation | Cross-platform samples | AI Toolkit • Foundry Local • Windows Development | Advanced | 3-4 hrs |
| 🏭 08 | Foundry Local Toolkit | Production-ready samples | Sample applications (see details below) | Expert | 8-10 hrs |

Each module includes Jupyter notebooks, code samples, and deployment walkthroughs, perfect for engineers who learn by doing.

Developer Highlights
- 🔧 Olive: Microsoft's optimization toolchain for quantization, pruning, and acceleration.
- 🧩 ONNX Runtime: Cross-platform inference engine with support for CPU, GPU, and NPU.
- 🎮 DirectML: GPU-accelerated ML API for Windows, ideal for gaming and real-time apps.
- 🖥️ Windows AI PCs: Devices with built-in NPUs for low-power, high-performance inference.

Local AI: Beyond the Edge

Local AI isn't just about inference; it's about autonomy. Imagine agents that:
- Learn from local context
- Adapt to user behavior
- Respect privacy by design

With tools like Agent Framework, Azure AI Foundry, Windows Copilot Studio, and Foundry Local, developers can orchestrate local agents that blend LLMs, sensors, and user preferences, all without cloud dependency.

Try It Yourself

Ready to get started? Clone the Edge AI for Beginners GitHub repo, run the notebooks, and deploy your first model to a Windows AI PC or IoT device. Whether you're building smart kiosks, offline assistants, or industrial monitors, this curriculum gives you the scaffolding to go from prototype to production. A quick taste of the local-inference workflow is sketched below.
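A minimal sketch, not from the curriculum itself: running an ONNX model on a Windows AI PC through ONNX Runtime's DirectML execution provider. The model path and input shape are placeholders for your own exported model; install with `pip install onnxruntime-directml numpy`.

```python
import numpy as np
import onnxruntime as ort

# DirectML first, with CPU as a fallback if no compatible GPU/NPU is available
session = ort.InferenceSession(
    "model.onnx",  # placeholder: any ONNX model exported for your task
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed image-like input
outputs = session.run(None, {input_name: x})
print("Active provider:", session.get_providers()[0], "| output shape:", outputs[0].shape)
```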
Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour

Unlock the power of Microsoft AI with redeliverable technical presentations, hands-on workshops, and open-source curriculum from the Microsoft AI Tour! Whether you're a Microsoft MVP, developer, or IT professional, these expertly crafted resources empower you to teach, train, and lead AI adoption in your community. Explore top breakout sessions covering GitHub Copilot, Azure AI, generative AI, and security best practices, designed to simplify AI integration and accelerate digital transformation. Dive into interactive workshops that provide real-world applications of AI technologies. Take it a step further with Microsoft's open-source AI curriculum, offering beginner-friendly courses on AI, machine learning, data science, cybersecurity, and GitHub Copilot, perfect for upskilling teams and fostering innovation. Don't just learn: lead. Access these resources, host impactful training sessions, and drive AI adoption in your organization. Start sharing today! Explore now: Microsoft AI Tour Resources.
Join Us for a Technical Deep Dive and Q&A on Foundry Local: LLMs on Device

Join us for an Ask Me Anything with the Foundry Local team on October 14th, 2025! Discover how Foundry Local is redefining edge AI with powerful features like on-device inference, enabling you to run models directly on your hardware, cutting costs and keeping your data secure. Whether you're customizing models to fit unique use cases or integrating seamlessly via SDKs, APIs, or the CLI, Foundry Local offers scalable pathways to Azure AI Foundry as your needs evolve. It's the perfect solution for environments with limited connectivity, sensitive data requirements, low-latency demands, or early-stage experimentation before cloud deployment. If you're building smarter, leaner, and more private AI workflows, this AMA is your chance to dive deep with the team behind it all.

What is Foundry Local?

Foundry Local is an on-device AI inference solution offering performance, privacy, customization, and cost advantages. It provides a curated collection of production-quality tools, including evaluation and prompt engineering capabilities, that are fully compatible with Azure AI, allowing a seamless transition of your work from your local environment to the cloud. It integrates into your existing workflows and applications through an intuitive CLI, SDK, and REST API.

Key features:
- On-Device Inference: Run models locally on your own hardware, reducing costs while keeping all your data on your device.
- Model Customization: Select from preset models or use your own to meet specific requirements and use cases.
- Cost Efficiency: Eliminate recurring cloud service costs by using your existing hardware, making AI more accessible.
- Seamless Integration: Connect with your applications through an SDK, API endpoints, or the CLI, with easy scaling to Azure AI Foundry as your needs grow.

How to Join

Register to join the Azure AI Foundry Discord community event: October 14th, 2025, 9:00 AM Pacific Time (UTC−07:00).

Unlock Accelerated Local LLM Development

Discover how Foundry Local can enhance your development process and explore the possibilities for building robust LLM applications. Whether you're a seasoned AI developer or just getting started, this session is your chance to get hands-on insights into the innovative world of Azure AI Foundry. Don't miss this opportunity to connect with our experts and enhance your understanding of local LLM development.

Event highlights:
- An in-depth overview of the Foundry Local CLI and SDK
- Interactive demo with step-by-step examples
- Best practices for local AI inference and models
- Transitioning your local development to cloud solutions, or vice versa

Why attend?
- Gain expert insights into Foundry Local, and ask questions about using it.
- Network with fellow AI professionals and developers in the Azure AI Foundry community.
- Enhance your AI development skills with practical examples.
- Stay at the forefront of LLM application development.

Speaker: Maanav Dalal, Product Manager, Foundry Local, Microsoft. Maanav is a PM on the AI Frameworks team. He's super inquisitive about the ways you use AI in daily life, so feel free to strike up a conversation with him about that. LinkedIn Profile
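If you want to kick the tires before the AMA, here is a minimal sketch based on the published Foundry Local Python SDK quickstart. The model alias is an assumption; run `foundry model list` in your terminal to see what is available on your machine, and check the Foundry Local docs for current package and method names.

```python
# pip install foundry-local-sdk openai
from foundry_local import FoundryLocalManager
from openai import OpenAI

alias = "phi-3.5-mini"  # assumed alias; substitute one from `foundry model list`

# Starts the local service if needed, then downloads and loads the model
manager = FoundryLocalManager(alias)

# Foundry Local exposes an OpenAI-compatible endpoint, so the OpenAI client works
client = OpenAI(base_url=manager.endpoint, api_key=manager.api_key)
response = client.chat.completions.create(
    model=manager.get_model_info(alias).id,
    messages=[{"role": "user", "content": "Why run LLMs on-device?"}],
)
print(response.choices[0].message.content)
```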
JS AI Build-a-thon: Wrapping Up an Epic June 2025!

After weeks of building, testing, and learning, we're officially wrapping up the first-ever JS AI Build-a-thon 🎉. This wasn't your average coding challenge. This was a hands-on journey where JavaScript and TypeScript developers dove deep into real-world AI concepts, from local GenAI prototyping to building intelligent agents and deploying production-ready apps. Whether you joined from the start or hopped on midway, you built something that matters, and that's worth celebrating.

Replay the Journey

No worries if you joined late or want to revisit any part of the journey. The JS AI Build-a-thon was designed to let you learn at your own pace, so whether you're starting now or polishing up your final project, here's your complete quest map:
- Build-a-thon setup guide: https://aka.ms/JSAIBuildathonSetup
- Quest 1: 🔧 Build your first GenAI app locally with GitHub Models 👉🏽 https://aka.ms/JSAIBuildathonQuest1
- Quest 2: ☁️ Move your AI prototype to Azure AI Foundry 👉🏽 https://aka.ms/JSAIBuildathonQuest
- Quest 3: 🎨 Add a chat UI using Vite + Lit 👉🏽 https://aka.ms/JSAIBuildathonQuest3
- Quest 4: 📄 Enhance your app with RAG (Chat with Your Data) 👉🏽 https://aka.ms/JSAIBuildathonQuest4
- Quest 5: 🧠 Add memory and context to your AI app 👉🏽 https://aka.ms/JSAIBuildathonQuest5
- Quest 6: ⚙️ Build your first AI Agent using AI Foundry 👉🏽 https://aka.ms/JSAIBuildathonQuest6
- Quest 7: 🧩 Equip your agent with tools from an MCP server 👉🏽 https://aka.ms/JSAIBuildathonQuest7
- Quest 8: 💬 Ground your agent with real-time search using Bing 👉🏽 https://aka.ms/JSAIBuildathonQuest8
- Quest 9: 🚀 Build a real-world AI project with full-stack templates 👉🏽 https://aka.ms/JSAIBuildathonQuest9
- Our space in the AI Discord community: https://aka.ms/JSAIonDiscord

Project Submission Guidelines

📌 Quest 9 is where it all comes together. Participants chose a problem, picked a template, customized it, submitted it, and rallied their community for support!

🏅 Claim Your Badge!

Whether you completed select quests or went all the way, we celebrate your learning. If you participated in the June 2025 JS AI Build-a-thon, make sure to submit the participation form to receive your participation badge recognizing your commitment to upskilling in AI with JavaScript/TypeScript.

What's Next?

We're not done. In fact, we're just getting started. We're already cooking up JS AI Build-a-thon v2, which will introduce:
- Running everything locally with Foundry Local
- Real-world RAG with vector databases
- Advanced agent patterns with remote MCPs
- And much more, based on your feedback

Want to shape what comes next? Drop your ideas in the participation form and in our Discord. In the meantime, add these resources to your JavaScript + AI dev pack:
- 🔗 Microsoft for JavaScript developers
- 📚 Generative AI for Beginners with JavaScript

Wrap-Up

This build-a-thon showed what's possible when developers are empowered to learn by doing. You didn't just follow tutorials; you shipped features, connected services, and created working AI experiences. We can't wait to see what you build next.
👉 Bookmark the repo
👉 Join the community on the Azure AI Foundry Discord server
👉 Stay building

Until next time: keep coding, keep shipping!
Exploring Azure AI Model Inference: A Comprehensive Guide

Azure AI model inference provides access to a wide range of flagship models from leading providers such as AI21 Labs, Azure OpenAI, Cohere, Core42, DeepSeek, Meta, Microsoft, Mistral AI, and NTT Data (full catalog: https://learn.microsoft.com/azure/ai-foundry/model-inference/concepts/models). These models can be consumed as APIs, allowing you to integrate advanced AI capabilities into your applications seamlessly.

Model Families and Their Capabilities

Azure AI Foundry categorises its models into several families, each offering unique capabilities:
- AI21 Labs: Known for the Jamba family, production-grade large language models (LLMs) using AI21's hybrid Mamba-Transformer architecture. These models support chat completions, tool calling, and multiple languages including English, French, Spanish, Portuguese, German, Arabic, and Hebrew.
- Azure OpenAI: Offers diverse models designed for tasks such as reasoning, problem-solving, natural language understanding, and code generation. These models support text and image inputs, multiple languages, and tool calling.
- Cohere: Provides models for embedding and command tasks, supporting multilingual capabilities and various response formats.
- Core42: Features the Jais-30B-chat model, designed for chat completions.
- DeepSeek: Includes models like DeepSeek-V3 and DeepSeek-R1, focusing on advanced AI tasks.
- Meta: Offers the Llama series models, which are instruction-tuned for various AI tasks.
- Microsoft: Provides the Phi series models, supporting multimodal instructions and vision tasks.
- Mistral AI: Features models like Ministral-3B and Mistral-large, designed for high-performance AI tasks.
- NTT Data: Offers the Tsuzumi-7b model, focusing on specific AI capabilities.

Deployment and Integration

Azure AI model inference supports global standard deployment, ensuring consistent throughput and performance. Models can be deployed in various configurations, including regional deployments and sovereign clouds such as Azure Government, Azure Germany, and Azure China.

To integrate these models into your applications, you can use the Azure AI model inference API, which supports multiple programming languages including Python, C#, JavaScript, and Java. This flexibility allows you to deploy models multiple times under different configurations, providing a robust and scalable solution for your AI needs (overview: https://learn.microsoft.com/en-us/azure/ai-foundry/model-inference/overview). A minimal Python example follows.
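As a taste of the API, here is a minimal sketch using the `azure-ai-inference` Python package. The endpoint, key, and model name are placeholders for your own Azure AI Foundry deployment.

```python
# pip install azure-ai-inference
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                  # placeholder
)

response = client.complete(
    model="Phi-4",  # placeholder: any model deployed to your resource
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize what Azure AI model inference offers."),
    ],
)
print(response.choices[0].message.content)
```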
Conclusion

Azure AI model inference in Azure AI Foundry offers a comprehensive solution for integrating advanced AI models into your applications. With a wide range of models from leading providers, flexible deployment options, and robust API support, Azure AI Foundry empowers you to leverage cutting-edge AI capabilities without the complexity of hosting and managing the infrastructure. Explore the Azure AI model catalog today and unlock the potential of AI for your business.

Join the Conversation on Azure AI Foundry Discussions! Have ideas, questions, or insights about AI? Don't keep them to yourself! Share your thoughts, engage with experts, and connect with a community that's shaping the future of artificial intelligence. 👉 Click here to join the discussion!
Make Phi-4-mini-reasoning more powerful with industry reasoning on edge devices

In situations with limited compute, Phi-4-mini-reasoning is an excellent model choice. We can use Microsoft Olive or the Apple MLX framework to quantize Phi-4-mini-reasoning and deploy it on edge devices such as IoT hardware, laptops, and mobile devices.

Quantization

Because the full model can be difficult to deploy directly on specific hardware, we reduce its complexity through model quantization. The quantization process inevitably costs some precision.

Quantize Phi-4-mini-reasoning using Microsoft Olive

Microsoft Olive is an AI model optimization toolkit for ONNX Runtime. Given a model and target hardware, Olive (short for Onnx LIVE) will combine the most appropriate optimization techniques to output the most efficient ONNX model for inference in the cloud or on the edge. We can combine Microsoft Olive with Phi-4-mini-reasoning from the model catalog on Azure AI Foundry to quantize Phi-4-mini-reasoning to an ONNX-format model.

Create your notebook on Azure ML, then install Microsoft Olive:

```bash
pip install git+https://github.com/Microsoft/Olive.git
```

Quantize using Microsoft Olive:

```bash
olive auto-opt \
    --model_name_or_path {Azure Model Catalog path, such as azureml://registries/azureml/models/Phi-4-mini-reasoning/versions/1} \
    --device cpu \
    --provider CPUExecutionProvider \
    --use_model_builder \
    --precision int4 \
    --output_path ./phi-4-14b-reasoninig-onnx \
    --log_level 1
```

Register your quantized model in your Azure ML workspace so it can be versioned and downloaded later.
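A minimal registration sketch, assuming the Azure ML Python SDK v2 (`azure-ai-ml`); the workspace identifiers are placeholders, and the model name matches the one used in the download step below.

```python
# pip install azure-ai-ml azure-identity
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",      # placeholder
    resource_group_name="<resource-group>",   # placeholder
    workspace_name="<workspace>",             # placeholder
)

ml_client.models.create_or_update(
    Model(
        path="./phi-4-14b-reasoninig-onnx",   # Olive's --output_path from the step above
        name="phi-4-mini-onnx-int4-cpu",      # referenced again when downloading below
        type="custom_model",
        description="Phi-4-mini-reasoning quantized to INT4 ONNX with Olive",
    )
)
```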
Download to local and run

Download the ONNX model to your local device:

```python
ml_client.models.download("phi-4-mini-onnx-int4-cpu", 1)
```

Running the ONNX model with onnxruntime-genai

Install onnxruntime-genai (this is the CPU version):

```bash
pip install onnxruntime-genai
```

Run it:

```python
import onnxruntime_genai as og

model_folder = "Your ONNX Model Path"

# Load the model and set up a streaming tokenizer
model = og.Model(model_folder)
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()

search_options = {}
search_options['max_length'] = 32768

chat_template = "<|user|>{input}<|end|><|assistant|>"
text = 'A school arranges dormitories for students. If each dormitory accommodates 5 people, 4 people cannot live there; if each dormitory accommodates 6 people, one dormitory only has 4 people, and two dormitories are empty. Find the number of students in this grade and the number of dormitories.'
prompt = f'{chat_template.format(input=text)}'
input_tokens = tokenizer.encode(prompt)

params = og.GeneratorParams(model)
params.set_search_options(**search_options)
generator = og.Generator(model, params)
generator.append_tokens(input_tokens)

# Stream tokens to stdout as they are generated
while not generator.is_done():
    generator.generate_next_token()
    new_token = generator.get_next_tokens()[0]
    print(tokenizer_stream.decode(new_token), end='', flush=True)
```

Get the notebook from the Phi Cookbook: https://aka.ms/phicookbook

Quantize the Phi-4-mini-reasoning model using Apple MLX

Install the Apple MLX framework:

```bash
pip install -U mlx-lm
```

Convert the Phi-4-mini-reasoning model through Apple MLX quantization:

```bash
python -m mlx_lm.convert --hf-path {Phi-4-mini-reasoning Hugging Face id} -q
```

Run Phi-4-mini-reasoning with Apple MLX in the terminal:

```bash
python -m mlx_lm.generate --model ./mlx_model --max-token 2048 --prompt "A school arranges dormitories for students. If each dormitory accommodates 5 people, 4 people cannot live there; if each dormitory accommodates 6 people, one dormitory only has 4 people, and two dormitories are empty. Find the number of students in this grade and the number of dormitories." --extra-eos-token "<|end|>" --temp 0.0
```

Fine-tuning

We can fine-tune on chain-of-thought (CoT) data from different scenarios to give Phi-4-mini-reasoning reasoning capabilities for those scenarios. Here we use medical CoT data from a public Hugging Face dataset as our example (this is just an example; if you need rigorous medical reasoning, please seek more professional data support). We can fine-tune our CoT data in Azure ML.

Fine-tune Phi-4-mini-reasoning using Microsoft Olive in Azure ML

Note: please use Standard_NC24ads_A100_v4 to run this sample.

Get the data from Hugging Face datasets:

```bash
pip install datasets
```

Run this script to build the training data (the prompt template wraps question, chain of thought, and answer, matching the MLX variant later in this post):

```python
from datasets import load_dataset

prompt_template = """<|user|>{}<|end|><|assistant|><think>{}</think>{}<|end|>"""

def formatting_prompts_func(examples):
    inputs = examples["Question"]
    cots = examples["Complex_CoT"]
    outputs = examples["Response"]
    texts = []
    for input, cot, output in zip(inputs, cots, outputs):
        text = prompt_template.format(input, cot, output) + "<|end|>"
        # text = prompt_template.format(input, cot, output) + "<|endoftext|>"
        texts.append(text)
    return {"text": texts}

# Create the English dataset
dataset = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train", trust_remote_code=True)
dataset = dataset.map(formatting_prompts_func, batched=True, remove_columns=["Question", "Complex_CoT", "Response"])
dataset.to_json("en_dataset.jsonl")
```

Fine-tune with Microsoft Olive:

```bash
olive finetune \
    --method lora \
    --model_name_or_path {Azure Model Catalog path, e.g. azureml://registries/azureml/models/Phi-4-mini-reasoning/versions/1} \
    --trust_remote_code \
    --data_name json \
    --data_files ./en_dataset.jsonl \
    --train_split "train[:16000]" \
    --eval_split "train[16000:19700]" \
    --text_field "text" \
    --max_steps 100 \
    --logging_steps 10 \
    --output_path {Your fine-tuning save path} \
    --log_level 1
```

Convert the fine-tuned model to ONNX with Microsoft Olive:

```bash
olive capture-onnx-graph \
    --model_name_or_path {Azure Model Catalog path, e.g. azureml://registries/azureml/models/Phi-4-mini-reasoning/versions/1} \
    --adapter_path {Your fine-tuning adapter path} \
    --use_model_builder \
    --output_path {Your save onnx path} \
    --log_level 1

olive generate-adapter \
    --model_name_or_path {Your save onnx path} \
    --output_path {Your save onnx adapter path} \
    --log_level 1
```

Run the model with onnxruntime-genai-cuda

Install the onnxruntime-genai CUDA package.
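Assuming the CUDA build of ONNX Runtime GenAI published alongside the CPU package, the install is:

```bash
pip install onnxruntime-genai-cuda
```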
"./models/phi-4-mini-reasoning/adapter-onnx/model/" model = og.Model(model_folder) adapters = og.Adapters(model) adapters.load('./models/phi-4-mini-reasoning/adapter-onnx/model/adapter_weights.onnx_adapter', "en_medical_reasoning") tokenizer = og.Tokenizer(model) tokenizer_stream = tokenizer.create_stream() search_options = {} search_options['max_length'] = 200 search_options['past_present_share_buffer'] = False search_options['temperature'] = 1 search_options['top_k'] = 1 prompt_template = """<|user|>{}<|end|><|assistant|><think>""" question = """ A 33-year-old woman is brought to the emergency department 15 minutes after being stabbed in the chest with a screwdriver. Given her vital signs of pulse 110\/min, respirations 22\/min, and blood pressure 90\/65 mm Hg, along with the presence of a 5-cm deep stab wound at the upper border of the 8th rib in the left midaxillary line, which anatomical structure in her chest is most likely to be injured? """ prompt = prompt_template.format(question, "") input_tokens = tokenizer.encode(prompt) params = og.GeneratorParams(model) params.set_search_options(**search_options) generator = og.Generator(model, params) generator.set_active_adapter(adapters, "en_medical_reasoning") generator.append_tokens(input_tokens) while not generator.is_done(): generator.generate_next_token() new_token = generator.get_next_tokens()[0] print(tokenizer_stream.decode(new_token), end='', flush=True) inference model with onnxruntime-genai cuda olive finetune \ --method lora \ --model_name_or_path {Azure Model Catalog path , azureml://registries/azureml/models/Phi-4-mini-reasoning/versions/1} \ --trust_remote_code \ --data_name json \ --data_files ./en_dataset.jsonl \ --train_split "train[:16000]" \ --eval_split "train[16000:19700]" \ --text_field "text" \ --max_steps 100 \ --logging_steps 10 \ --output_path {Your fine-tuning save path} \ --log_level 1 Fine-tune Phi-4-mini-reasoning using Apple MLX locally on MacOS Note- we recommend that you use devices with a minimum of 64GB Memory and Apple Silicon devices Get the DataSet from Hugging face datasets pip install datasets run this script to get train and valid data from datasets import load_dataset prompt_template = """<|user|>{}<|end|><|assistant|><think>{}</think>{}<|end|>""" def formatting_prompts_func(examples): inputs = examples["Question"] cots = examples["Complex_CoT"] outputs = examples["Response"] texts = [] for input, cot, output in zip(inputs, cots, outputs): # text = prompt_template.format(input, cot, output) + "<|end|>" text = prompt_template.format(input, cot, output) + "<|endoftext|>" texts.append(text) return { "text": texts, } dataset = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT","en", trust_remote_code=True) split_dataset = dataset["train"].train_test_split(test_size=0.2, seed=200) train_dataset = split_dataset['train'] validation_dataset = split_dataset['test'] train_dataset = train_dataset.map(formatting_prompts_func, batched = True,remove_columns=["Question", "Complex_CoT", "Response"]) train_dataset.to_json("./data/train.jsonl") validation_dataset = validation_dataset.map(formatting_prompts_func, batched = True,remove_columns=["Question", "Complex_CoT", "Response"]) validation_dataset.to_json("./data/valid.jsonl") Fine-tuning with Apple MLX python -m mlx_lm.lora --model ./phi-4-mini-reasoning --train --data ./data --iters 100 Running the model ! 
```bash
python -m mlx_lm.generate --model ./phi-4-mini-reasoning --adapter-path ./adapters --max-token 4096 --prompt "A 54-year-old construction worker with a long history of smoking presents with swelling in his upper extremity and face, along with dilated veins in this region. After conducting a CT scan and venogram of the neck, what is the most likely diagnosis for the cause of these symptoms?" --extra-eos-token "<|end|>"
```

Get the notebook from the Phi Cookbook: https://aka.ms/phicookbook

We hope this sample has inspired you to use Phi-4-mini-reasoning and Phi-4-reasoning for industry reasoning in your own scenarios.

Related resources
- Phi-4-mini-reasoning tech report: https://aka.ms/phi4-mini-reasoning/techreport
- Phi-4-Mini-Reasoning technical report: microsoft/Phi-4-mini-reasoning
- Phi-4-mini-reasoning on Azure AI Foundry: https://aka.ms/phi4-mini-reasoning/azure
- Phi-4 reasoning blog: https://aka.ms/phi4-mini-reasoning/blog
- Phi Cookbook: https://aka.ms/phicookbook
- Showcasing Phi-4-Reasoning: A Game-Changer for AI Developers | Microsoft Community Hub

Models
- Phi-4 Reasoning: https://huggingface.co/microsoft/Phi-4-reasoning
- Phi-4 Reasoning Plus: https://huggingface.co/microsoft/Phi-4-reasoning-plus
- Phi-4-mini-reasoning on Hugging Face: https://aka.ms/phi4-mini-reasoning/hf
- Phi-4-mini-reasoning on Azure AI Foundry: https://aka.ms/phi4-mini-reasoning/azure
- Microsoft models on Hugging Face
- Phi-4 reasoning models in Azure AI Foundry Models
- Phi models at Azure AI Foundry Models
- Phi models on Hugging Face
- Phi models on GitHub Marketplace Models
Build AI Agents with MCP Tool Use in Minutes with AI Toolkit for VSCode

We're excited to announce Agent Builder, the newest evolution of what was formerly known as Prompt Builder, now reimagined and supercharged for intelligent app development. This powerful tool in AI Toolkit enables you to create, iterate, and optimize agents, from prompt engineering to tool integration, all in one seamless workflow. Whether you're designing simple chat interactions or complex task-performing agents with tool access, Agent Builder simplifies the journey from idea to integration.

Why Agent Builder?

Agent Builder is designed to empower developers and prompt engineers to:
- 🚀 Generate starter prompts with natural language
- 🔁 Iterate and refine prompts based on model responses
- 🧩 Break down tasks with prompt chaining and structured outputs
- 🧪 Test integrations with real-time runs and tool use, such as MCP servers
- 💻 Generate production-ready code for rapid app development

And a lot of features are coming soon; stay tuned for:
- 📝 Variables in prompts
- 🧪 Agent runs with test cases, to test your agent easily
- 📊 Evaluation of your agent's accuracy and performance with built-in or custom metrics
- ☁️ Agent deployment to the cloud

Build Smart Agents with Tool Use (MCP Servers)

Agents can now connect to external tools through MCP (Model Context Protocol) servers, enabling them to perform real-world actions like querying a database, accessing APIs, or executing custom logic.

Connect to an Existing MCP Server

To use an existing MCP server in Agent Builder:
1. In the Tools section, select + MCP Server.
2. Choose a connection type: Command (stdio), which runs a local command that implements the MCP protocol, or HTTP (server-sent events), which connects to a remote server implementing the MCP protocol.
3. If the MCP server supports multiple tools, select the specific tool you want to use.
4. Enter your prompts and click Run to test the agent's interaction with the tool.

This integration allows your agents to fetch live data or trigger custom backend services as part of the conversation flow.

Build and Scaffold a New MCP Server

Want to create your own tool? Agent Builder helps you scaffold a new MCP server project:
1. In the Tools section, select + MCP Server.
2. Choose MCP server project.
3. Select your preferred programming language: Python or TypeScript.
4. Pick a folder to create your server project.
5. Name your project and click Create.

Agent Builder generates a scaffolded implementation of the MCP protocol that you can extend. Use the built-in VS Code debugger: press F5 or click Debug in Agent Builder, then test with prompts like:

System: You are a weather forecast professional that can tell weather information based on given location.
User: What is the weather in Shanghai?

Agent Builder will automatically connect to your running server and show the response, making it easy to test and refine the tool-agent interaction.
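For a feel of what a scaffolded Python server boils down to, here is a minimal sketch using the official MCP Python SDK's `FastMCP` helper; the weather logic is a canned placeholder rather than anything Agent Builder generates verbatim.

```python
# pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")  # server name shown to connecting clients

@mcp.tool()
def get_weather(location: str) -> str:
    """Return a weather report for the given location."""
    # A real server would call a weather API here; this is a canned response.
    return f"Sunny and 24°C in {location}."

if __name__ == "__main__":
    # stdio transport matches Agent Builder's "Command (stdio)" connection type
    mcp.run(transport="stdio")
```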
AI Sparks: From Prototype to Production with AI Toolkit

Building AI-powered applications from scratch or infusing intelligence into existing systems? AI Sparks is your go-to webinar series for mastering the AI Toolkit (AITK), from foundational concepts to cutting-edge techniques. In this bi-weekly, hands-on series, we cover:
- 🚀 SLMs & local models: test and deploy AI models and applications efficiently on your own terms, locally, to edge devices, or to the cloud
- 🔍 Embedding models & RAG: supercharge retrieval for smarter applications using existing data
- 🎨 Multi-modal AI: work with images, text, and beyond
- 🤖 Agentic frameworks: build autonomous, decision-making AI systems

Watch the series on demand.

Share Your Feedback

Get started with the latest version, share your feedback, and let us know how these new features help you in your AI development journey. As always, we're here to listen, collaborate, and grow alongside our amazing user community. Thank you for being a part of this journey; let's build the future of AI together! Join our Microsoft Azure AI Foundry Discord channel to continue the discussion 🚀
Week 3: Microsoft Agents Hack Online Events and Readiness Resources

Readiness and skilling events for Week 3 of the Microsoft AI Agents Hack. Register now at https://aka.ms/agentshack

2025 is the year of AI agents! But what exactly is an agent, and how can you build one? Whether you're a seasoned developer or just starting out, this FREE three-week virtual hackathon is your chance to dive deep into AI agent development. 🔥 Learn from expert-led sessions streamed live on YouTube, covering top frameworks like Semantic Kernel, AutoGen, the new Azure AI Agents SDK, and the Microsoft 365 Agents SDK.

Week 3: April 21st-25th, live and on demand

| Day/Time | Topic | Track |
|---|---|---|
| 4/21 12:00 PM PT | Knowledge-augmented agents with LlamaIndex.TS | JS |
| 4/22 06:00 AM PT | Building an AI Agent with Prompty and Azure AI Foundry | Python |
| 4/22 09:00 AM PT | Real-time Multi-Agent LLM solutions with SignalR, gRPC, and HTTP based on Semantic Kernel | C# |
| 4/22 10:30 AM PT | Learn Live: Fundamentals of AI agents on Azure | - |
| 4/22 12:00 PM PT | Demystifying Agents: Building an AI Agent from Scratch on Your Own Data using Azure SQL | C# |
| 4/22 03:00 PM PT | VoiceRAG: talk to your data | Python |
| 4/23 09:00 AM PT | Building Multi-Agent Apps on top of Azure PostgreSQL | Python |
| 4/23 12:00 PM PT | Agentic RAG with reflection | Python |
| 4/23 03:00 PM PT | Multi-source data patterns for modern RAG apps | C# |
| 4/24 06:00 AM PT | Engineering agents that Think, Act, and Govern themselves | C# |
| 4/24 09:00 AM PT | Extending AI Agents with Azure Functions | Python, C# |
| 4/24 12:00 PM PT | Build real time voice agents with Azure Communication Services | Python |

🌟 Join the Conversation on Azure AI Foundry Discussions! 🌟 Have ideas, questions, or insights about AI? Don't keep them to yourself! Share your thoughts, engage with experts, and connect with a community that's shaping the future of artificial intelligence. 🧠✨ 👉 Click here to join the discussion!
Week 2: Microsoft Agents Hack Online Events and Readiness Resources

2025 is the year of AI agents! But what exactly is an agent, and how can you build one? Whether you're a seasoned developer or just starting out, this FREE three-week virtual hackathon is your chance to dive deep into AI agent development. Register now: https://aka.ms/agentshack 🔥 Learn from expert-led sessions streamed live on YouTube, covering top frameworks like Semantic Kernel, AutoGen, the new Azure AI Agents SDK, and the Microsoft 365 Agents SDK.

Week 2 Events: April 14th-18th

| Day/Time | Topic | Track |
|---|---|---|
| 4/14 08:00 AM PT | Building custom engine agents with Azure AI Foundry and Visual Studio Code | Copilots |
| 4/15 07:00 AM PT | Your first AI Agent in JS with Azure AI Agent Service | JS |
| 4/15 09:00 AM PT | Building Agentic Applications with AutoGen v0.4 | Python |
| 4/15 12:00 PM PT | AI Agents + .NET Aspire | C# |
| 4/15 03:00 PM PT | Prototyping AI Agents with GitHub Models | Python |
| 4/16 04:00 AM PT | Multi-agent AI apps with Semantic Kernel and Azure Cosmos DB | C# |
| 4/16 06:00 AM PT | Building declarative agents with Microsoft Copilot Studio & Teams Toolkit | Copilots |
| 4/16 07:00 AM PT | Prompting is the New Scripting: Meet GenAIScript | JS |
| 4/16 09:00 AM PT | Building agents with an army of models from the Azure AI model catalog | Python |
| 4/16 12:00 PM PT | Multi-Agent API with LangGraph and Azure Cosmos DB | Python |
| 4/16 03:00 PM PT | Mastering Agentic RAG | Python |
| 4/17 06:00 AM PT | Build your own agent with OpenAI, .NET, and Copilot Studio | C# |
| 4/17 09:00 AM PT | Building smarter Python AI agents with code interpreters | Python |
| 4/17 12:00 PM PT | Building Java AI Agents using LangChain4j and Dynamic Sessions | Java |
| 4/17 03:00 PM PT | Agentic Voice Mode Unplugged | Python |
AI Toolkit for Visual Studio Code Now Supports NVIDIA NIM Microservices for RTX AI PCs

AI Toolkit now supports NVIDIA NIM™ microservice-based foundation models for inference testing in the model playground, plus advanced features like bulk run, evaluation, and prompt building. This collaboration helps AI engineers streamline development processes with foundational AI models.

About AI Toolkit

AI Toolkit is a VS Code extension for AI engineers to build, deploy, and manage AI solutions. It includes model- and prompt-centric features that allow users to explore and test different AI models, create and evaluate prompts, and perform model fine-tuning, all from within VS Code. Since its preview launch in 2024, AI Toolkit has helped developers worldwide learn about generative AI models and start building AI solutions.

NVIDIA NIM Microservices

This January, NVIDIA announced that state-of-the-art AI models spanning language, speech, animation, and vision capabilities, offered as NVIDIA NIM microservices, can now run locally on NVIDIA RTX AI PCs. These microservices prepackage optimized AI models with all the necessary runtime components for deployment across NVIDIA GPUs. Developers can now develop and deploy anywhere with the same unified experience and software stack, from RTX AI PCs and workstations to the cloud. Developers can jumpstart their AI development journey by downloading and running NIM containers quickly on Windows 11 PCs with GeForce RTX GPUs using Windows Subsystem for Linux (WSL2).

The Power of Collaboration

Integrating AI Toolkit with NIM provides AI engineers with a more cohesive and efficient workflow:
- Seamlessly integrate AI Toolkit with NIM to create a unified development environment without the need to switch context. Users can access any NIM-supported model from AI Toolkit.
- Leverage the combined capabilities of both tools to streamline workflows and accelerate the AI solution development process around foundation AI models, from within VS Code.

How to Get Started

Follow these steps to begin leveraging the power of NIM in AI Toolkit:
1. Download and install the latest version of AI Toolkit for VS Code.
2. Install NIM prerequisites on RTX PCs using the instructions here.
3. Select a NIM model from the model catalog in AI Toolkit and load it in the Playground. Optionally, you can also add a NIM model hosted in the cloud to AI Toolkit by URL.
4. Explore NIM models from the Playground.
5. Start developing prompts with new NIM models in AI Toolkit!

Looking Forward

We invite you to explore the possibilities of this integration and take your development projects to new heights! Try AI Toolkit today, and please continue sharing your feedback. Stay tuned for more updates and detailed tutorials on how to maximize the benefits of this exciting new collaboration. Together, we are shaping the future of AI development!
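Under the hood, a locally running NIM container exposes an OpenAI-compatible endpoint, so you can also script against it directly. A minimal sketch; the port and model id below are assumptions, so check the values your container prints at startup.

```python
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed default NIM container port
    api_key="not-needed-for-local",       # local NIM containers don't require a key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # hypothetical id; list models via GET /v1/models
    messages=[{"role": "user", "content": "Say hello from my RTX AI PC."}],
)
print(response.choices[0].message.content)
```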