VS Code
Study Buddy: Learning Data Science and Machine Learning with an AI Sidekick
If you've ever wished for a friendly companion to guide you through the world of data science and machine learning, you're not alone. As part of the "For Beginners" curriculum, I recently built a Study Buddy Agent, an AI-powered assistant designed to help learners explore data science interactively, intuitively, and joyfully.

Why a Study Buddy?
Learning something new can be overwhelming, especially when you're navigating complex topics like machine learning, statistics, or Python programming. The Study Buddy Agent is here to change that. It brings the curriculum to life by answering questions, offering explanations, and nudging learners toward deeper understanding, all in a conversational format. Think of it as your AI-powered lab partner: always available, never judgmental, and endlessly curious.

Built with chatmodes, Powered by Purpose
The agent lives in a .chatmode.md file in the curriculum repository: https://github.com/microsoft/Data-Science-For-Beginners/blob/main/.github/chatmodes/study-mode.chatmode.md. This file defines how the agent behaves, what tone it uses, and how it interacts with learners. I designed it to be friendly, encouraging, and beginner-first, just like the curriculum itself.

It's not just about answering questions. The Study Buddy is trained to:
- Reinforce key concepts from the curriculum
- Offer hints and nudges when learners get stuck
- Encourage exploration and experimentation
- Celebrate progress and milestones

What's Under the Hood?
The agent uses GitHub Copilot's chatmode feature, which lets developers define custom behaviors for AI agents. By aligning the agent's responses with the curriculum's learning objectives, we ensure that learners stay on track while enjoying the flexibility of conversational learning.

How You Can Use It
YouTube video: Study Buddy - Data Science AI Sidekick
1. Clone the repo: Head to https://github.com/microsoft/Data-Science-For-Beginners and clone it locally, or use Codespaces.
2. Open GitHub Copilot Chat and select Study Buddy: this activates the Study Buddy chat mode.
3. Start chatting: Ask questions, explore topics, and let the agent guide you.

What's Next?
This is just the beginning. I'm exploring ways to:
- Expand the agent to other beginner curriculums (Web Dev, AI, IoT)
- Integrate feedback loops so learners can shape the agent's evolution

Final Thoughts
In my role, I believe learning should be inclusive, empowering, and fun. The Study Buddy Agent is a small step toward that vision, a way to make data science feel less like a mountain and more like a hike with a good friend. Try it out, share your feedback, and let's keep building tools that make learning magical. Join us on Discord to share your feedback.
AMA: Azure AI Foundry Voice Live API: Build Smarter, Faster Voice Agents

Join us LIVE in the Azure AI Foundry Discord on 14th October 2025, 10am PT, to learn more about the Voice Live API.

Voice is no longer a novelty, it's the next-gen interface between humans and machines. From automotive assistants to educational tutors, voice-driven agents are reshaping how we interact with technology. But building seamless, real-time voice experiences has often meant stitching together a patchwork of services: STT, GenAI, TTS, avatars, and more. Until now.

Introducing Azure AI Foundry Voice Live API
Launched into general availability on October 1, 2025, the Azure AI Foundry Voice Live API is a game-changer for developers building voice-enabled agents. It unifies the entire voice stack (speech-to-text, generative AI, text-to-speech, avatars, and conversational enhancements) into a single, streamlined interface. That means:
⚡ Lower latency
🧠 Smarter interactions
🛠️ Simplified development
📈 Scalable deployment

Whether you're prototyping a voice bot for customer support or deploying a full-stack assistant in production, the Voice Live API accelerates your journey from idea to impact.

Ask Me Anything: Deep Dive with the CoreAI Speech Team
Join us for a live AMA session where you can engage directly with the engineers behind the API:
🗓️ Date: 14th Oct 2025
🕒 Time: 10am PT
📍 Location: https://aka.ms/foundry/discord (see the EVENTS channel)
🎤 Speakers: Qinying Liao, Principal Program Manager, CoreAI Speech; Jan Gorgen, Senior Program Manager, CoreAI Speech

They'll walk through real-world use cases, demo the API in action, and answer your toughest questions, from latency optimization to avatar integration.

Who Should Attend?
This AMA is designed for:
- AI engineers building multimodal agents
- Developers integrating voice into enterprise workflows
- Researchers exploring conversational UX
- Foundry users looking to scale voice prototypes

Why It Matters
The Voice Live API isn't just another endpoint, it's a foundation for building natural, responsive, and production-ready voice agents. With Azure AI Foundry's orchestration and deployment tools, you can:
- Skip the glue code
- Focus on experience design
- Deploy with confidence across platforms

Bring Your Questions
Curious about latency benchmarks? Want to know how avatars sync with TTS? Wondering how to integrate with your existing Foundry workflows? This is your chance to ask the team directly.
Introducing the Microsoft Agent Framework: A Unified Foundation for AI Agents and Workflows

The landscape of AI development is evolving rapidly, and Microsoft is at the forefront with the release of the Microsoft Agent Framework, an open-source SDK designed to empower developers to build intelligent, multi-agent systems with ease and precision. Whether you're working in .NET or Python, this framework offers a unified, extensible foundation that merges the best of Semantic Kernel and AutoGen, while introducing powerful new capabilities for agent orchestration and workflow design.

Introducing Microsoft Agent Framework: The Open-Source Engine for Agentic AI Apps | Azure AI Foundry Blog
Introducing Microsoft Agent Framework | Microsoft Azure Blog

Why Another Agent Framework?
Both Semantic Kernel and AutoGen have pioneered agentic development: Semantic Kernel with its enterprise-grade features, and AutoGen with its research-driven abstractions. The Microsoft Agent Framework is the next generation of both, built by the same teams to unify their strengths:
- AutoGen's simplicity in multi-agent orchestration
- Semantic Kernel's robustness in thread-based state management, telemetry, and type safety
- New capabilities like graph-based workflows, checkpointing, and human-in-the-loop support

This convergence means developers no longer have to choose between experimentation and production. The Agent Framework is designed to scale from single-agent prototypes to complex, enterprise-ready systems.

Core Capabilities

AI Agents
AI agents are autonomous entities powered by LLMs that can process user inputs, make decisions, call tools and MCP servers, and generate responses. They support providers like Azure OpenAI, OpenAI, and Azure AI, and can be enhanced with:
- Agent threads for state management
- Context providers for memory
- Middleware for action interception
- MCP clients for tool integration

Use cases include customer support, education, code generation, research assistance, and more, especially where tasks are dynamic and underspecified.

Workflows
Workflows are graph-based orchestrations that connect multiple agents and functions to perform complex, multi-step tasks. They support:
- Type-based routing
- Conditional logic
- Checkpointing
- Human-in-the-loop interactions
- Multi-agent orchestration patterns (sequential, concurrent, hand-off, Magentic)

Workflows are ideal for structured, long-running processes that require reliability and modularity.

Developer Experience
The Agent Framework is designed to be intuitive and powerful:
- Installation (Python): pip install agent-framework
- Installation (.NET): dotnet add package Microsoft.Agents.AI
- Integration: Works with the Foundry SDK, MCP SDK, A2A SDK, and M365 Copilot Agents
- Samples and Manifests: Explore declarative agent manifests and code samples
- Learning Resources: Microsoft Learn modules, AI Agents for Beginners, AI Show demos, and the Azure AI Foundry Discord community

Migration and Compatibility
If you're currently using Semantic Kernel or AutoGen, migration guides are available to help you transition smoothly. The framework is designed to be backward-compatible where possible, and future updates will continue to support community contributions via the GitHub repository.

Important Considerations
The Agent Framework is in public preview. Feedback and issues are welcome on the GitHub repository. When integrating with third-party servers or agents, review data sharing practices and compliance boundaries carefully.
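Before wrapping up, here is a minimal, hypothetical sketch of what a single agent built on the framework is meant to feel like in Python. The class and method names below (AzureOpenAIChatClient, create_agent, run) are assumptions made for illustration, not verified Agent Framework API; check the official samples and documentation for the exact imports and signatures.

```python
# Hypothetical sketch only: the names below are illustrative, not confirmed Agent Framework API.
import asyncio

from agent_framework.azure import AzureOpenAIChatClient  # assumed module path


async def main() -> None:
    # Create a simple agent backed by an Azure OpenAI chat model (assumed pattern).
    agent = AzureOpenAIChatClient().create_agent(
        name="docs-helper",
        instructions="You are a concise assistant that answers questions about the Agent Framework.",
    )

    # Run a single conversational turn and print the agent's reply.
    result = await agent.run("What is the difference between an agent and a workflow?")
    print(result)


asyncio.run(main())
```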
The Microsoft Agent Framework marks a pivotal moment in AI development, bringing together research innovation and enterprise readiness into a single, open-source foundation. Whether you're building your first agent or orchestrating a fleet of them, this framework gives you the tools to do it safely, scalably, and intelligently. Ready to get started? Download the SDK, explore the documentation, and join the community shaping the future of AI agents.
From Cloud to Chip: Building Smarter AI at the Edge with Windows AI PCs

As AI engineers, we've spent years optimizing models for the cloud: scaling inference, wrangling latency, and chasing compute across clusters. But the frontier is shifting. With the rise of Windows AI PCs and powerful local accelerators, the edge is no longer a constraint; it's a canvas. Whether you're deploying vision models to industrial cameras, optimizing speech interfaces for offline assistants, or building privacy-preserving apps for healthcare, Edge AI is where real-world intelligence meets real-time performance.

Why Edge AI, Why Now?
Edge AI isn't just about running models locally; it's about rethinking the entire lifecycle:
- Latency: Decisions in milliseconds, not round-trips to the cloud.
- Privacy: Sensitive data stays on-device, enabling HIPAA/GDPR compliance.
- Resilience: Offline-first apps that don't break when the network does.
- Cost: Reduced cloud compute and bandwidth overhead.

With Windows AI PCs powered by Intel and Qualcomm NPUs, and tools like ONNX Runtime, DirectML, and Olive, developers can now optimize and deploy models with unprecedented efficiency.

What You'll Learn in Edge AI for Beginners
The Edge AI for Beginners curriculum is a hands-on, open-source guide designed for engineers ready to move from theory to deployment.

Multi-Language Support
This content is available in over 48 languages, so you can read and study in your native language.

What You'll Master
This course takes you from fundamental concepts to production-ready implementations, covering:
- Small Language Models (SLMs) optimized for edge deployment
- Hardware-aware optimization across diverse platforms
- Real-time inference with privacy-preserving capabilities
- Production deployment strategies for enterprise applications

Why EdgeAI Matters
Edge AI represents a paradigm shift that addresses critical modern challenges:
- Privacy & Security: Process sensitive data locally without cloud exposure
- Real-time Performance: Eliminate network latency for time-critical applications
- Cost Efficiency: Reduce bandwidth and cloud computing expenses
- Resilient Operations: Maintain functionality during network outages
- Regulatory Compliance: Meet data sovereignty requirements

Edge AI
Edge AI refers to running AI algorithms and language models locally on hardware, close to where data is generated, without relying on cloud resources for inference. It reduces latency, enhances privacy, and enables real-time decision-making.

Core Principles:
- On-device inference: AI models run on edge devices (phones, routers, microcontrollers, industrial PCs)
- Offline capability: Functions without persistent internet connectivity
- Low latency: Immediate responses suited for real-time systems
- Data sovereignty: Keeps sensitive data local, improving security and compliance

Small Language Models (SLMs)
SLMs like Phi-4, Mistral-7B, Qwen, and Gemma are optimized versions of larger LLMs, trained or distilled for:
- Reduced memory footprint: Efficient use of limited edge device memory
- Lower compute demand: Optimized for CPU and edge GPU performance
- Faster startup times: Quick initialization for responsive applications

They unlock powerful NLP capabilities while meeting the constraints of:
- Embedded systems: IoT devices and industrial controllers
- Mobile devices: Smartphones and tablets with offline capabilities
- IoT devices: Sensors and smart devices with limited resources
- Edge servers: Local processing units with limited GPU resources
- Personal computers: Desktop and laptop deployment scenarios
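To make that concrete, here is a minimal sketch of trying an SLM locally with Hugging Face transformers. The model id, generation settings, and hardware assumptions are illustrative (a recent transformers version is assumed); the curriculum's own labs lean on tools like Olive, ONNX Runtime, and Foundry Local for properly optimized edge deployment.

```python
# Minimal local SLM chat sketch using Hugging Face transformers (illustrative, not the course's exact tooling).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumption: any small instruct-tuned model works here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # runs fully on-device; no cloud call involved

messages = [{"role": "user", "content": "Explain edge AI in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=60)
# Strip the prompt tokens and decode only the newly generated reply.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```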
Course Modules & Navigation
Course duration: 10 hours of content.

| Module | Topic | Focus Area | Key Content | Level | Duration |
| --- | --- | --- | --- | --- | --- |
| 📖 00 | Introduction to EdgeAI | Foundation & Context | EdgeAI Overview • Industry Applications • SLM Introduction • Learning Objectives | Beginner | 1-2 hrs |
| 📚 01 | EdgeAI Fundamentals | Cloud vs Edge AI comparison | EdgeAI Fundamentals • Real World Case Studies • Implementation Guide • Edge Deployment | Beginner | 3-4 hrs |
| 🧠 02 | SLM Model Foundations | Model families & architecture | Phi Family • Qwen Family • Gemma Family • BitNET • μModel • Phi-Silica | Beginner | 4-5 hrs |
| 🚀 03 | SLM Deployment Practice | Local & cloud deployment | Advanced Learning • Local Environment • Cloud Deployment | Intermediate | 4-5 hrs |
| ⚙️ 04 | Model Optimization Toolkit | Cross-platform optimization | Introduction • Llama.cpp • Microsoft Olive • OpenVINO • Apple MLX • Workflow Synthesis | Intermediate | 5-6 hrs |
| 🔧 05 | SLMOps Production | Production operations | SLMOps Introduction • Model Distillation • Fine-tuning • Production Deployment | Advanced | 5-6 hrs |
| 🤖 06 | AI Agents & Function Calling | Agent frameworks & MCP | Agent Introduction • Function Calling • Model Context Protocol | Advanced | 4-5 hrs |
| 💻 07 | Platform Implementation | Cross-platform samples | AI Toolkit • Foundry Local • Windows Development | Advanced | 3-4 hrs |
| 🏭 08 | Foundry Local Toolkit | Production-ready samples | Sample applications (see details below) | Expert | 8-10 hrs |

Each module includes Jupyter notebooks, code samples, and deployment walkthroughs, perfect for engineers who learn by doing.

Developer Highlights
- 🔧 Olive: Microsoft's optimization toolchain for quantization, pruning, and acceleration.
- 🧩 ONNX Runtime: Cross-platform inference engine with support for CPU, GPU, and NPU (see the inference sketch at the end of this post).
- 🎮 DirectML: GPU-accelerated ML API for Windows, ideal for gaming and real-time apps.
- 🖥️ Windows AI PCs: Devices with built-in NPUs for low-power, high-performance inference.

Local AI: Beyond the Edge
Local AI isn't just about inference; it's about autonomy. Imagine agents that:
- Learn from local context
- Adapt to user behavior
- Respect privacy by design

With tools like the Agent Framework, Azure AI Foundry, Windows Copilot Studio, and Foundry Local, developers can orchestrate local agents that blend LLMs, sensors, and user preferences, all without cloud dependency.

Try It Yourself
Ready to get started? Clone the Edge AI for Beginners GitHub repo, run the notebooks, and deploy your first model to a Windows AI PC or an IoT device. Whether you're building smart kiosks, offline assistants, or industrial monitors, this curriculum gives you the scaffolding to go from prototype to production.
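As promised in the Developer Highlights above, here is a rough sketch of what local inference with ONNX Runtime looks like. The model path, input shape, and provider list are assumptions; in practice you would export or Olive-optimize a model first, and on a Windows AI PC you may have additional execution providers (such as DirectML) available if the corresponding packages are installed.

```python
# Rough local-inference sketch with ONNX Runtime; model path and input shape are placeholders.
import numpy as np
import onnxruntime as ort

# Load an ONNX model and pin it to the CPU provider (swap in other providers if available).
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the first input so we can feed a correctly named tensor.
input_name = session.get_inputs()[0].name

# Illustrative dummy image batch; replace with real preprocessed data for your model.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy})
print("First output shape:", outputs[0].shape)
```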
How to Master GitHub Copilot: Build, Prompt, Deploy Smarter

Mastering GitHub Copilot: Build, Prompt, Deploy Smarter is a free, hands-on workshop designed to help developers go beyond autocomplete and unlock the true power of AI-assisted coding. Instead of toy examples, this course walks you through real-world software engineering challenges: messy codebases, multi-language projects, cloud deployments, and legacy system upgrades. You'll learn practical skills like prompt engineering, advanced Copilot features, and AI pair programming techniques that make you faster, sharper, and more creative.

Whether you're a junior developer or a seasoned architect, mastering GitHub Copilot will help you:
- Reduce cognitive load and focus on system design
- Accelerate onboarding for new engineers
- Write cleaner, more consistent code
- Automate repetitive tasks to free up time for innovation

AI coding tools like GitHub Copilot are no longer optional; they're essential. This workshop gives you the skills to collaborate with Copilot effectively and stay competitive in the age of AI-powered development.
Use Copilot and MCP to query Microsoft Learn Docs

Are you ready to take your Azure development workflow to the next level? In this post, we'll walk through how to use GitHub Copilot in Agent Mode—paired with MCP (Model Context Protocol) servers—to get trusted, grounded answers from Microsoft Learn Docs, right inside your coding workspace. Whether you're tired of switching tabs to search documentation or want to ensure your AI assistant's answers are always accurate, this guide will show you how to streamline your workflow and boost your productivity.
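The rough idea is to register the Microsoft Learn MCP server in your workspace so Copilot's agent mode can call it when it needs documentation. The file location, schema, and endpoint URL below are assumptions based on VS Code's MCP configuration format and the public Microsoft Learn MCP Server announcement; double-check both against the current docs before relying on them.

```jsonc
// .vscode/mcp.json — assumed location and schema; verify against the current VS Code MCP docs.
{
  "servers": {
    "microsoft-learn": {
      "type": "http",
      // Assumed public endpoint for the Microsoft Learn MCP Server.
      "url": "https://learn.microsoft.com/api/mcp"
    }
  }
}
```

With a server entry like this in place, Copilot Chat's agent mode can list the server's tools and ground its answers in Microsoft Learn content instead of guessing.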
Talk to your data: PostgreSQL gets a voice in VS Code

As product roadmaps accelerate to meet new business needs, developer teams are struggling to maintain productivity, with some even reporting that they need six or more tools to do their job. To address this, Microsoft has released an improved PostgreSQL extension that helps developers working with PostgreSQL in Visual Studio Code (VS Code) streamline their workflows and increase productivity.

PostgreSQL and VS Code are top choices among developers but, until now, using them together meant constantly Alt-Tabbing or copying queries between windows. The new extension, now in public preview, brings full database management and query capabilities into VS Code, right alongside your code. You can connect to PostgreSQL databases, run queries, explore schemas, and even get AI assistance to talk to your data. The extension integrates with Azure Database for PostgreSQL and supports local databases, so it works whether you're using a cloud database or a container on your machine. Plus, features like context-aware IntelliSense and a built-in AI Copilot agent help you write and optimize SQL queries, so you can focus on building your application instead of wrestling with disconnected tools.
Step-by-Step: How to Setup Copilot Chat in VS Code

Copilot Chat is an AI-powered chatbot leveraging OpenAI's GPT-4, designed to enhance your coding workflow. Learn how to set up Copilot Chat step by step in Visual Studio Code (VS Code). Benefit from personalized and flexible coding environments, code analysis, automated unit test generation, and bug fixes. Prerequisites include an active GitHub account and the latest version of VS Code. Elevate your coding efficiency to new heights with Copilot Chat.
Model Mondays S2:E7 · AI-Assisted Azure Development

Welcome to Episode 7! This week, we explore how AI is transforming Azure development. We'll break down two key tools—Azure MCP Server and GitHub Copilot for Azure—and see how they make working with Azure resources easier for everyone. We'll also look at a real customer story from SightMachine, showing how AI streamlines manufacturing operations.
Level Up Your Python Game with Generative AI: Free Livestream Series This October!

If you've been itching to go beyond basic Python scripts and dive into the world of AI-powered applications, this is your moment. Join Pamela Fox and Gwyneth Peña-Siguenza: we're thrilled to announce a brand-new free livestream series running throughout October, focused on Python + Generative AI, and this time we're going even deeper with Agents and the Model Context Protocol (MCP). Whether you're just starting out with LLMs or you're refining your multi-agent workflows, this series is designed to meet you where you are and push your skills to the next level.

🧠 What You'll Learn
Each session is packed with live coding, hands-on demos, and real-world examples you can run in GitHub Codespaces. Here's a taste of what we'll cover:

🎥 Why Join?
- Live coding: No slides-only sessions — we build together, step by step.
- All code shared: Clone and run in GitHub Codespaces or your local setup.
- Community support: Join weekly office hours and our AI Discord for Q&A and deeper dives.
- Modular learning: Each session stands alone, so you can jump in anytime.

🔗 Register for the full series

🌍 ¿Hablas español? We've got you covered! Gwyneth Peña-Siguenza will be leading a parallel series in Spanish, covering the same topics with localized examples and demos.
🔗 Regístrese para la serie en español

Whether you're building your first AI app or architecting multi-agent systems, this series is your launchpad. Come for the code, stay for the community, and leave with a toolkit that scales. Let's build something brilliant together. 💡

Join the discussions and share your experience at the Azure AI Discord Community.
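If you want a taste before the first session, here is a minimal sketch of the kind of chat-completion call the series builds on. It assumes the openai Python package and an OPENAI_API_KEY (or a compatible endpoint configured via OPENAI_BASE_URL) are available; the model name is an assumption, so swap in whatever your provider offers.

```python
# A minimal "hello, LLM" sketch; treat the model name and endpoint setup as assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY (and optionally OPENAI_BASE_URL) from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever model your endpoint provides
    messages=[
        {"role": "user", "content": "Suggest a beginner-friendly Python + Generative AI project."}
    ],
)

print(response.choices[0].message.content)
```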