agents

AMA Spotlight: Build Smarter with Azure Developer CLI 'AZD'
Weekly AMA (Ask Me Anything): Build Smarter with Azure Developer CLI

Calling all AI engineers, developers, and builders of the future: this is your backstage pass to the tools shaping scalable, agentic AI deployments. Join Kristen Womack, Product Manager for the Azure Developer CLI (azd), and the engineering team behind azd for a live Ask Me Anything session every Thursday at 12:30pm PT in the Azure AI Foundry Discord.

Whether you're:
- Orchestrating multi-agent systems
- Deploying LLM-powered apps with Azure AI Foundry
- Navigating least-privilege infrastructure setups
- Debugging and optimizing reproducible workflows

...this AMA is your chance to connect directly with the team building the CLI that powers it all.

Why Join?
- Real-time answers from the azd engineers and product team
- Deployment walkthroughs for Foundry templates, from chatbots to document processors
- Tips for CI/CD, debugging, and reproducibility in enterprise environments
- Community-first mindset: bring your feedback, challenges, and ideas

Kristen Womack brings deep insight into developer experience and product strategy; this is a rare opportunity to learn from the source and shape the future of AI tooling.

Get Ready
Before you join:
- Install azd: Install Guide
- Explore Kristen's work: www.kristenwomack.io
- Join the Discord: Azure AI Foundry Community

Weekly Schedule
Thursdays at 12:30pm PT in the Azure AI Foundry Discord. Bring your questions. Bring your curiosity. Build with the best.

LangChain v1 is now generally available!
Today LangChain v1 officially launches, marking a new era for the popular AI agent library. The new version ushers in a more opinionated, streamlined, and extensible foundation for building agentic LLM applications. In this post we'll break down what's new, what changed, and what "general availability" means in practice. Join Microsoft Developer Advocates Marlene Mhangami and Yohan Lasorsa to see live demos of the new API and find out more about what JavaScript and Python developers need to know about v1. Register for this event here.

Why v1? The Motivation Behind the Redesign
The number of abstractions in LangChain had grown over the years to include chains, agents, tools, wrappers, prompt helpers, and more, which, while powerful, introduced complexity and fragmentation. As model APIs evolve (multimodal inputs, richer structured output, tool-calling semantics), LangChain needed a cleaner, more consistent core to ensure production-ready stability. In v1:
- All existing chains and agent abstractions in the old LangChain are deprecated; they are replaced by a single high-level agent abstraction built on LangGraph internals.
- LangGraph becomes the foundational runtime for durable, stateful, orchestrated execution. LangChain now emphasizes being the "fast path to agents" that doesn't hide but builds upon LangGraph.
- The internal message format has been upgraded to support standard content blocks (e.g. text, reasoning, citations, tool calls) across model providers, decoupling "content" from raw strings.
- Namespace cleanup: the langchain package now focuses tightly on core abstractions (agents, models, messages, tools), while legacy patterns move into langchain-classic (or equivalents).

What's New & Noteworthy for Developers
Here are the key changes developers should pay attention to:

1. create_agent becomes the default API
The create_agent function is now the idiomatic way to spin up agents in v1. It replaces older constructs (e.g. create_react_agent) with a clearer, more modular API that is middleware-centric. You can now compose middleware around model calls, tool calls, before/after hooks, error handling, and more.

2. Standard content blocks & normalized message model
Responses from models are no longer opaque strings. Instead, they carry structured content_blocks which classify parts of the output (e.g. "text", "reasoning", "citation", "tool_call"). If needed for backward compatibility or client serialization, you can opt in to serializing those blocks back into the .content field by setting output_version="v1".

3. Multimodal and richer model inputs/outputs
LangChain now supports more than just text-based interactions. Models can accept and return files, images, video, and more, and the message format reflects this flexibility. This upgrade prepares us well for the next generation of models with mixed modalities (vision, audio, etc.).

4. Middleware hooks, runtime context, and finer control
Because create_agent is designed as a pluggable pipeline, developers can now inject logic before/after model calls, tool calls, error recoveries, fallback strategies, request transformations, and more. New middleware such as retry, fallback, call limits, and context editing has been added. The notion of a runtime and context object accompanies each agent execution, making it easier to carry state or metadata through the pipeline.
5. Simplified, leaner namespace & migration path
Many formerly top-level modules or helper classes have been removed or relocated to langchain-classic (or similarly stamped "legacy" packages) to declutter the main API surface. A migration guide is available to help projects transition from v0 to v1. While v1 is now the main line, v0 is still documented and maintained for compatibility.

What "General Availability" Means (and Doesn't)
- v1 is production-ready, after months of testing the alpha version.
- The stable v0 release line remains supported for those unwilling or unable to migrate immediately.
- Breaking changes in public APIs will be accompanied by version bumps (i.e. minor version increments) and deprecation notices.
- The roadmap anticipates minor versions every 2-3 months, with patch releases more frequently.
- Because the field of LLM applications is evolving rapidly, the team expects continued iteration in v1, even in GA mode, with users encouraged to surface feedback, file issues, and adopt the migration path. (This is in line with the philosophy stated in the docs.)

Developer Callouts & Suggested Steps
Here are practical tips to get developers on board:

Try the new API now! LangChain Azure AI and Azure OpenAI have migrated to LangChain v1 and are ready to test. Try out our getting started sample here.

Learn more about using LangChain and Azure AI:
- Python: https://docs.langchain.com/oss/python/integrations/providers/azure_ai
- JavaScript: https://docs.langchain.com/oss/javascript/integrations/providers/microsoft

Join us for a live stream on Wednesday 22 October 2025: Microsoft Developer Advocates Marlene Mhangami and Yohan Lasorsa will show live demos and cover what JavaScript and Python developers need to know about v1. Register for this event here.
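To make the new surface concrete, here is a minimal sketch of the v1 create_agent API in Python. Treat it as illustrative rather than canonical: the model identifier, the weather tool, and the invocation shape are our own assumptions based on the v1 docs, so check the official documentation for exact signatures.

```python
# Minimal LangChain v1 agent sketch (illustrative; verify against the v1 docs).
from langchain.agents import create_agent
from langchain.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a (mock) weather report for a city."""
    return f"It is sunny in {city}."

agent = create_agent(
    model="openai:gpt-4o-mini",  # assumption: provider:model identifier string
    tools=[get_weather],
    system_prompt="You are a concise weather assistant.",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Paris?"}]}
)
# v1 messages expose structured content blocks instead of opaque strings.
print(result["messages"][-1].content_blocks)
```

Note how middleware, response_format, and other v1 options would slot into the same create_agent call; the point is that one function now replaces the old zoo of agent constructors.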
Study Buddy: Learning Data Science and Machine Learning with an AI Sidekick

If you've ever wished for a friendly companion to guide you through the world of data science and machine learning, you're not alone. As part of the "For Beginners" curriculum, I recently built a Study Buddy Agent, an AI-powered assistant designed to help learners explore data science interactively, intuitively, and joyfully.

Why a Study Buddy?
Learning something new can be overwhelming, especially when you're navigating complex topics like machine learning, statistics, or Python programming. The Study Buddy Agent is here to change that. It brings the curriculum to life by answering questions, offering explanations, and nudging learners toward deeper understanding, all in a conversational format. Think of it as your AI-powered lab partner: always available, never judgmental, and endlessly curious.

Built with Chatmodes, Powered by Purpose
The agent lives in a chatmode file in the Data-Science-For-Beginners repository: https://github.com/microsoft/Data-Science-For-Beginners/blob/main/.github/chatmodes/study-mode.chatmode.md. This file defines how the agent behaves, what tone it uses, and how it interacts with learners. I designed it to be friendly, encouraging, and beginner-first, just like the curriculum itself. It's not just about answering questions. The Study Buddy is trained to:
- Reinforce key concepts from the curriculum
- Offer hints and nudges when learners get stuck
- Encourage exploration and experimentation
- Celebrate progress and milestones

What's Under the Hood?
The agent uses GitHub Copilot's chatmode feature, which allows developers to define custom behaviors for AI agents. By aligning the agent's responses with the curriculum's learning objectives, we ensure that learners stay on track while enjoying the flexibility of conversational learning.

How You Can Use It
YouTube video: Study Buddy - Data Science AI Sidekick
1. Clone the repo: head to https://github.com/microsoft/Data-Science-For-Beginners and clone it locally, or use Codespaces.
2. Open GitHub Copilot Chat and select Study Buddy: this activates the Study Buddy.
3. Start chatting: ask questions, explore topics, and let the agent guide you.

What's Next?
This is just the beginning. I'm exploring ways to:
- Expand the agent to other beginner curriculums (Web Dev, AI, IoT)
- Integrate feedback loops so learners can shape the agent's evolution

Final Thoughts
In my role, I believe learning should be inclusive, empowering, and fun. The Study Buddy Agent is a small step toward that vision, a way to make data science feel less like a mountain and more like a hike with a good friend. Try it out, share your feedback, and let's keep building tools that make learning magical. Join us on Discord to share your feedback.
Introducing the Microsoft Agent Framework: A Unified Foundation for AI Agents and Workflows

The landscape of AI development is evolving rapidly, and Microsoft is at the forefront with the release of the Microsoft Agent Framework, an open-source SDK designed to empower developers to build intelligent, multi-agent systems with ease and precision. Whether you're working in .NET or Python, this framework offers a unified, extensible foundation that merges the best of Semantic Kernel and AutoGen, while introducing powerful new capabilities for agent orchestration and workflow design.

- Introducing Microsoft Agent Framework: The Open-Source Engine for Agentic AI Apps | Azure AI Foundry Blog
- Introducing Microsoft Agent Framework | Microsoft Azure Blog

Why Another Agent Framework?
Both Semantic Kernel and AutoGen have pioneered agentic development: Semantic Kernel with its enterprise-grade features, and AutoGen with its research-driven abstractions. The Microsoft Agent Framework is the next generation of both, built by the same teams to unify their strengths:
- AutoGen's simplicity in multi-agent orchestration
- Semantic Kernel's robustness in thread-based state management, telemetry, and type safety
- New capabilities like graph-based workflows, checkpointing, and human-in-the-loop support

This convergence means developers no longer have to choose between experimentation and production. The Agent Framework is designed to scale from single-agent prototypes to complex, enterprise-ready systems.

Core Capabilities

AI Agents
AI agents are autonomous entities powered by LLMs that can process user inputs, make decisions, call tools and MCP servers, and generate responses. They support providers like Azure OpenAI, OpenAI, and Azure AI, and can be enhanced with:
- Agent threads for state management
- Context providers for memory
- Middleware for action interception
- MCP clients for tool integration

Use cases include customer support, education, code generation, research assistance, and more, especially where tasks are dynamic and underspecified.

Workflows
Workflows are graph-based orchestrations that connect multiple agents and functions to perform complex, multi-step tasks. They support:
- Type-based routing
- Conditional logic
- Checkpointing
- Human-in-the-loop interactions
- Multi-agent orchestration patterns (sequential, concurrent, hand-off, Magentic)

Workflows are ideal for structured, long-running processes that require reliability and modularity.

Developer Experience
The Agent Framework is designed to be intuitive and powerful:
- Installation: Python: pip install agent-framework | .NET: dotnet add package Microsoft.Agents.AI
- Integration: works with the Foundry SDK, MCP SDK, A2A SDK, and M365 Copilot Agents
- Samples and manifests: explore declarative agent manifests and code samples
- Learning resources: Microsoft Learn modules, AI Agents for Beginners, AI Show demos, and the Azure AI Foundry Discord community

Migration and Compatibility
If you're currently using Semantic Kernel or AutoGen, migration guides are available to help you transition smoothly. The framework is designed to be backward-compatible where possible, and future updates will continue to support community contributions via the GitHub repository.

Important Considerations
The Agent Framework is in public preview. Feedback and issues are welcome on the GitHub repository. When integrating with third-party servers or agents, review data sharing practices and compliance boundaries carefully.
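To give a feel for the Python surface, here is a minimal single-agent sketch. It is a sketch under assumptions, not a definitive implementation: the AzureOpenAIChatClient class, the create_agent helper, and the run method reflect the public preview at the time of writing and may change, and the Azure OpenAI endpoint and deployment are assumed to be configured via environment variables. Check the GitHub repository for current samples.

```python
# Minimal Agent Framework sketch (public preview; APIs may change).
# Assumes: pip install agent-framework, `az login`, and AZURE_OPENAI_ENDPOINT /
# deployment settings exported in the environment.
import asyncio

from agent_framework.azure import AzureOpenAIChatClient
from azure.identity import AzureCliCredential

async def main() -> None:
    # Create an agent backed by an Azure OpenAI chat deployment.
    agent = AzureOpenAIChatClient(credential=AzureCliCredential()).create_agent(
        name="HaikuBot",
        instructions="You are an upbeat assistant that writes haiku.",
    )
    result = await agent.run("Write a haiku about the Agent Framework.")
    print(result)

asyncio.run(main())
```

The same agent object can then be composed into a workflow graph alongside other agents and functions, which is where the framework's checkpointing and orchestration patterns come in.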
The Microsoft Agent Framework marks a pivotal moment in AI development, bringing together research innovation and enterprise readiness in a single, open-source foundation. Whether you're building your first agent or orchestrating a fleet of them, this framework gives you the tools to do it safely, scalably, and intelligently. Ready to get started? Download the SDK, explore the documentation, and join the community shaping the future of AI agents.

From Cloud to Chip: Building Smarter AI at the Edge with Windows AI PCs
As AI engineers, we've spent years optimizing models for the cloud: scaling inference, wrangling latency, and chasing compute across clusters. But the frontier is shifting. With the rise of Windows AI PCs and powerful local accelerators, the edge is no longer a constraint; it's a canvas. Whether you're deploying vision models to industrial cameras, optimizing speech interfaces for offline assistants, or building privacy-preserving apps for healthcare, Edge AI is where real-world intelligence meets real-time performance.

Why Edge AI, Why Now?
Edge AI isn't just about running models locally; it's about rethinking the entire lifecycle:
- Latency: decisions in milliseconds, not round-trips to the cloud.
- Privacy: sensitive data stays on-device, enabling HIPAA/GDPR compliance.
- Resilience: offline-first apps that don't break when the network does.
- Cost: reduced cloud compute and bandwidth overhead.

With Windows AI PCs powered by Intel and Qualcomm NPUs, and tools like ONNX Runtime, DirectML, and Olive, developers can now optimize and deploy models with unprecedented efficiency.

What You'll Learn in Edge AI for Beginners
The Edge AI for Beginners curriculum is a hands-on, open-source guide designed for engineers ready to move from theory to deployment.

Multi-Language Support
The content is available in over 48 languages, so you can read and study in your native language.

What You'll Master
This course takes you from fundamental concepts to production-ready implementations, covering:
- Small Language Models (SLMs) optimized for edge deployment
- Hardware-aware optimization across diverse platforms
- Real-time inference with privacy-preserving capabilities
- Production deployment strategies for enterprise applications

Why Edge AI Matters
Edge AI represents a paradigm shift that addresses critical modern challenges:
- Privacy & security: process sensitive data locally without cloud exposure
- Real-time performance: eliminate network latency for time-critical applications
- Cost efficiency: reduce bandwidth and cloud computing expenses
- Resilient operations: maintain functionality during network outages
- Regulatory compliance: meet data sovereignty requirements

Edge AI
Edge AI refers to running AI algorithms and language models locally on hardware, close to where data is generated, without relying on cloud resources for inference. It reduces latency, enhances privacy, and enables real-time decision-making.

Core principles:
- On-device inference: AI models run on edge devices (phones, routers, microcontrollers, industrial PCs)
- Offline capability: functions without persistent internet connectivity
- Low latency: immediate responses suited for real-time systems
- Data sovereignty: keeps sensitive data local, improving security and compliance

Small Language Models (SLMs)
SLMs like Phi-4, Mistral-7B, Qwen, and Gemma are optimized versions of larger LLMs, trained or distilled for:
- Reduced memory footprint: efficient use of limited edge device memory
- Lower compute demand: optimized for CPU and edge GPU performance
- Faster startup times: quick initialization for responsive applications

They unlock powerful NLP capabilities while meeting the constraints of:
- Embedded systems: IoT devices and industrial controllers
- Mobile devices: smartphones and tablets with offline capabilities
- IoT devices: sensors and smart devices with limited resources
- Edge servers: local processing units with limited GPU resources
- Personal computers: desktop and laptop deployment scenarios

Course Modules & Navigation
Course duration: 10 hours of content.

- Module 00 - Introduction to EdgeAI (Foundation & Context). Key content: EdgeAI Overview, Industry Applications, SLM Introduction, Learning Objectives. Level: Beginner. Duration: 1-2 hrs.
- Module 01 - EdgeAI Fundamentals (Cloud vs Edge AI comparison). Key content: EdgeAI Fundamentals, Real World Case Studies, Implementation Guide, Edge Deployment. Level: Beginner. Duration: 3-4 hrs.
- Module 02 - SLM Model Foundations (Model families & architecture). Key content: Phi Family, Qwen Family, Gemma Family, BitNET, ΟModel, Phi-Silica. Level: Beginner. Duration: 4-5 hrs.
- Module 03 - SLM Deployment Practice (Local & cloud deployment). Key content: Advanced Learning, Local Environment, Cloud Deployment. Level: Intermediate. Duration: 4-5 hrs.
- Module 04 - Model Optimization Toolkit (Cross-platform optimization). Key content: Introduction, Llama.cpp, Microsoft Olive, OpenVINO, Apple MLX, Workflow Synthesis. Level: Intermediate. Duration: 5-6 hrs.
- Module 05 - SLMOps Production (Production operations). Key content: SLMOps Introduction, Model Distillation, Fine-tuning, Production Deployment. Level: Advanced. Duration: 5-6 hrs.
- Module 06 - AI Agents & Function Calling (Agent frameworks & MCP). Key content: Agent Introduction, Function Calling, Model Context Protocol. Level: Advanced. Duration: 4-5 hrs.
- Module 07 - Platform Implementation (Cross-platform samples). Key content: AI Toolkit, Foundry Local, Windows Development. Level: Advanced. Duration: 3-4 hrs.
- Module 08 - Foundry Local Toolkit (Production-ready samples). Key content: sample applications (see details below). Level: Expert. Duration: 8-10 hrs.

Each module includes Jupyter notebooks, code samples, and deployment walkthroughs, perfect for engineers who learn by doing.

Developer Highlights
- Olive: Microsoft's optimization toolchain for quantization, pruning, and acceleration.
- ONNX Runtime: cross-platform inference engine with support for CPU, GPU, and NPU (see the sketch at the end of this post).
- DirectML: GPU-accelerated ML API for Windows, ideal for gaming and real-time apps.
- Windows AI PCs: devices with built-in NPUs for low-power, high-performance inference.

Local AI: Beyond the Edge
Local AI isn't just about inference; it's about autonomy. Imagine agents that:
- Learn from local context
- Adapt to user behavior
- Respect privacy by design

With tools like the Agent Framework, Azure AI Foundry, Windows Copilot Studio, and Foundry Local, developers can orchestrate local agents that blend LLMs, sensors, and user preferences, all without cloud dependency.

Try It Yourself
Ready to get started? Clone the Edge AI for Beginners GitHub repo, run the notebooks, and deploy your first model to a Windows AI PC or an IoT device. Whether you're building smart kiosks, offline assistants, or industrial monitors, this curriculum gives you the scaffolding to go from prototype to production.
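To make the local-inference story concrete, here is a minimal sketch with ONNX Runtime, referenced from the Developer Highlights above. It is illustrative, not part of the curriculum itself: the model path and input shape are placeholders, and the DirectML provider assumes the onnxruntime-directml package on a Windows machine, with CPU fallback elsewhere.

```python
# Minimal ONNX Runtime inference sketch (placeholders throughout).
import numpy as np
import onnxruntime as ort

# Prefer DirectML (Windows GPU/NPU path); fall back to CPU if unavailable.
session = ort.InferenceSession(
    "model.onnx",  # placeholder: an exported/optimized ONNX model on disk
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder vision input
outputs = session.run(None, {input_name: batch})
print("Output shape:", outputs[0].shape)
```

The same session-and-providers pattern extends to NPU targets; Olive can produce the optimized .onnx file that this code loads.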
Level up your Python Gen AI Skills from our free nine-part YouTube series!

Want to learn how to use generative AI models in your Python applications? We're putting on a series of nine live streams, in both English and Spanish, all about generative AI. We'll cover large language models, embedding models, and vision models, introduce techniques like RAG, function calling, and structured outputs, and show you how to build agents and MCP servers. Plus we'll talk about AI safety and evaluations, to make sure all your models and applications are producing safe outputs.

Register for the entire series. In addition to the live streams, you can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat. You can also scroll down to learn about each live stream and register for individual sessions. See you in the streams!

Large Language Models
7 October, 2025 | 5:00 PM - 6:00 PM UTC | Register for the stream on Reactor
Join us for the first session in our Python + AI series! In this session, we'll talk about Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We'll use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We'll experiment with prompt engineering and few-shot examples to improve our outputs. We'll also show how to build a full-stack app powered by LLMs, and explain the importance of concurrency and streaming for user-facing AI apps.

Vector embeddings
8 October, 2025 | 5:00 PM - 6:00 PM UTC | Register for the stream on Reactor
In our second session of the Python + AI series, we'll dive into a different kind of model: the vector embedding model. A vector embedding is a way to encode text or an image as an array of floating-point numbers. Vector embeddings make it possible to perform similarity search on many kinds of content. In this session, we'll explore different vector embedding models, like the OpenAI text-embedding-3 series, with both visualizations and Python code. We'll compare distance metrics, use quantization to reduce vector size, and try out multimodal embedding models.

Retrieval Augmented Generation
9 October, 2025 | 5:00 PM - 6:00 PM UTC | Register for the stream on Reactor
In our third Python + AI session, we'll explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that sends context to the LLM so that it can provide well-grounded answers for a particular domain. The RAG approach can be used with many kinds of data sources, like CSVs, webpages, documents, and databases. In this session, we'll walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.

Vision models
14 October, 2025 | 5:00 PM - 6:00 PM UTC | Register for the stream on Reactor
Our fourth stream in the Python + AI series is all about vision models! Vision models are LLMs that can accept both text and images, like GPT-4o and GPT-4o mini. You can use those models for image captioning, data extraction, question-answering, classification, and more! We'll use Python to send images to vision models, build a basic chat-on-images app, and build a multimodal search engine.

Structured outputs
15 October, 2025 | 5:00 PM - 6:00 PM UTC | Register for the stream on Reactor
In our fifth stream of the Python + AI series, we'll discover how to get LLMs to output structured responses that adhere to a schema.
In Python, all we need to do is define a @dataclass or a Pydantic BaseModel, and we get validated output that meets our needs perfectly. We'll focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples will demonstrate the many ways you can use structured responses, like entity extraction, classification, and agentic workflows.

Quality and safety
16 October, 2025 | 5:00 PM - 6:00 PM UTC | Register for the stream on Reactor
Now that we're more than halfway through our Python + AI series, we're covering a crucial topic: how to use AI safely, and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. Our focus will be on Azure tools that make it easier to put safe AI systems into production. We'll show how to configure the Azure AI Content Safety system when working with Azure AI models, and how to handle those errors in Python code. Then we'll use the Azure AI Evaluation SDK to evaluate the safety and quality of the output from our LLM.

Tool calling
21 October, 2025 | 5:00 PM - 6:00 PM UTC | Register for the stream on Reactor
In this session, we'll cover tool calling (also known as function calling), the building block for AI agents: we'll define tool schemas in Python, send them along with chat requests, and handle the tool calls that the LLM sends back.

AI agents
22 October, 2025 | 5:00 PM - 6:00 PM UTC | Register for the stream on Reactor
For the penultimate session of our Python + AI series, we're building AI agents! We'll use many of the most popular Python AI agent frameworks: LangGraph, Semantic Kernel, AutoGen, Pydantic AI, and more. Our agents will start simple and then ramp up in complexity, demonstrating different architectures like hand-offs, round-robin, supervisor, graphs, and ReAct.

Model Context Protocol
23 October, 2025 | 5:00 PM - 6:00 PM UTC | Register for the stream on Reactor
In the final session of our Python + AI series, we're diving into the hottest technology of 2025: MCP, Model Context Protocol. This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We'll show how to use the official Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we'll build our own MCP client to consume the server. Finally, we'll discover how easy it is to point popular AI agent frameworks like LangGraph, Pydantic AI, and Semantic Kernel at MCP servers. With great power comes great responsibility, so we will briefly discuss the many security risks that come with MCP, both as a user and as a developer.
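If you want a head start on that final session, here is a minimal sketch of an MCP server built with the FastMCP API from the official Python SDK. The add tool is a toy example of ours, not from the series; consult the MCP Python SDK docs for current usage.

```python
# Minimal MCP server sketch using the official Python SDK (toy example).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    # Runs over stdio by default, so a local client (e.g. GitHub Copilot
    # configured to launch this script) can connect and call the tool.
    mcp.run()
```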
Modernizing a legacy Java project using GitHub Copilot App Modernization

In this blog, we explore how GitHub Copilot App Modernization - Upgrade for Java can streamline the process of modernizing legacy Java applications. To put the tool to the test, we selected the widely known Spring Boot PetClinic project, originally built with Java 8 and Spring Boot 2.x. Using the tool's modernization capabilities, we successfully upgraded the project to Java 21 and Spring Boot 3.4.x. This post highlights the upgrade journey, showcases the tool's capabilities, and shares practical tips and lessons learned along the way.
Foundry Fridays: Your Front-Row Seat to Azure AI Innovation

Are you ready to go beyond the blog posts and docs and get your questions answered directly by the minds behind Azure AI? Then mark your calendars for Foundry Fridays, a weekly Ask Me Anything (AMA) series hosted on the Azure AI Foundry Discord. Every Friday at 1:30 PM ET, the Azure AI team opens the floor to developers, researchers, and enthusiasts for a 30-minute live AMA with the experts building the future of AI at Microsoft. Whether you're curious about model fine-tuning, local inference, agentic workflows, or the latest in open-source tooling, Foundry Fridays is where the real-time insights happen.

Why Join Foundry Fridays?
- Direct access to experts: ask your questions live to Principal PMs, researchers, and engineers from the Azure AI Foundry team.
- Fresh topics weekly: each session spotlights a new theme, from model routing and MCP registries to SAMBA architectures and AI agent security.
- Community-driven: these aren't lectures, they're conversations. Bring your curiosity, share your feedback, and help shape the future of Azure AI.
- No slides, just substance: it's raw, real, and refreshingly unscripted. You'll hear what's working, what's coming, and what's still being figured out.

Episodes are hosted by community leaders like Nitya Narasimhan and Lee Stott, who guide the conversation and ensure your questions get the spotlight they deserve. You can watch the full Model Mondays series on demand at https://aka.ms/model-mondays, and get ready for Season 3 of Model Mondays, every Monday at 1:30 PM ET.

Why It Matters
Foundry Fridays isn't just another event; it's a community catalyst. Join our community to hear from experts and share your experiences of using Azure AI tools and services.

How to Join
1. Join the Discord: aka.ms/model-mondays/discord
2. Find the AMA: head to the Events tab and the #community-calls and #model-mondays channels, or check the pinned events.
3. Ask anything: come with questions, ideas, or just listen in. No registration required.

Want a sneak peek at what's coming? Check the Foundry Fridays schedule or follow the Azure AI Foundry Blog for recaps and resources.

Final Thoughts
Whether you're building with Azure AI, exploring open-source models, or just curious about what's next, Foundry Fridays is your chance to connect, learn, and grow with the community. So grab your headphones, fire up Discord, and let's build the future of AI together.

Fridays | 1:30 PM ET | Azure AI Foundry Discord | Join Now

How to Master GitHub Copilot: Build, Prompt, Deploy Smarter
Mastering GitHub Copilot: Build, Prompt, Deploy Smarter is a free, hands-on workshop designed to help developers go beyond autocomplete and unlock the true power of AI-assisted coding. Instead of toy examples, this course walks you through real-world software engineering challenges: messy codebases, multi-language projects, cloud deployments, and legacy system upgrades. You'll learn practical skills like prompt engineering, advanced Copilot features, and AI pair programming techniques that make you faster, sharper, and more creative. Whether you're a junior developer or a seasoned architect, mastering GitHub Copilot will help you:
- Reduce cognitive load and focus on system design
- Accelerate onboarding for new engineers
- Write cleaner, more consistent code
- Automate repetitive tasks to free up time for innovation

AI coding tools like GitHub Copilot are no longer optional; they're essential. This workshop gives you the skills to collaborate with Copilot effectively and stay competitive in the age of AI-powered development.

Use Copilot and MCP to query Microsoft Learn Docs
Are you ready to take your Azure development workflow to the next level? In this post, we'll walk through how to use GitHub Copilot in Agent Mode, paired with MCP (Model Context Protocol) servers, to get trusted, grounded answers from Microsoft Learn Docs right inside your coding workspace. Whether you're tired of switching tabs to search documentation or want to ensure your AI assistant's answers are always accurate, this guide will show you how to streamline your workflow and boost your productivity.