SharePoint at 25: The knowledge platform for Copilot and agents
Join us for a global digital event to celebrate the 25th birthday of SharePoint! Gear up for an exciting look at SharePoint’s historic moments along with an exclusive look ahead to the next chapter of SharePoint’s AI future! You will discover how SharePoint’s new content AI capabilities and intelligent experiences will transform the way people create, manage, and collaborate. Be sure to stick around after for our live Ask Microsoft Anything (AMA), where you can ask your questions about the exciting new SharePoint features directly to the product team!

🛠️ Don’t miss the SharePoint Hackathon in March 2026
Design, create, share! We are excited to invite you to a hackathon dedicated to crafting exceptional employee experiences using AI and the latest SharePoint features. More details coming soon.

Quarterly AMA with Amir Netz (APAC) – Microsoft Fabric Partners, Don’t Miss Out!
🌟 Microsoft Partners—don’t miss your chance to connect directly with Amir Netz, CTO & Technical Fellow, at our exclusive Quarterly AMA (Ask Me Anything) session! The Fabric Engineering Connection is your gateway to insider insights, direct access to product leaders, and the latest updates on Microsoft Fabric. As a valued partner, you’ll have the opportunity to ask questions, share feedback, and learn about new features and strategies that can help you drive success for your clients and your business. This session is designed to empower partners with actionable knowledge and networking opportunities.

📅 Save the date:
Americas & EMEA: Wednesday, January 21, 8–9 am PT
APAC: Thursday, January 22, 1–2 am UTC / Wednesday, January 21, 5–6 pm PT

🔗 Not a member yet? Join the Fabric Partner Community to participate: https://aka.ms/JoinFabricPartnerCommunity. Let’s build the future of data together!

What’s New in Microsoft EDU, Bett Edition January 2026
Welcome to our update for Microsoft Education and our special Bett 2026 edition! The Bett conference takes place in London during the week of January 21st - January 23rd, and Microsoft Education has 18 exciting updates to share! Check out the official Bett News blog here, and for our full Bett schedule and session times, be sure to check out our Microsoft EDU Bett 2026 guide.

January 2026 topics:
• Microsoft 365 Updates for Educators
• Microsoft Learning Zone
• Microsoft 365 Updates for Students
• Teams EDU and OneNote EDU Updates
• Microsoft 365 LTI Updates
• Minecraft EDU

1. New Educator tools coming to the Teach Module in Microsoft 365

Unit Plans
Soon educators will be able to create unit plans in Teach. Using a familiar interface, educators will be able to describe their unit, ground it in existing content and educational standards, and attach any existing lesson plans. Unit plans will be created as Microsoft Word documents to facilitate easy edits and sharing.
When: Preview in Spring 2026

Minecraft Lesson Plans
Minecraft Education prepares students for the future workplace by helping build skills like collaboration, creative problem-solving, communication, and computational thinking. Coming soon, you will be able to create lesson plans in Teach that are fully teachable in Minecraft Education. And if you’re new to Minecraft Education, the lesson plan includes step-by-step instructions to get started. Just like the existing lesson plan tool in Teach, Minecraft Lessons can be grounded on your class details, existing content, and educational standards from 35+ countries.
When: Preview in February 2026

Modify Content
When: In Preview now
Teach supports educators in modifying their existing teaching materials using AI-powered tools that save time and help meet the diverse needs of learners.
With Modify existing content, educators can quickly adapt lessons they already use—without starting from scratch—by aligning materials to standards, differentiating instructions, adjusting reading levels, and enhancing text with supporting examples. Each modification tool accepts direct text input or file uploads from cloud storage, making it easy to transform current curriculum resources. These tools help educators maintain instructional intent while ensuring content is accessible, standards-aligned, and effective for all learners.

Align materials to standards
Aligning instructional content to educational standards helps ensure lessons clearly support required learning goals and set the right expectations for learners. The Align to Standards tool rewrites existing lesson instructions so they reflect the intent of the selected standard—focusing on what learners should understand or be able to do—without copying the standard’s wording.
Scenario: An educator has a lesson instruction for a reading activity on ecosystems. After selecting a state science standard, the educator uses Align to Standards to produce a revised instruction that emphasizes system interactions and evidence-based explanations while preserving the lesson’s original purpose. This allows the educator to strengthen alignment quickly without rewriting the lesson from scratch.

Differentiate instructions
Differentiation helps ensure every learner—regardless of readiness, background knowledge, or support needs—can access and engage with instructional tasks. The Differentiate Instructions tool adapts existing instructions based on specific supports an educator selects, such as adjusting reading level, including a single type of scaffold, or targeting a desired length. Because this tool is designed for single-shot use, it produces a clear, accurate adaptation that adheres directly to the selected inputs.
Scenario: A secondary biology educator has lab instructions written for general education learners but needs versions for learners requiring additional scaffolding. Using Differentiate Instructions, the educator quickly generates modified instructions that include step-by-step breakdowns, sentence starters, or graphic organizers—making the lab more accessible without changing the learning goal.

Modify reading level
Adjusting the reading level helps ensure instructional content remains accessible while preserving essential vocabulary and core concepts. The Modify reading level tool rewrites text to match a specified grade level, simplifying or increasing complexity as needed while maintaining meaning. Educators can also choose to generate a glossary with clear, age-appropriate definitions of key terms.
Scenario: A social studies educator wants students to work with a primary source written at a university reading level. Using Modify reading level, the educator creates a version that maintains the document’s key ideas and important historical terms while simplifying sentence structure for lower secondary learners. By adding a glossary, students can access learner-friendly definitions alongside the adapted text.

Add supporting examples
Concrete examples strengthen understanding by connecting abstract ideas to real-world applications. The Add Supporting Examples tool enhances existing instructional content by appending relevant, accurate, and age-appropriate examples—without altering the original paragraph.
Scenario: An educator teaching thermal energy transfer has a paragraph explaining that heat moves from warmer objects to cooler ones, but the concept feels abstract. Using Add Supporting Examples, the educator adds real-world examples—such as a metal spoon warming in hot soup or an ice cube melting on a countertop—to help learners visualize how heat transfer works. These examples reinforce understanding and make the concept more accessible for secondary learners.
Fill in the Blanks, Matching, and Quizzing
New Learning Activities are coming soon! We’re excited to introduce three new Learning Activities designed to make classroom experiences more dynamic and personalized: Fill in the Blanks, Matching, and Quizzes. Whether it’s completing paragraphs to strengthen comprehension, pairing terms with definitions in a timed matching game, or testing knowledge through quick self-assessments, these activities bring variety and fun to learning. Fill in the Blanks creates paragraphs where learners can check their understanding by filling in missing terms. Matching is a game where learners can match terms and definitions while racing against the clock, aiming for fast completion and accuracy. And Quizzes allows students to quiz themselves and assess their comprehension. Learning Activities are available across our education products: in a standalone web app, in the Teach Module, in Teams for Education, in the Study and Learn agent, and in Study Guides.
When: Spring 2026

Teach Module updates in Teams Classwork
In Teams Classwork, you can already use Copilot to create Lesson Plans, Flashcards, and Fill in the Blank activities. Coming this Spring, you will see the ability to create and modify more content, better matching the capabilities of Teach in the Microsoft 365 Copilot App. This includes modifying content with AI, Minecraft Lessons, and more!
When: Coming soon

Teach Module and Class Notebook integration
We're bringing Copilot-powered creation tools directly into OneNote Class Notebook. Teachers will be able to generate Learning Activities and quizzes or modify existing content (like adjusting reading level or adding supporting examples) without leaving the page where they're already planning.
When: Coming soon
2. Spark classroom engagement with Microsoft Learning Zone

Educators worldwide are always looking for innovative ways to engage students, personalize learning, and support individual growth, yet limited time and resources often stand in their way. Microsoft Learning Zone, a new Windows app, empowers educators to transform any idea or resource into an interactive, personalized lesson using AI on Copilot+ PCs. The app also provides actionable insights to guide instruction and support every student’s progress. Learning Zone is now available to download from the Windows app store and included at no additional cost with all Microsoft Education licenses.

Just in time for Bett 2026, Learning Zone has earned the prestigious ISTE Seal of Alignment - a recognized mark of quality, accessibility, and inclusive design. This recognition reflects our commitment to delivering meaningful, inclusive, and research-backed digital learning experiences for every learner. As noted by ISTE reviewers: "Microsoft Learning Zone saves educators valuable time while delivering personalized instruction that addresses individual learning needs."

Getting started with Microsoft Learning Zone is simple. Educators begin by defining their lesson goals and preferences and can also choose to reference their teaching materials or trusted in-app resources by OpenStax. From there, AI does the heavy lifting, generating a complete, interactive lesson with engaging content slides and a variety of practice activities. Educators can also quickly create Kahoot! quizzes using AI, bringing live classroom gamification into their lessons with just a few clicks.

Learning Zone is more than content creation; it provides a full classroom-ready solution, from assignment to actionable insights. Once a lesson is created and reviewed, educators can assign it to students.
Students complete lessons at their own pace, on any device, while the lesson flow adapts to their responses, helping reinforce understanding, revisit missed concepts, and build confidence over time. Educators, in turn, gain clear, actionable insights into student progress and mastery, enabling them to personalize instruction and better support every learner’s growth.

Learning Zone is a classroom-ready solution including management and actionable insights

Learning Zone also includes an extensive library of ready-to-learn lessons developed in collaboration with leading global organizations, including the Nobel Peace Center, PBS NewsHour, the World Wildlife Fund (WWF), NASA, OpenStax, Figma, and Minecraft Education. Ready-to-learn lessons are available to educators and students on any Windows device and are a great way to inspire curiosity and bring meaningful learning of different subjects into the classroom.

Ready-to-learn library in partnership with trusted global organizations

Learning Zone is available today: Visit https://learningzone.microsoft.com to learn more and download the app.

3. New AI-powered tools for student learning in Microsoft 365

Study and Learn Agent
Bring the interactive, conversational Study and Learn Agent in the Microsoft 365 Copilot App to your students. Available to all Microsoft EDU customers, the agent does not require an additional Copilot license. It is going into preview now, in January 2026. Join the Microsoft Education Insiders community at https://aka.ms/joinEIP to get information about accessing the Preview. Study and Learn helps learners understand concepts, practice skills with activities like flashcards, and prepare for tests with study guides and quizzes. Additional activities, including fill-in-the-blanks and matching, will continue to be added. Purpose-built for learning in collaboration with learning science experts, Study and Learn aims to help foster reflective and critical thinking.
Over time, it will provide a more personalized, adaptive, inclusive experience to make learning relevant and bolster motivation.
When: January 2026 Preview

Learning Activities app
The Learning Activities Web App is now here! This web-based experience brings all your favorite activities together in one place, making it easier than ever to create, customize, and share engaging content. Whether you’re an educator designing lessons or a student building study sets, the web app offers a streamlined interface for finding or creating Flashcards and Fill in the Blanks, with Matching and Quizzes coming soon. You can also access all the activities you have created in other products from the web app.
When: Available now!

4. Updates for your favorite teaching tools - Teams EDU and OneNote EDU

Set AI Guidelines in Teams
To help bring clarity to AI use in the classroom, AI Guidelines in Assignments allow educators to set clear expectations for when and how students can use AI—directly within the assignment experience. Educators start with a set of default, standardized AI use levels, and can apply them at the class or assignment level, with the ability to customize descriptions to reflect their school or district guidelines. These guidelines are clearly visible to students, reducing confusion and supporting responsible, transparent AI use, while also encouraging learners to use secure, education-ready Copilot.
When: In Preview Q1

Add Learning Activities to Teams Assignments
Learning Activities are coming to Teams Assignments and supported LMS platforms in preview, helping educators integrate interactive practice into the assignment workflows they already use. Educators can add activities such as Flashcards, Fill in the Blanks, and Matching, and share resource documents that enable students to create their own learning activities within an assignment or the Classwork module.
Students complete activities seamlessly within Assignments or their LMS, with progress captured as part of the assignment experience—supporting active, student-driven learning while keeping setup, instruction, and review in one familiar place. Students can create their own learning activities from educator-shared resources within an assignment or Classwork.
When: In Preview Q1

New information literacy features in Search Progress in Teams Assignments
Now students don't just gather sources—they investigate them. Four new research prompts (Source Reputation, Factual Importance, Cross-check, Source Purpose) make their thinking visible as they research. Read more about these new features in the preview blog here, and stay tuned for Microsoft Learn course updates to come.
When: Available now

Add Learning Zone lessons to Teams Assignments and LMS
Learning Zone lessons are coming to Teams Assignments and Microsoft 365 LTI for LMS platforms in preview, allowing educators to bring interactive lessons directly into the assignments and grading workflows they already use. Educators can attach Learning Zone lessons during assignment creation, while students complete them fully embedded within Assignments or their LMS, with progress and scores automatically synchronized for review. This preview helps educators save time, reduce manual setup and grading steps, and confidently deliver interactive learning experiences—while keeping assignment creation, student work, and review all in one place.
When: Preview in February

Embed Learning Activities in OneNote
You asked, we're building it. Soon, learners and educators alike will be able to copy a Learning Activity link, paste it into any OneNote classic page, and have it render inline—all to help folks engage without leaving the page.
When: NH Spring 2026
5. Create with Copilot in your LMS

In addition to supporting the new Learning Zone lessons in assignments, we are adding exciting new Create with Copilot options in Microsoft 365 LTI, which bring the AI-powered capabilities of the Teach Module directly into LMS content creation workflows. From within their course, educators can use Copilot to draft lesson materials and other instructional content, which is seamlessly published to the course using familiar Microsoft 365 tools. Create with Copilot is also available in LMS content editors to help educators compose content, discussion posts, and more. This includes the ability to modify existing content, if supported by the LMS platform. By embedding the creation experience where courses are designed and managed, Microsoft 365 LTI helps educators preserve instructional intent, reduce context switching, and move more quickly from planning to teaching. Microsoft 365 LTI is available to any Microsoft Education customer without additional licensing. LMS administrators can deploy the integration to an LTI 1.3-compatible LMS such as Canvas, Blackboard, PowerSchool Schoology Learning, D2L Brightspace, or Moodle to get started!
When: Preview in February

6. Dedicated servers coming to Minecraft Education

Minecraft Education is launching a new feature that enables IT administrators and educators to run dedicated servers hosting persistent worlds for use in classrooms and after-school programs, similar to Minecraft Bedrock’s dedicated servers (for the consumer version of the game). Dedicated servers enable cross-tenant gameplay, which is a game changer for expanding multiplayer experiences in the classroom or running Minecraft esports programs with other schools. This feature is currently in Beta, with general availability for all Minecraft Education users coming in February. (Minecraft Education is available in the Microsoft A3 and A5 software subscriptions for schools.)
___________________________________________________________________________________

And finally, just to recap all the news we have for you this month, here’s a quick review of all the features that are generally available or are rolling out soon:

Teach Module – Microsoft 365 Updates for Educators
• Unit Plans – available in spring
• Minecraft Lesson plans – preview in February
• Modify content – align to standards: private preview now
• Modify content – modify reading level: private preview now
• Modify content – add supporting examples: private preview now
• Modify content – differentiate instructions: private preview now
• Teach Module integration into OneNote Class Notebooks – preview in spring

Microsoft Learning Zone
• Available to download from the Windows store, at no additional cost
• Provides a full classroom-ready solution including lesson management and insights
• Teach Module, Teams Assignments, and LMS integration in March

Microsoft 365 Updates for Students
• Study and Learn Agent – preview in late January
• Learning Activities – Fill in the Blanks generally available
• Learning Activities – Matching activities in private preview now
• Learning Activities – Self-quizzing in private preview in February

Teams and OneNote EDU Updates
• Set expected AI use in Assignments – private preview end of January
• Add Flashcards to Assignments – private preview in February
• New information literacy features in Search Progress
• Embed Learning Activities in OneNote – private preview in spring

Copilot in your Learning Management System
Dedicated Minecraft EDU servers

Have any feedback to share with us? As always, we'd love to hear it!

Mike Tholfsen
Group Product Manager
Microsoft Education

New Azure blog on charting AI & agent strategy with Marketplace
In a new Azure blog, Cyril Belikoff, Vice President, Commercial Cloud and AI Marketing, discusses how organizations are using Microsoft Marketplace as a central hub for discovering, buying, and deploying AI models, applications, and agents to support their AI strategy. He emphasizes that there isn’t a one-size-fits-all approach—companies can build custom solutions, buy ready-made ones, or blend both depending on their needs. Marketplace offers thousands of pre-vetted models and AI tools that integrate with existing Microsoft products (like Microsoft Foundry and Copilot), helping teams accelerate time-to-value, maintain governance, and balance agility with oversight as they adopt AI more broadly. Read the full blog and share it with your customers: Design your AI strategy with Microsoft Marketplace Solutions
Advanced Function Calling and Multi-Agent Systems with Small Language Models in Foundry Local

In our previous exploration of function calling with Small Language Models, we demonstrated how to enable local SLMs to interact with external tools using a text-parsing approach with regex patterns. While that method worked, it required manual extraction of function calls from the model's output; functional but fragile. Today, I'm excited to show you something far more powerful: Foundry Local now supports native OpenAI-compatible function calling with select models. This update transforms how we build agentic AI systems locally, making it remarkably straightforward to create sophisticated multi-agent architectures that rival cloud-based solutions. What once required careful prompt engineering and brittle parsing now works seamlessly through standardized API calls. We'll build a complete multi-agent quiz application that demonstrates both the elegance of modern function calling and the power of coordinated agent systems. The full source code is available in this GitHub repository, but rather than walking through every line of code, we'll focus on how the pieces work together and what you'll see when you run it.

What's New: Native Function Calling in Foundry Local

As we explored in our guide to running Phi-4 locally with Foundry Local, we ran powerful language models on our local machine. The latest versions now support native function calling for models specifically trained with this capability. The key difference is architectural. In our weather assistant example, we manually parsed JSON strings from the model's text output using regex patterns—and, frankly, meticulously tested and tweaked the system prompt for the umpteenth time 🙄. Now, when you provide tool definitions to supported models, they return structured tool_calls objects that you can execute directly.
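To make that contrast concrete, here is a minimal sketch (not code from the quiz app) of what a tool schema and a returned tool_calls entry look like in the OpenAI function-calling format, reusing the weather-assistant idea from the earlier post. The get_weather function, its schema, and the example call object are illustrative assumptions:

```python
import json

# A tool schema in the OpenAI function-calling format (illustrative).
GET_WEATHER_SCHEMA = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def get_weather(city: str) -> str:
    # Stand-in implementation for the example.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """With native function calling, the model returns a structured
    call: a function name plus JSON-encoded arguments. No regex over
    free text is needed -- just decode and dispatch."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return TOOLS[name](**args)

# Shape of one tool_calls entry as returned by an OpenAI-compatible API.
example_call = {
    "id": "call_1",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"city": "Oslo"}'},
}
print(dispatch(example_call))  # -> Sunny in Oslo
```

Compare this with the old approach, where the same information had to be fished out of conversational text with patterns that broke whenever the model changed its phrasing.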
Currently, this native function calling capability is available for the Qwen 2.5 family of models in Foundry Local. For this tutorial, we're using the 7B variant, which strikes a great balance between capability and resource requirements.

Quick Setup

Getting started requires just a few steps. First, ensure you have Foundry Local installed. On Windows:

```shell
winget install Microsoft.FoundryLocal
```

On macOS:

```shell
brew install microsoft/foundrylocal/foundrylocal
```

You'll need version 0.8.117 or later. Install the Python dependencies in the requirements file, then start your model. The first run will download approximately 4GB:

```shell
foundry model run qwen2.5-7b-instruct-cuda-gpu
```

If you don't have a compatible GPU, use the CPU version instead, or you can specify any other Qwen 2.5 variant that suits your hardware. I have set a DEFAULT_MODEL_ALIAS variable in the utils/foundry_client.py file that you can modify to use different models. Keep this terminal window open; the model needs to stay running while you develop and test your application.

Understanding the Architecture

Before we dive into running the application, let's understand what we're building. Our quiz system follows a multi-agent architecture where specialized agents handle distinct responsibilities, coordinated by a central orchestrator. The flow works like this: when you ask the system to generate a quiz about photosynthesis, the orchestrator agent receives your message, understands your intent, and decides which tool to invoke. It doesn't try to generate the quiz itself; instead, it calls a tool that creates a specialist QuizGeneratorAgent focused solely on producing well-structured quiz questions. Then there's another agent, the ReviewAgent, that reviews the quiz with you.
The project structure reflects this architecture:

```
quiz_app/
├── agents/          # Base agent + specialist agents
├── tools/           # Tool functions the orchestrator can call
├── utils/           # Foundry client connection
├── data/
│   ├── quizzes/     # Generated quiz JSON files
│   └── responses/   # User response JSON files
└── main.py          # Application entry point
```

The orchestrator coordinates three main tools: generate_new_quiz, launch_quiz_interface, and review_quiz_interface. Each tool either creates a specialist agent or launches an interactive interface (Gradio), handling the complexity so the orchestrator can focus on routing and coordination.

How Native Function Calling Works

When you initialize the orchestrator agent in main.py, you provide two things: tool schemas that describe your functions to the model, and a mapping of function names to actual Python functions. The schemas follow the OpenAI function calling specification, describing each tool's purpose, parameters, and when it should be used.

Here's what happens when you send a message to the orchestrator. The agent calls the model with your message and the tool schemas. If the model determines a tool is needed, it returns a structured tool_calls attribute containing the function name and arguments as a proper object—not as text to be parsed. Your code executes the tool, creates a message with "role": "tool" containing the result, and sends everything back to the model. The model can then either call another tool or provide its final response.

The critical insight is that the model itself controls this flow through a while loop in the base agent. Each iteration represents the model examining the current state, deciding whether it needs more information, and either proceeding with another tool call or providing its final answer. You're not manually orchestrating when tools get called; the model makes those decisions based on the conversation context.
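The loop just described can be sketched in a few lines. This is a simplified illustration, not the repository's base agent: the model is replaced by a scripted stub so the flow runs offline, and the names run_agent, fake_model, and the stub generate_new_quiz are invented for the example:

```python
import json

def run_agent(model_call, messages, tool_schemas, tool_map, max_turns=5):
    """Minimal version of the coordination loop: keep calling the
    model until it replies without requesting a tool."""
    for _ in range(max_turns):
        reply = model_call(messages, tool_schemas)
        if not reply.get("tool_calls"):
            return reply["content"]          # final answer, no tool needed
        messages.append(reply)               # assistant turn with tool_calls
        for call in reply["tool_calls"]:
            fn = tool_map[call["function"]["name"]]
            result = fn(**json.loads(call["function"]["arguments"]))
            messages.append({                # feed the result back as a tool message
                "role": "tool",
                "tool_call_id": call["id"],
                "content": json.dumps(result),
            })
    raise RuntimeError("agent did not finish within max_turns")

def generate_new_quiz(topic, num_questions):
    # Stand-in for the real tool, which would create a QuizGeneratorAgent.
    return {"topic": topic, "questions": num_questions}

def fake_model(messages, tools):
    # Scripted stand-in for the LLM: request a tool once, then answer.
    if messages[-1]["role"] == "tool":
        return {"role": "assistant", "content": "Quiz ready!", "tool_calls": None}
    return {"role": "assistant", "content": None, "tool_calls": [{
        "id": "call_1", "type": "function",
        "function": {"name": "generate_new_quiz",
                     "arguments": '{"topic": "photosynthesis", "num_questions": 5}'},
    }]}

answer = run_agent(fake_model, [{"role": "user", "content": "Quiz me"}],
                   [], {"generate_new_quiz": generate_new_quiz})
print(answer)  # -> Quiz ready!
```

In the real application, model_call would hit the Foundry Local endpoint through an OpenAI-compatible client; the loop itself stays exactly this shape.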
Seeing It In Action

Let's walk through a complete session to see how these pieces work together. When you run python main.py, you'll see the application connect to Foundry Local and display a welcome banner. Now type a request like "Generate a 5 question quiz about photosynthesis" and watch what happens in your console: the orchestrator recognized your intent, selected the generate_new_quiz tool, and extracted the topic and number of questions from your natural-language request. Behind the scenes, this tool instantiated a QuizGeneratorAgent with a focused system prompt designed specifically for creating quiz JSON. The agent used a low temperature setting to ensure consistent formatting and generated questions that were saved to the data/quizzes folder.

This demonstrates the first layer of the multi-agent architecture: the orchestrator doesn't generate quizzes itself. It recognizes that this task requires specialized knowledge about quiz structure and delegates to an agent built specifically for that purpose.

Now request to take the quiz by typing "Take the quiz." The orchestrator calls a different tool and a Gradio server is launched. Click the link to open a browser window displaying your quiz questions. This tool demonstrates how function calling can trigger complex interactions—it reads the quiz JSON, dynamically builds a user interface with radio buttons for each question, and handles the submission flow. After you answer the questions and click submit, the interface saves your responses to the data/responses folder and closes the Gradio server. The orchestrator reports completion.

The system now has two JSON files: one containing the quiz questions with correct answers, and another containing your responses. This separation of concerns is important—the quiz generation phase doesn't need to know about response collection, and the response collection doesn't need to know how quizzes are created. Each component has a single, well-defined responsibility.
Now request a review. The orchestrator calls the third tool: a new chat interface opens, and here's where the multi-agent architecture really shines. The ReviewAgent is instantiated with full context about both the quiz questions and your answers. Its system prompt includes a formatted view of each question, the correct answer, your answer, and whether you got it right. This means when the interface opens, you immediately see personalized feedback.

The Multi-Agent Pattern

Multi-agent architectures solve complex problems by coordinating specialized agents rather than building monolithic systems. This pattern is particularly powerful for local SLMs. A coordinator agent routes tasks to specialists, each optimized for narrow domains with focused system prompts and specific temperature settings. You can use a 1.7B model for structured data generation, a 7B model for conversations, and a 4B model for reasoning, all orchestrated by a lightweight coordinator. This is more efficient than requiring one massive model for everything.

Foundry Local's native function calling makes this straightforward. The coordinator reliably invokes tools that instantiate specialists, with structured responses flowing back through proper tool messages. The model manages the coordination loop—deciding when it needs another specialist, when it has enough information, and when to provide a final answer.

In our quiz application, the orchestrator routes user requests but never tries to be an expert in quiz generation, interface design, or tutoring. The QuizGeneratorAgent focuses solely on creating well-structured quiz JSON using constrained prompts and low temperature. The ReviewAgent handles open-ended educational dialogue with embedded quiz context and higher temperature for natural conversation. The tools abstract away file management, interface launching, and agent instantiation; the orchestrator just knows "this tool launches quizzes" without needing implementation details.
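One way to picture the per-specialist settings described above is a small configuration table. The model aliases, temperatures, and prompts below are illustrative assumptions, not values taken from the repository:

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """Per-specialist settings: each agent gets its own model size,
    temperature, and focused system prompt."""
    model: str
    temperature: float
    system_prompt: str

SPECIALISTS = {
    # Low temperature keeps the quiz JSON format stable.
    "quiz_generator": AgentConfig(
        model="qwen2.5-1.5b-instruct", temperature=0.1,
        system_prompt="Produce quiz questions as strict JSON."),
    # Higher temperature gives the tutor a more natural voice.
    "reviewer": AgentConfig(
        model="qwen2.5-7b-instruct", temperature=0.7,
        system_prompt="Discuss the learner's answers supportively."),
}

def pick_specialist(task: str) -> AgentConfig:
    # The real orchestrator lets the model choose via tool schemas;
    # this lookup just makes the routing idea concrete.
    return SPECIALISTS[task]

print(pick_specialist("quiz_generator").temperature)  # -> 0.1
```

The point of the table is that the coordinator never needs to know these details; it only sees tool names and descriptions, while each tool constructs its specialist with the right settings.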
This pattern scales effortlessly. If you want to add a new capability such as study guides or flashcards, you can simply create a new tool and specialist agent. The orchestrator picks up these capabilities automatically from the tool schemas you define, without any changes to its core logic. This same pattern powers production systems with dozens of specialists handling retrieval, reasoning, execution, and monitoring, each excelling in its domain while the coordinator ensures seamless collaboration.

Why This Matters

The transition from text parsing to native function calling enables a fundamentally different approach to building AI applications. With text parsing, you're constantly fighting the unpredictability of natural language output. A model might decide to explain why it's calling a function before outputting the JSON, or format the JSON slightly differently than your regex expects, or wrap it in markdown code fences. Native function calling eliminates this entire class of problems. The model is trained to output tool calls as structured data, separate from its conversational responses.

The multi-agent aspect builds on this foundation. Because function calling is reliable, you can confidently delegate to specialist agents knowing they'll integrate smoothly with the orchestrator. You can also chain tool calls: the orchestrator might generate a quiz and then immediately launch the interface to take it, all from a single user request like "Create and give me a quiz about machine learning." The model handles this orchestration intelligently because the tool results flow back as structured data it can reason about.

Running everything locally through Foundry Local adds another dimension of value, and I am genuinely excited about this (hopefully the Phi models get this functionality soon). You can experiment freely, iterate quickly, and deploy solutions that run entirely on your own infrastructure.
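The chaining behaviour comes from the coordination loop: call the model, execute any tool calls it requests, append the results as tool messages, and repeat until the model answers in plain text. Here is a generic sketch of that loop under OpenAI-style message conventions; the scripted fake model and the registry entries are stand-ins so the example runs without a live Foundry Local endpoint.

```python
import json

def run_orchestrator(model, messages, tools, registry, max_rounds=5):
    """Keep calling the model until it replies with text instead of
    requesting another tool. `model` is any callable with an
    OpenAI-style chat interface (an assumption for this sketch)."""
    for _ in range(max_rounds):
        reply = model(messages=messages, tools=tools)
        calls = reply.get("tool_calls")
        if not calls:                 # final conversational answer
            return reply["content"]
        messages.append(reply)        # keep the assistant turn
        for call in calls:            # results flow back as tool messages
            fn = call["function"]
            result = registry[fn["name"]](**json.loads(fn["arguments"]))
            messages.append({"role": "tool",
                             "tool_call_id": call["id"],
                             "content": json.dumps(result)})
    return "Stopped: too many tool rounds."

# A scripted stand-in model: first it generates a quiz, then launches it,
# then answers -- mimicking the chained "create and give me a quiz" flow.
script = iter([
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "1", "function": {"name": "generate_new_quiz",
                                 "arguments": '{"topic": "machine learning"}'}}]},
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "2", "function": {"name": "take_quiz", "arguments": "{}"}}]},
    {"role": "assistant", "content": "Quiz created and launched!"},
])
fake_model = lambda messages, tools: next(script)

registry = {"generate_new_quiz": lambda topic: f"quiz on {topic} saved",
            "take_quiz": lambda: "responses collected"}
answer = run_orchestrator(fake_model, [], tools=[], registry=registry)
print(answer)  # Quiz created and launched!
```

Swapping the fake model for a real chat-completions client is the only change needed in principle; the loop itself never inspects free-form text, which is exactly the reliability win described above.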
For educational applications like our quiz system, this means students can interact with the AI tutor as much as they need without cost concerns.

Getting Started With Your Own Multi-Agent System

The complete code for this quiz application is available in the GitHub repository, and I encourage you to clone it and experiment. Try modifying the tool schemas to see how the orchestrator's behavior changes. Add a new specialist agent for a different task. Adjust the system prompts to see how agent personalities and capabilities shift.

Think about the problems you're trying to solve. Could they benefit from different specialists handling different aspects? A customer service system might have agents for order lookup, refund processing, and product recommendations. A research assistant might have agents for web search, document summarization, and citation formatting. A coding assistant might have agents for code generation, testing, and documentation.

Start small, perhaps with two or three specialist agents for a specific domain. Watch how the orchestrator learns to route between them based on the tool descriptions you provide. You'll quickly see opportunities to add more specialists, refine the existing ones, and build increasingly sophisticated systems that leverage the unique strengths of each agent while presenting a unified, intelligent interface to your users.

In the next entry, we will deploy our quiz app, which will mark the end of our journey through Foundry and SLMs over these past few weeks. I hope you are as excited as I am! Thanks for reading.

Unlock key strategies for Marketplace growth
Looking to strengthen your go‑to‑market strategy in the year ahead? Don’t miss Microsoft’s latest guidance on how partners can accelerate success in the commercial marketplace.

At Microsoft Ignite 2025, Microsoft Marketplace emerged as a central platform for cloud and AI innovation, with major enhancements such as expanded resale-enabled offers, deeper integration across Microsoft’s cloud ecosystem, and new tools designed to streamline procurement and accelerate value delivery for customers.

This resource distills 10 essential tips to help organizations optimize listings, leverage private offers, improve discoverability, and scale more effectively in the evolving AI-first landscape. It’s a valuable guide for any partner aiming to elevate their marketplace performance and align with Microsoft’s modern go-to-market approach.

Read more: 10 Essential tips for Marketplace success: Insights from Microsoft Ignite 2025 | Microsoft Community Hub

🚀 AI Toolkit for VS Code: January 2026 Update
Happy New Year! 🎆 We are kicking off 2026 with a major set of updates designed to streamline how you build, test, and deploy AI agents. This month, we’ve focused on aligning with the latest GitHub Copilot standards, introducing powerful new debugging tools, and enhancing our support for enterprise-grade models via Microsoft Foundry.

💡 From Copilot Instructions to Agent Skills

The biggest architectural shift in v0.28.1, following the latest VS Code Copilot standards, is the transition from Copilot Instructions to Agent Skills. This transition equips GitHub Copilot with specialized skills for developing AI agents using Microsoft Foundry and the Agent Framework in a cost-efficient way. In AI Toolkit, we have migrated our Copilot tools from Custom Instructions to Agent Skills. This change allows for a more capable integration within GitHub Copilot Chat.

🔄 Enhanced AIAgentExpert: Our custom agent now has a deeper understanding of workflow code generation and evaluation planning/execution.
🧹 Automatic Migration: When you upgrade to v0.28.1, the toolkit will automatically clean up your old instructions to ensure a seamless transition to the new skills-based framework.

🏗️ Major Enhancements to Agent Development

Our v0.28.0 milestone release brought significant improvements to how agents are authored and authenticated.

🔒 Anthropic & Entra Auth Support

We’ve expanded the Agent Builder and Playground to support Anthropic models using Entra auth types. This provides enterprise developers with a more secure way to leverage Claude models within the Agent Framework while maintaining strict authentication standards.

🏢 Foundry-First Development

We are prioritizing the Microsoft Foundry ecosystem to provide a more robust development experience:

Foundry v2: Code generation for agents now defaults to Foundry v2.
⚡ Eval Tool: You can now generate evaluation code directly within the toolkit to create and run evaluations in Microsoft Foundry.
📊 Model Catalog: We’ve optimized the Model Catalog to prioritize Foundry models and improved general loading performance. 🏎️

💻 Performance and Local Models

For developers building on Windows, we continue to optimize the local model experience:

Profiling for Windows ML: Version 0.28.0 introduces profiling features for Windows ML-based local models, allowing you to monitor performance and resource utilization directly within VS Code.
Platform Optimization: To keep the interface clean, we’ve removed the Windows AI API tab from the Model Catalog when running on Linux and macOS platforms.

🐛 Squashing Bugs & Polishing the Experience

Codespaces Fix: Resolved a crash occurring when selecting images in the Playground while using GitHub Codespaces.
Resource Management: Fixed a delay where newly added models wouldn’t immediately appear in the "My Resources" view.
Claude Compatibility: Fixed an issue where non-empty content was required for Claude models when used via the AI Toolkit in GitHub Copilot.

🚀 Getting Started

Ready to experience the future of AI development? Here's how to get started:

📥 Download: Install the AI Toolkit from the Visual Studio Code Marketplace
📖 Learn: Explore our comprehensive AI Toolkit Documentation
🔍 Discover: Check out the complete changelog for v0.24.0

We'd love to hear from you! Whether it's a feature request, bug report, or feedback on your experience, join the conversation and contribute directly on our GitHub repository. Happy Coding! 💻✨

Optimizing maintenance workflows with AI and Azure
In our Partner Spotlight series, we highlight organizations driving innovation across the Microsoft Marketplace. In each feature, we share the distinct journey of a partner leveraging the Microsoft ecosystem to deliver AI‑enabled solutions and transactable offers that streamline enterprise adoption and accelerate digital transformation.

In this installment, we spoke with Benjamin Schwärzler of Workheld, a Vienna‑based SaaS company transforming maintenance management for asset‑intensive industries. We dive into their early beginnings, their growth as a Microsoft partner, and the ways they’re helping organizations close the gap between shopfloor execution and strategic decision‑making, powered by a secure, Azure‑based architecture.

Read more to learn how Azure is shaping the future of intelligent maintenance management: AI-powered maintenance management with Microsoft Azure | Microsoft Community Hub

APAC Fabric Engineering Connection
🚀 Upcoming Fabric Engineering Connection Call – Americas, EMEA & APAC!

Join us on Wednesday, January 14, 8–9 am PT (Americas & EMEA) and Thursday, January 15, 1–2 am UTC (APAC) for a special session featuring the latest Power BI updates and announcements from Ignite with Sujata Narayana, Rui Romano, and other members of the Power BI product team. Plus, hear from Tom Peplow on developing apps on OneLake APIs.

🔗 To participate, make sure you’re a member of the Fabric Partner Community Teams channel. If you haven’t joined yet, sign up here: https://lnkd.in/g_PRdfjt

Don’t miss this opportunity to learn, connect, and stay up to date with the latest in Microsoft Fabric and Power BI!