Study Buddy: Learning Data Science and Machine Learning with an AI Sidekick
If you've ever wished for a friendly companion to guide you through the world of data science and machine learning, you're not alone. As part of the "For Beginners" curriculum, I recently built a Study Buddy Agent, an AI-powered assistant designed to help learners explore data science interactively, intuitively, and joyfully.

Why a Study Buddy?

Learning something new can be overwhelming, especially when you're navigating complex topics like machine learning, statistics, or Python programming. The Study Buddy Agent is here to change that. It brings the curriculum to life by answering questions, offering explanations, and nudging learners toward deeper understanding, all in a conversational format. Think of it as your AI-powered lab partner: always available, never judgmental, and endlessly curious.

Built with Chat Modes, Powered by Purpose

The agent is defined in a chat mode file in the repository: https://github.com/microsoft/Data-Science-For-Beginners/blob/main/.github/chatmodes/study-mode.chatmode.md (a minimal sketch of such a file appears at the end of this post). This file defines how the agent behaves, what tone it uses, and how it interacts with learners. I designed it to be friendly, encouraging, and beginner-first, just like the curriculum itself. It's not just about answering questions. The Study Buddy is trained to:

- Reinforce key concepts from the curriculum
- Offer hints and nudges when learners get stuck
- Encourage exploration and experimentation
- Celebrate progress and milestones

What's Under the Hood?

The agent uses GitHub Copilot's custom chat modes, which let developers define custom behaviors for AI agents. By aligning the agent's responses with the curriculum's learning objectives, we ensure that learners stay on track while enjoying the flexibility of conversational learning.

How You Can Use It

YouTube video: Study Buddy - Data Science AI Sidekick

1. Clone the repo: Head to https://github.com/microsoft/Data-Science-For-Beginners and clone it locally, or use Codespaces.
2. Open GitHub Copilot Chat and select Study Buddy: this activates the Study Buddy.
3. Start chatting: Ask questions, explore topics, and let the agent guide you.

What's Next?

This is just the beginning. I'm exploring ways to:

- Expand the agent to other beginner curriculums (Web Dev, AI, IoT)
- Integrate feedback loops so learners can shape the agent's evolution

Final Thoughts

In my role, I believe learning should be inclusive, empowering, and fun. The Study Buddy Agent is a small step toward that vision, a way to make data science feel less like a mountain and more like a hike with a good friend. Try it out, share your feedback, and let's keep building tools that make learning magical. Join us on Discord to share your feedback.
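For readers curious about what such a chat mode file contains: it is plain Markdown with a short front-matter header followed by the agent's instructions. Below is a minimal, illustrative sketch; the field names follow VS Code's custom chat mode format, and the actual study-mode.chatmode.md in the repo is the authoritative version.

---
description: A friendly study buddy for the Data Science for Beginners curriculum
tools: []
---
You are Study Buddy, a patient and encouraging tutor for beginners.
- Reinforce concepts from the curriculum before introducing new ones.
- Offer hints rather than complete answers when a learner is stuck.
- Celebrate progress and suggest a concrete next step after each exchange.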
Introducing the Microsoft Agent Framework: A Unified Foundation for AI Agents and Workflows

The landscape of AI development is evolving rapidly, and Microsoft is at the forefront with the release of the Microsoft Agent Framework, an open-source SDK designed to empower developers to build intelligent, multi-agent systems with ease and precision. Whether you're working in .NET or Python, this framework offers a unified, extensible foundation that merges the best of Semantic Kernel and AutoGen, while introducing powerful new capabilities for agent orchestration and workflow design.

- Introducing Microsoft Agent Framework: The Open-Source Engine for Agentic AI Apps | Azure AI Foundry Blog
- Introducing Microsoft Agent Framework | Microsoft Azure Blog

Why Another Agent Framework?

Both Semantic Kernel and AutoGen have pioneered agentic development: Semantic Kernel with its enterprise-grade features, and AutoGen with its research-driven abstractions. The Microsoft Agent Framework is the next generation of both, built by the same teams to unify their strengths:

- AutoGen's simplicity in multi-agent orchestration
- Semantic Kernel's robustness in thread-based state management, telemetry, and type safety
- New capabilities like graph-based workflows, checkpointing, and human-in-the-loop support

This convergence means developers no longer have to choose between experimentation and production. The Agent Framework is designed to scale from single-agent prototypes to complex, enterprise-ready systems.

Core Capabilities

AI Agents

AI agents are autonomous entities powered by LLMs that can process user inputs, make decisions, call tools and MCP servers, and generate responses. They support providers like Azure OpenAI, OpenAI, and Azure AI, and can be enhanced with:

- Agent threads for state management
- Context providers for memory
- Middleware for action interception
- MCP clients for tool integration

Use cases include customer support, education, code generation, research assistance, and more, especially where tasks are dynamic and underspecified.

Workflows

Workflows are graph-based orchestrations that connect multiple agents and functions to perform complex, multi-step tasks. They support:

- Type-based routing
- Conditional logic
- Checkpointing
- Human-in-the-loop interactions
- Multi-agent orchestration patterns (sequential, concurrent, hand-off, Magentic)

Workflows are ideal for structured, long-running processes that require reliability and modularity.

Developer Experience

The Agent Framework is designed to be intuitive and powerful:

- Installation: Python: pip install agent-framework | .NET: dotnet add package Microsoft.Agents.AI
- Integration: Works with the Foundry SDK, MCP SDK, A2A SDK, and M365 Copilot Agents
- Samples and manifests: Explore declarative agent manifests and code samples
- Learning resources: Microsoft Learn modules, AI Agents for Beginners, AI Show demos, Azure AI Foundry Discord community

Migration and Compatibility

If you're currently using Semantic Kernel or AutoGen, migration guides are available to help you transition smoothly. The framework is designed to be backward-compatible where possible, and future updates will continue to support community contributions via the GitHub repository.

Important Considerations

The Agent Framework is in public preview. Feedback and issues are welcome on the GitHub repository. When integrating with third-party servers or agents, review data sharing practices and compliance boundaries carefully.
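To make the developer experience concrete, here is a minimal "hello agent" sketch in Python. It follows the pattern shown in the public-preview quickstarts; treat the exact names (AzureOpenAIChatClient, create_agent, run) as preview APIs that may change, and note that the Azure OpenAI endpoint and deployment are assumed to come from environment variables.

import asyncio
from agent_framework.azure import AzureOpenAIChatClient
from azure.identity import AzureCliCredential

async def main():
    # Create a simple agent backed by an Azure OpenAI deployment.
    # Endpoint and model deployment are assumed to be configured via
    # environment variables, as in the preview quickstarts.
    agent = AzureOpenAIChatClient(credential=AzureCliCredential()).create_agent(
        name="HelpfulAssistant",
        instructions="You are a concise, helpful assistant.",
    )
    result = await agent.run("Summarize what an AI agent is in one sentence.")
    print(result)

asyncio.run(main())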
The Microsoft Agent Framework marks a pivotal moment in AI development, bringing together research innovation and enterprise readiness into a single, open-source foundation. Whether you're building your first agent or orchestrating a fleet of them, this framework gives you the tools to do it safely, scalably, and intelligently. Ready to get started? Download the SDK, explore the documentation, and join the community shaping the future of AI agents.
From Cloud to Chip: Building Smarter AI at the Edge with Windows AI PCs

As AI engineers, we've spent years optimizing models for the cloud: scaling inference, wrangling latency, and chasing compute across clusters. But the frontier is shifting. With the rise of Windows AI PCs and powerful local accelerators, the edge is no longer a constraint; it's a canvas. Whether you're deploying vision models to industrial cameras, optimizing speech interfaces for offline assistants, or building privacy-preserving apps for healthcare, Edge AI is where real-world intelligence meets real-time performance.

Why Edge AI, Why Now?

Edge AI isn't just about running models locally; it's about rethinking the entire lifecycle:

- Latency: Decisions in milliseconds, not round-trips to the cloud.
- Privacy: Sensitive data stays on-device, enabling HIPAA/GDPR compliance.
- Resilience: Offline-first apps that don't break when the network does.
- Cost: Reduced cloud compute and bandwidth overhead.

With Windows AI PCs powered by Intel and Qualcomm NPUs, and tools like ONNX Runtime, DirectML, and Olive, developers can now optimize and deploy models with unprecedented efficiency.

What You'll Learn in Edge AI for Beginners

The Edge AI for Beginners curriculum is a hands-on, open-source guide designed for engineers ready to move from theory to deployment.

Multi-Language Support

The content is available in over 48 languages, so you can read and study in your native language.

What You'll Master

This course takes you from fundamental concepts to production-ready implementations, covering:

- Small Language Models (SLMs) optimized for edge deployment
- Hardware-aware optimization across diverse platforms
- Real-time inference with privacy-preserving capabilities
- Production deployment strategies for enterprise applications

Why Edge AI Matters

Edge AI represents a paradigm shift that addresses critical modern challenges:

- Privacy & Security: Process sensitive data locally without cloud exposure
- Real-time Performance: Eliminate network latency for time-critical applications
- Cost Efficiency: Reduce bandwidth and cloud computing expenses
- Resilient Operations: Maintain functionality during network outages
- Regulatory Compliance: Meet data sovereignty requirements

Edge AI

Edge AI refers to running AI algorithms and language models locally on hardware, close to where data is generated, without relying on cloud resources for inference. It reduces latency, enhances privacy, and enables real-time decision-making.

Core principles:

- On-device inference: AI models run on edge devices (phones, routers, microcontrollers, industrial PCs)
- Offline capability: Functions without persistent internet connectivity
- Low latency: Immediate responses suited for real-time systems
- Data sovereignty: Keeps sensitive data local, improving security and compliance

Small Language Models (SLMs)

SLMs like Phi-4, Mistral-7B, Qwen, and Gemma are optimized versions of larger LLMs, trained or distilled for:

- Reduced memory footprint: Efficient use of limited edge device memory
- Lower compute demand: Optimized for CPU and edge GPU performance
- Faster startup times: Quick initialization for responsive applications

They unlock powerful NLP capabilities while meeting the constraints of:

- Embedded systems: IoT devices and industrial controllers
- Mobile devices: Smartphones and tablets with offline capabilities
- IoT devices: Sensors and smart devices with limited resources
- Edge servers: Local processing units with limited GPU resources
- Personal computers: Desktop and laptop deployment scenarios
Course Modules & Navigation

Course duration: 10 hours of content.

- 📖 00 Introduction to EdgeAI | Focus: Foundation & Context | Key content: EdgeAI Overview • Industry Applications • SLM Introduction • Learning Objectives | Level: Beginner | 1-2 hrs
- 📚 01 EdgeAI Fundamentals | Focus: Cloud vs Edge AI comparison | Key content: EdgeAI Fundamentals • Real World Case Studies • Implementation Guide • Edge Deployment | Level: Beginner | 3-4 hrs
- 🧠 02 SLM Model Foundations | Focus: Model families & architecture | Key content: Phi Family • Qwen Family • Gemma Family • BitNET • μModel • Phi-Silica | Level: Beginner | 4-5 hrs
- 🚀 03 SLM Deployment Practice | Focus: Local & cloud deployment | Key content: Advanced Learning • Local Environment • Cloud Deployment | Level: Intermediate | 4-5 hrs
- ⚙️ 04 Model Optimization Toolkit | Focus: Cross-platform optimization | Key content: Introduction • Llama.cpp • Microsoft Olive • OpenVINO • Apple MLX • Workflow Synthesis | Level: Intermediate | 5-6 hrs
- 🔧 05 SLMOps Production | Focus: Production operations | Key content: SLMOps Introduction • Model Distillation • Fine-tuning • Production Deployment | Level: Advanced | 5-6 hrs
- 🤖 06 AI Agents & Function Calling | Focus: Agent frameworks & MCP | Key content: Agent Introduction • Function Calling • Model Context Protocol | Level: Advanced | 4-5 hrs
- 💻 07 Platform Implementation | Focus: Cross-platform samples | Key content: AI Toolkit • Foundry Local • Windows Development | Level: Advanced | 3-4 hrs
- 🏭 08 Foundry Local Toolkit | Focus: Production-ready samples | Key content: Sample applications (see details below) | Level: Expert | 8-10 hrs

Each module includes Jupyter notebooks, code samples, and deployment walkthroughs, perfect for engineers who learn by doing.

Developer Highlights

- 🔧 Olive: Microsoft's optimization toolchain for quantization, pruning, and acceleration.
- 🧩 ONNX Runtime: Cross-platform inference engine with support for CPU, GPU, and NPU.
- 🎮 DirectML: GPU-accelerated ML API for Windows, ideal for gaming and real-time apps.
- 🖥️ Windows AI PCs: Devices with built-in NPUs for low-power, high-performance inference.

Local AI: Beyond the Edge

Local AI isn't just about inference; it's about autonomy. Imagine agents that:

- Learn from local context
- Adapt to user behavior
- Respect privacy by design

With tools like the Agent Framework, Azure AI Foundry, Windows Copilot Studio, and Foundry Local, developers can orchestrate local agents that blend LLMs, sensors, and user preferences, all without cloud dependency.

Try It Yourself

Ready to get started? Clone the Edge AI for Beginners GitHub repo, run the notebooks, and deploy your first model to a Windows AI PC or IoT device. Whether you're building smart kiosks, offline assistants, or industrial monitors, this curriculum gives you the scaffolding to go from prototype to production.
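As a first taste of the on-device toolchain above, here is a minimal ONNX Runtime inference sketch in Python. It assumes you have already exported or downloaded a model as "model.onnx" with a 1x3x224x224 float input (both are placeholders for your own model); the provider names are ONNX Runtime's standard identifiers.

import numpy as np
import onnxruntime as ort

# Prefer a local accelerator (DirectML on Windows) when available, else CPU.
available = ort.get_available_providers()
providers = [p for p in ("DmlExecutionProvider", "CPUExecutionProvider") if p in available]

session = ort.InferenceSession("model.onnx", providers=providers)

# Feed a dummy input shaped like the model expects (adjust for your model).
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: x})
print("output shape:", outputs[0].shape)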
Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour

Unlock the power of Microsoft AI with redeliverable technical presentations, hands-on workshops, and open-source curriculum from the Microsoft AI Tour! Whether you're a Microsoft MVP, Developer, or IT Professional, these expertly crafted resources empower you to teach, train, and lead AI adoption in your community. Explore top breakout sessions covering GitHub Copilot, Azure AI, Generative AI, and security best practices, designed to simplify AI integration and accelerate digital transformation. Dive into interactive workshops that provide real-world applications of AI technologies. Take it a step further with Microsoft's Open-Source AI Curriculum, offering beginner-friendly courses on AI, Machine Learning, Data Science, Cybersecurity, and GitHub Copilot, perfect for upskilling teams and fostering innovation. Don't just learn: lead. Access these resources, host impactful training sessions, and drive AI adoption in your organization. Start sharing today! Explore now: Microsoft AI Tour Resources.
How to Master GitHub Copilot: Build, Prompt, Deploy Smarter

Mastering GitHub Copilot: Build, Prompt, Deploy Smarter is a free, hands-on workshop designed to help developers go beyond autocomplete and unlock the true power of AI-assisted coding. Instead of toy examples, this course walks you through real-world software engineering challenges: messy codebases, multi-language projects, cloud deployments, and legacy system upgrades. You'll learn practical skills like prompt engineering, advanced Copilot features, and AI pair programming techniques that make you faster, sharper, and more creative. Whether you're a junior developer or a seasoned architect, mastering GitHub Copilot will help you:

- Reduce cognitive load and focus on system design
- Accelerate onboarding for new engineers
- Write cleaner, more consistent code
- Automate repetitive tasks to free up time for innovation

AI coding tools like GitHub Copilot are no longer optional; they're essential. This workshop gives you the skills to collaborate with Copilot effectively and stay competitive in the age of AI-powered development.
Use Copilot and MCP to query Microsoft Learn Docs

Are you ready to take your Azure development workflow to the next level? In this post, we'll walk through how to use GitHub Copilot in Agent Mode, paired with MCP (Model Context Protocol) servers, to get trusted, grounded answers from Microsoft Learn Docs right inside your coding workspace. Whether you're tired of switching tabs to search documentation or want to ensure your AI assistant's answers are always accurate, this guide will show you how to streamline your workflow and boost your productivity.
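For a sense of what the setup involves: in VS Code, MCP servers are registered in a .vscode/mcp.json file. A minimal, illustrative configuration for a remote docs server might look like the sketch below; treat the server name and URL as assumptions to verify against the official Microsoft Learn MCP documentation.

{
  "servers": {
    "microsoft-learn": {
      "type": "http",
      "url": "https://learn.microsoft.com/api/mcp"
    }
  }
}

With the server registered, Copilot's Agent Mode can call its tools to pull grounded documentation snippets into the chat session.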
Reimagining Telco with Microsoft: AI, TM Forum ODA, and Developer Innovation

The telecom industry is undergoing a seismic shift, driven by AI, open digital architectures, and the urgent need for scalable, customer-centric innovation. At the heart of this transformation is TM Forum Innovate Americas 2025, a flagship event bringing together global leaders to reimagine the future of connectivity. Microsoft's presence at this year's event is both strategic and visionary. As a key partner in the telecom ecosystem, Microsoft is showcasing how its technologies, spanning AI, cloud, and developer tools, are enabling Communication Service Providers (CSPs) to modernize operations, accelerate innovation, and deliver exceptional customer experiences.

🔑 Key Themes Shaping the Conversation

- Connected Intelligence: Microsoft is championing a new model of collaboration, one where AI systems, teams, and technologies work together seamlessly to solve real-world problems. This approach breaks down silos and enables intelligent decision-making across the enterprise.
- AI-First Mindset: From network optimization to customer service, Microsoft is helping telcos embed AI into the fabric of their operations. The focus is on building shared data platforms, connected models, and orchestration frameworks that scale.
- Customer Experience & Efficiency: With rising expectations and increasing complexity, CSPs must deliver faster, smarter, and more personalized services. Microsoft's solutions are designed to enhance agility, reduce friction, and elevate the end-user experience.

As the event unfolds, Microsoft's sessions and showcases will highlight how these themes come to life through real-world implementations, collaborative frameworks, and developer-first tools.

Thought Leadership & Sessions

At TM Forum Innovate Americas 2025, Microsoft is not just showcasing technology; it's sharing a bold vision for the future of telecom. Through a series of thought-provoking sessions led by industry experts, Microsoft is demonstrating how AI, open standards, and developer tools can converge to drive meaningful transformation across the telco ecosystem. From enabling intelligent collaboration through Azure AI Foundry, to operationalizing AI and Open Digital Architecture (ODA) for autonomous networks, and empowering developers with GitHub Copilot, Microsoft's contributions reflect a deep commitment to innovation, scalability, and interoperability. Each session offers a unique lens into how Microsoft is helping Communication Service Providers (CSPs) modernize their IT stacks, accelerate development, and deliver exceptional customer experiences.

Microsoft Thought Leadership Sessions

- CASE STUDY: Connected Intelligence: multiplying AI value across the enterprise | 📅 Sep 10, 1:30pm CDT | Peter Huang, Senior Director, Technology, Network Data and AI, T-Mobile; Andres Gil, Industry Advisor/Business Developer, Telco, Media and Gaming Industry, Microsoft
- CASE STUDY: From hype to impact: operationalizing AI in telco with TM Forum's ODA and Open APIs | 📅 Sep 11, 1:30pm CDT | Puja Athale, Director, Telco Global Azure AI Lead, Microsoft

Connected Intelligence & Azure AI Foundry: Scaling AI Across the Telco Enterprise

T-Mobile and Microsoft are spotlighting a transformative approach to enterprise AI: Connected Intelligence. The joint session explores how telcos can break down silos and unlock the full potential of AI by enabling strategic collaboration across systems, teams, and technologies. The core challenge they address is clear: AI in isolation cannot answer even the simplest customer questions.
Whether it's billing, device performance, or network coverage, fragmented systems lead to blind spots, duplication, and poor customer outcomes. To overcome this, they propose a unified framework that blends technology and culture, because tech alone doesn't scale, and culture alone doesn't transform.

Azure AI Foundry: The Engine Behind Connected Intelligence

At the heart of this vision is Microsoft's Azure AI Foundry, a shared AI platform designed to scale intelligence across the enterprise and a core component of Microsoft's recently announced Network Operations Agent Framework. Connected Intelligence integrates:

- Agent frameworks and agent catalogs for modular AI deployment
- Hundreds of TBs of daily data from network switches, device logs, and location records
- Enterprise-grade orchestration and data governance
- AI/ML models aligned with customer-level time series events

This architecture enables reuse, speed, and alignment across people, organizations, and systems, turning data into actionable intelligence.

Model Context Protocol (MCP): AI-to-AI Collaboration

A standout innovation is the Model Context Protocol (MCP), which goes beyond traditional APIs. While APIs connect systems through data, MCP connects intelligence through context. It allows AI agents to dynamically discover and chain APIs without custom coding, enabling real-time collaboration across network operations, device management, and deployment workflows. By integrating MCP into the API fabric, Microsoft is laying the groundwork for agentic AI, where intelligent systems can autonomously interact, adapt, and scale across the telco ecosystem.

From Hype to Impact: Operationalizing AI in Telco with TM Forum's ODA and Open APIs

The telecom industry is moving from hype to impact by operationalizing AI through TM Forum's Open Digital Architecture (ODA) and Open APIs. This session explores how telcos can build AI-ready architectures, unlock data value for automation and AI agents, and scale responsibly with governance and ethics at the core. Microsoft's collaboration with TM Forum is enabling telcos to modernize OSS/BSS systems using the ODA Canvas, a modular, cloud-native execution environment orchestrated with AI and powered by Microsoft Azure. This architecture supports plug-and-play integration of differentiated services, reduces integration costs by over 30%, and boosts developer productivity by more than 40% with GitHub Copilot.

Learn how leading telcos like Telstra are scaling AI solutions such as "One Sentence Summary" and "Ask Telstra" across their contact centers and retail teams. These solutions, built on Azure AI Foundry, have delivered measurable impact: 90% of employees reported time savings and increased effectiveness, with a 20% reduction in follow-up contacts. Telstra's success is underpinned by a modernized data ecosystem and strong governance frameworks that ensure ethical and secure AI deployment.

From Chaos to Clarity with Observability

Despite advances in operational tooling, fragmented observability remains a persistent challenge. Vendors often capture telemetry in incompatible formats, forcing operations teams to rely on improvised log aggregators and custom parsers that drive up costs and hinder rapid incident resolution. Microsoft's latest contribution to the Open Digital Architecture (ODA) initiative directly tackles this issue with the ODA Observability Operator, now available as open source on GitHub.
By enforcing a standardized logging contract, integrating seamlessly with Azure Monitor, and surfacing health metrics through TM Forum nonfunctional APIs, the operator streamlines telemetry across systems. Early trials have shown promising results: carriers significantly reduced the time needed to detect billing anomalies, enabling teams to shift from reactive troubleshooting to proactive optimization.

Accelerating TM Forum Open API Development with GitHub Copilot

As the telecom industry embraces open standards and modular architectures, Microsoft is empowering developers to move faster and smarter with GitHub Copilot, an AI-powered coding assistant that's transforming how TM Forum (TMF) Open APIs are built and deployed.

Why GitHub Copilot for TM Forum Open APIs?

TMF Open APIs are a cornerstone of interoperability in telecom, offering over 100 standardized RESTful interfaces across domains like customer management, product catalog, and billing. But implementing these APIs can be time-consuming and repetitive. GitHub Copilot streamlines this process by:

- Autocompleting boilerplate code for TMF endpoints
- Suggesting API handlers and data models aligned with TMF specs
- Generating test plans and documentation
- Acting as an AI pair programmer that understands your code context

This means developers can focus on business logic while Copilot handles the heavy lifting.

Real-World Uses

Telco developers benefit from powerful features in GitHub Copilot that streamline the development of TMF Open API services. One such feature is Agent Mode, which automates complex, multi-step tasks such as implementing TMF API flows, running tests, and correcting errors, saving developers significant time and effort. Another key capability is Copilot Chat, which provides conversational support directly within the IDE, helping developers debug code, validate against TMF specifications, and follow best practices with ease. Together, these tools enhance productivity and reduce friction in building compliant, scalable telecom solutions. For example, when building a Customer Management microservice using the TMF629 API, Copilot can suggest endpoint handlers, validate field names against the spec, and even help write README documentation or unit tests.

📈 Proven Productivity Gains

CSPs like Proximus have reported significant productivity improvements using GitHub Copilot in their Network IT functions:

- 20-30% faster code writing
- 25-35% faster refactoring
- 80-90% improvement in documentation
- 40-50% gains in code compliance

Other telcos like Vodafone, NOS, Orange, TELUS, and Lumen Technologies are also leveraging Copilot to accelerate innovation and reduce development friction.

Best Practices for TMF API Projects

To get the most out of Copilot:

- Use it for repetitive tasks and pattern recognition
- Always validate generated code against TMF specs
- Keep relevant spec files open to improve suggestion accuracy
- Use Copilot Chat for guidance on security, error handling, and optimization

GitHub Copilot is more than a coding assistant; it's a catalyst for telco transformation. By combining AI with TMF's open standards, Microsoft is helping developers build faster, smarter, and more consistently across the telecom ecosystem. Learn more about how to configure and use GitHub Copilot in your own TMF Open API projects in our latest tech community blog.
Microsoft's Broader Vision for Telco Transformation

Microsoft's contributions reflect a comprehensive strategy to reshape the telecom landscape through scalable intelligence, open collaboration, and developer empowerment. At the core of Microsoft's vision is the idea that AI must be connected, contextual, and reusable. The Azure AI Foundry and Model Context Protocol (MCP) exemplify this approach by enabling telcos to:

- Harness massive volumes of time-series data from networks, devices, and customer interactions
- Deploy modular AI agents that can collaborate across systems
- Orchestrate workflows that adapt in real time to changing conditions

This architecture transforms fragmented data into actionable insights, allowing CSPs to move from reactive operations to proactive intelligence.

Conclusion: Microsoft's Strategic Alignment with TM Forum

Microsoft's participation at TM Forum Innovate Americas 2025 reflects a deep commitment to transforming the telecom industry through AI-first innovation, open collaboration, and developer empowerment. From T-Mobile's vision for Connected Intelligence, to Microsoft's roadmap for operationalizing AI and ODA, and the developer-centric acceleration enabled by GitHub Copilot, Microsoft is helping Communication Service Providers (CSPs) move faster, scale smarter, and deliver better customer experiences. By aligning with TM Forum's goals of standardization, interoperability, and autonomous operations, Microsoft is not just participating in the conversation; it's helping lead it.

📣 Call to Action

Join Microsoft and other industry leaders at TM Forum Innovate Americas 2025 to explore the future of telco transformation. Whether you're a strategist, technologist, or developer, this is your opportunity to connect, learn, and shape what's next.
Supercharge Your TM Forum Open API Development with GitHub Copilot

Developing applications that implement TM Forum (TMF) Open APIs can be greatly accelerated with the help of GitHub Copilot, an AI-based coding assistant. By combining Copilot's code-generation capabilities with TMF's standardized API specifications, developers can speed up coding while adhering to industry standards. In this blog post, we'll walk through how to set up a project with GitHub Copilot to write TMF Open API-based applications, including prerequisites, configuration steps, an example workflow for building an API, best practices, and additional tips.

Introduction: GitHub Copilot and TM Forum Open APIs

GitHub Copilot is an AI-powered coding assistant developed by GitHub and OpenAI. It integrates with popular editors (VS Code, Visual Studio, JetBrains IDEs, etc.) and uses advanced language models to autocomplete code and even generate entire functions based on context and natural language prompts. For example, Copilot can turn a comment like "// fetch customer by ID" into a code snippet that implements that logic. It was first introduced in 2021 and is available via subscription for developers and enterprises. Copilot interprets the code and comments in your current file and suggests code that fits, essentially acting as an AI pair programmer.

TMF Open APIs are a set of standardized REST APIs for telecom and digital service providers, designed to enable seamless connectivity and interoperability across complex service ecosystems. In practice, the TMF Open API program has defined over 100 RESTful interface specifications covering various domains (such as customer management, product catalog, and billing). These APIs share a common design guideline (TMF630) and data model, ensuring that services can be managed end-to-end in a consistent way.

Why use GitHub Copilot for TMF Open API development?

Integrating Copilot with TMF Open APIs streamlines telecom app development. Copilot helps generate boilerplate code, suggests API handling snippets, and provides usage examples, all in line with TMF specs. For developers building services like Customer Management or Product Catalog, Copilot autocompletes endpoints, models, and business logic based on learned standards, maintaining TMF consistency. Developers review and edit outputs, but Copilot eases repetitive tasks. The following sections will guide you through setup and practical use with TMF Open APIs.

"With GitHub Copilot, TM Forum members can accelerate API development — reducing boilerplate coding, improving consistency with our Open API standards, and freeing developers to focus on innovation rather than routine tasks. We'd love to hear from members already experimenting with Copilot — your experiences, lessons, and best practices will help shape how we embed AI-assisted coding into the wider TM Forum Open API community." - Ian Holloway, Chief Architect, TM Forum

Prerequisites for Setting Up the Project

Before configuring GitHub Copilot in your project, make sure you have the following prerequisites in place:

- GitHub Copilot Access: You will need an active GitHub Copilot subscription or trial linked to your GitHub account. Copilot is a paid service (with a free trial for new users), so ensure your account is signed up for Copilot access. If you haven't done this, go to the GitHub Copilot page (https://github.com/features/copilot) and activate your subscription or trial.
- Supported IDE or Code Editor: Copilot works with several development environments;
for the best experience, use a supported editor such as Visual Studio Code, Visual Studio 2022, Neovim, or a JetBrains IDE (IntelliJ, PyCharm, etc.).
- GitHub Account: You need a GitHub account to use Copilot, since you must sign in to authorize the Copilot plugin. Ensure you have your GitHub credentials handy.
- Programming Language Environment: Set up the programming language/framework you plan to use for your TMF Open API application. Copilot supports a wide range of languages, including JavaScript/TypeScript, Python, Java, and C#, so choose one that suits your project.
- TMF Open API Specification: Obtain the TMF Open API specifications or documentation for the APIs you plan to implement. TM Forum provides downloadable Open API (Swagger) specs for each API (for example, the Customer Management API and the Product Catalog API).
- Basic Domain Knowledge: While not strictly required, it helps to have a basic understanding of the TMF Open API domain you're working with. For example, know what the "Customer Management API" or "Product Catalog API" is supposed to do at a high level (reading the TMF user guide can help). This will make it easier to prompt Copilot effectively and to validate its suggestions. For more training, please refer to the TM Forum Education Programs.

With these prerequisites met, you're ready to configure GitHub Copilot in your development environment and integrate it into your project workflow.

Step-by-Step Guide: Configuring GitHub Copilot in Your IDE

Setting up GitHub Copilot for your project is a one-time process. Here is a step-by-step guide using Visual Studio Code as the example IDE:

Step 1: Install the GitHub Copilot extension. Open Visual Studio Code and navigate to the Extensions view (click the Extensions icon on the left toolbar, or press Ctrl+Shift+X on Windows / Cmd+Shift+X on Mac). In the Extensions marketplace search bar, type "GitHub Copilot". You should see the GitHub Copilot extension by GitHub. Click Install to add it to VS Code. This downloads and enables the Copilot plugin in your editor.

Step 2: Authenticate with GitHub. After installation, Copilot will prompt you to sign in to GitHub to authorize the extension. Click "Sign in with GitHub", log in with your GitHub credentials, and grant permission to the Copilot extension.

Step 3: Enable Copilot in your workspace/project. Now that Copilot is installed and linked to your account, ensure it's enabled for your current project. In VS Code, open the command palette (Ctrl+Shift+P / Cmd+Shift+P) and type "Copilot". Look for a command like "GitHub Copilot: Enable/Disable" and make sure it's enabled (it should be by default after installation).

At this point, GitHub Copilot is fully configured in your development environment. The next step is to actually use it in developing a TMF Open API application. We will now walk through writing code with Copilot's assistance, focusing on a TMF Open API use case.

Writing TMF Open API Apps Using GitHub Copilot

Now for the fun part: using GitHub Copilot to help write an application that implements a TMF Open API. In this section, we'll provide a step-by-step example of how you might develop a simple service using a TMF Open API (say, the Customer Management API) with Copilot's assistance. The principles can be applied to any TMF API, or indeed any standard API.

Scenario: Let's assume we want to build a minimal Customer Management microservice that conforms to the TMF629 Customer Management API (version 5.0), which manages customer records.
We will implement a simple endpoint to retrieve customer information by ID, as defined in the TMF API spec. We'll use Node.js with the Express framework for this example, but you could equally choose Python (FastAPI/Flask) or Java (Spring Boot). The emphasis is on how Copilot assists with the coding.

Step 1: Refer to the TMF Open API specifications. Before coding, ensure you have the TMF629 API specification open or accessible for reference. For example, the spec defines a GET operation at /tmf-api/customerManagement/v5/customer/{id} for retrieving a customer, along with a Customer data model. If you have the YAML/JSON file, open it in a VS Code tab; this gives Copilot substantial context (resource paths, field names, etc.) to inform its suggestions. The spec files can be downloaded from the links below (a TM Forum registration and login is required):

- Customer Management API REST API v5.0
- Open API Directory (link for all API specifications)

Step 2: Set up the project scaffolding. Initialize a new Node.js project (run npm init -y, and install Express with npm install express). Then create a file index.js (or app.js). In that file, start with the basic Express server setup:

const express = require('express');
const app = express();
app.use(express.json());

// Start server on port 3000
app.listen(3000, () => {
  console.log('TMF Customer API service is running on port 3000');
});

As you type the above, Copilot may autocomplete parts of it. For instance, after writing app.listen(3000, () => {, you might see it suggest the console.log line. It's standard boilerplate, so nothing magical yet, but it confirms Copilot is active.

Step 3: Implement an API endpoint using Copilot. Consider the TMF629 Customer Management API (Customer Management API TMF629-v5.0). According to the TMF specification, the GET Customer by ID endpoint looks like GET https://host:port/tmf-api/customerManagement/v5/customer/{customerId} and returns customer details. Let's write a handler for this. Start typing the Express route definition:

// GET customer by ID
app.get('/tmf-api/customerManagement/v5/customer/:id', (req, res) => {
  //
});

The moment you write the path string and arrow function, Copilot is likely to recognize this as a request handler and may suggest code inside. It has context from the route path (which is quite specific and unlikely to appear outside the TMF spec) and the comment. Copilot might suggest fetching the customer by ID from a database, or returning a placeholder. Since we haven't defined a database in this simple scenario, let's see what it does. Often, for a new route, Copilot will guess you want to send a response. It could, for example, suggest:

// ... inside the handler:
const customerId = req.params.id;
// TODO: fetch customer from database (this is a Copilot-suggested comment)
res.status(200).json({ id: customerId, name: "Sample Customer" });

Of course, this is just an example of what Copilot might do; in practice it may complete the code differently. The key is that Copilot can help stub out the logic. If it doesn't automatically fill it in, you can nudge it by writing a comment or function description inside the handler, such as:

// Find customer by ID and return as JSON

After writing that comment, pause and see if Copilot suggests a code block that finds a customer.
If we had more context (like a Customer array or database connector imported), it might try to use it. For now, you can accept a basic implementation (like returning a dummy object as above). Accepting the suggestion, our route becomes:

// GET customer by ID
app.get('/tmf-api/customerManagement/v5/customer/:id', (req, res) => {
  const customerId = req.params.id;
  // For demo, return a dummy customer object
  res.json({ id: customerId, name: "John Doe", status: "ACTIVE" });
});

Here we assumed Copilot suggested returning an object with some fields. If the TMF spec defines fields for a Customer (e.g., name, status), and especially if the spec file is open in another tab, Copilot might use actual field names from the spec in its suggestion because it "saw" them in the YAML. This is a huge win: it helps ensure your code uses correct field names and structure as per the standard. For instance, if the spec says a Customer resource has id, name, and status, Copilot might include those. Always verify against the spec, but it often aligns.

Step 4: Extend to other operations. You continue this way for the remaining operations (PUT/PATCH to update a customer, etc.), each time leveraging Copilot to write the initial code, which you then adjust. Copilot can also help with non-HTTP logic: for example, if you need a function to validate an email address, just write the function signature and a comment, and it will likely fill it in (such patterns are common in its training data).

Step 5: Use Copilot for documentation and examples. Copilot can also assist in writing documentation-like content or tests for your API. For instance, you could create a README.md for your project.

Step 6: Iterate and refine with Copilot Chat (if available). GitHub Copilot includes a chat mode (Copilot Chat) in VS Code, which acts like an assistant you can converse with in natural language. If you have Copilot Chat enabled, you can ask it things like "How do I implement pagination in this API according to TMF guidelines?" or "Suggest improvements for error handling in my code". The chat can analyze your code base and provide guidance, or even write code snippets for you to apply. GitHub Copilot also lets you choose your own model (e.g., GPT-4.1, GPT-4o, GPT-5, or Claude 3.5 Sonnet). This gives telco developers building solutions on TMF Open APIs additional flexibility: they aren't limited to one generic AI assistant and can select the model best suited to each coding task, whether rapid code suggestions or complex problem-solving.

Step 7: Test and validate against the TMF spec. Once you have your endpoints coded with Copilot's help, it's crucial to test them against the TMF specification to ensure correctness. Use tools like Postman or curl to call your API endpoints. For instance, GET http://localhost:3000/tmf-api/customerManagement/v5/customer/123 should return either a dummy customer (if using in-memory data as above) or a 404 if not found, as per spec expectations. Compare response structures to the TMF API definition. If something is missing or named incorrectly (say Copilot used customerName but the spec expects name), adjust your code accordingly. Copilot is not guaranteed to produce implementations that are 100% correct or up to date with the spec; it provides a helpful draft, but you are responsible for aligning it exactly with TMF's definitions. During testing, you might encounter bugs or mismatches.
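A quick client-side smoke test can automate the check above. Here is a hedged sketch in Python (any HTTP client works equally well); the asserted fields mirror the dummy handler from Step 3, not the full TMF629 resource model, so extend it against the real spec:

import requests

# Assumes the Express server from Step 2 is running locally on port 3000.
resp = requests.get("http://localhost:3000/tmf-api/customerManagement/v5/customer/123")
assert resp.status_code == 200

body = resp.json()
assert body["id"] == "123"                   # path parameter echoed back
assert "name" in body and "status" in body   # fields from the dummy handler
print("Basic shape check passed:", body)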
Bugs or mismatches like these are another point where Copilot can assist: if you get an error or exception, you can paste it into Copilot Chat, or add it as a comment and prompt Copilot to help fix it. For example, if your server crashes on a null reference, you can write a comment like // Copilot: fix null reference in customer lookup near the code, and it might suggest a null check.

Best Practices and Tips for Using Copilot with TMF Open APIs

To use GitHub Copilot efficiently for TMF Open API development, follow these key practices:

- Apply Copilot to repetitive tasks: When implementing endpoints with similar logic (e.g., CRUD operations), use an initial example as a template. Copilot will recognise patterns and help adapt code for new entities.
- Prompt clearly and iterate: Refine prompts to get better suggestions; add specifics in comments for improved results. If the output isn't right, adjust your instructions with more detail.
- Verify against TMF standards: Copilot's knowledge may not reflect the latest TMF specs. Double-check generated code against official documentation and provide context from newer specs when necessary.
- Incorporate security and quality checks: Always validate Copilot's code for security and proper input handling. Use Copilot Chat for advice on improving validation, and ensure you meet industry standards (e.g., OAuth).
- Learn from suggestions: Use Copilot to expand your skills, especially if you're new to a language or framework, but confirm that its examples suit your use case.
- Don't over-rely on automation: Copilot is best for boilerplate and common patterns; customise business logic and architecture-specific code yourself.
- Keep relevant files open: Copilot works best with focused context. Close unrelated files to improve suggestion quality.
- Update Copilot regularly: Keep your extension up to date, and try different AI models for improved performance.

Following these principles will help make Copilot a productive partner in TMF Open API projects, offering speed while maintaining adherence to standards.

CSPs Leveraging GitHub Copilot

Multiple telco customers across the globe have adopted GitHub Copilot and achieved a significant boost in developer productivity. In particular, Proximus reports the following gains from adopting GitHub Copilot in its Network IT function:

- Write code: ↑20-30%
- Refactor code: ↑25-35%
- Code documentation: ↑80-90%
- Code review: ↑5-10%
- Code compliance: ↑40-50%
- Unit tests: ↑20-30%

More details here: Transforming Telecommunications with Generative AI: Proximus and TCS's GitHub Copilot Journey | LinkedIn

Other Telco Customer Stories

- NOS empowers developer collaboration and innovation on GitHub | Microsoft Customer Stories
- Orange: creating value for its lines of businesses in the age of generative AI with Azure OpenAI Service and GitHub Copilot | Microsoft Customer Stories
- With GitHub, Canadian company TELUS aims to bring 'focus, flow and joy' to developers - Source (https://github.com/customer-stories/telus)
- Lumen Technologies accelerates dev productivity, sees financial gains with GitHub Copilot, Azure DevOps, and Visual Studio | Microsoft Customer Stories
- Vodafone

What's Next? Agent Mode to Autonomously Complete Tasks

Telco developers can boost productivity with GitHub Copilot's Agent Mode, which acts as an autonomous coding partner. Agent Mode handles multi-step coding tasks, such as implementing TMF Open API flows, reducing manual effort and speeding up feature delivery.
It automates complex processes like file selection, testing, and error correction, allowing developers to concentrate on higher-level design while routine tasks run in the background.

Write and Execute Test Plans

GitHub Copilot Chat can quickly generate test plans. Acting as an AI pair-tester, Copilot produces unit tests from your existing code or specs. Telco developers can highlight a method, request test generation, and instantly receive comprehensive test suggestions for different scenarios.

Conclusion

Setting up GitHub Copilot for TMF Open API projects streamlines productivity. This blog covered Copilot's setup, its application to TMF-compliant services, and best practices like offering context and reviewing AI-generated code. Copilot speeds up development by handling boilerplate and suggesting standard patterns so you can focus on business logic. It fits seamlessly into your workflow, producing helpful suggestions when guided with clear specs and prompts. Developers report saving time and reducing complexity. Still, Copilot shouldn't replace understanding TMF APIs or good engineering habits; always verify code accuracy. Combining your expertise with Copilot's capabilities leads to efficient, high-quality implementations. Explore features like Copilot CLI, and keep up to date via TM Forum resources, including the Open API Table and community forums. With the right setup and practices, you're ready to develop robust TMF Open API apps, leveraging AI for faster results.
One MCP Server, Two Transports: STDIO and HTTP

Let's think about how MCP servers are used. Most MCP servers run on a local machine, either directly or in a container. But in other integration scenarios, such as Copilot Studio, enterprise-wide MCP servers, or environments with stricter security requirements, the MCP server should run remotely over HTTP. As long as the core logic lives in a shared layer, wrapping it in a console (STDIO) or web (HTTP) host is straightforward. However, maintaining two hosts can duplicate code. What if a single MCP server supported both STDIO and HTTP, controlled by a simple switch? That would remove a significant amount of management overhead. This post shows how to build a single MCP server that supports both transports, selected at runtime with a --http switch, using the .NET builder pattern.

.NET Builder Pattern

A .NET console app starts the builder pattern using Host.CreateApplicationBuilder(args):

var builder = Host.CreateApplicationBuilder(args);

The builder instance is of type HostApplicationBuilder, which implements the IHostApplicationBuilder interface. On the other hand, an ASP.NET web app starts the builder pattern using WebApplication.CreateBuilder(args):

var builder = WebApplication.CreateBuilder(args);

This builder instance is of type WebApplicationBuilder, which also implements the IHostApplicationBuilder interface. So both builder instances have IHostApplicationBuilder in common, and this is the key to this post. If we decide the hosting mode before creating the builder instance, the server can run as either STDIO or HTTP.

The --http Switch as an Argument

Both Host.CreateApplicationBuilder(args) and WebApplication.CreateBuilder(args) take the list of arguments passed from the command line. Therefore, before initializing the builder instance, we can identify the server type. Let's use a --http switch as the selector, passed when running the server:

dotnet run --project MyMcpServer -- --http

Then, before creating the builder instance, check whether the switch is present. The helper below looks at the environment variables first, then at the arguments passed:

public static bool UseStreamableHttp(IDictionary env, string[] args)
{
    var useHttp = env.Contains("UseHttp") &&
                  bool.TryParse(env["UseHttp"]?.ToString()?.ToLowerInvariant(), out var result) &&
                  result;
    if (args.Length == 0)
    {
        return useHttp;
    }

    useHttp = args.Contains("--http", StringComparer.InvariantCultureIgnoreCase);
    return useHttp;
}

Here's the usage:

var useStreamableHttp = UseStreamableHttp(Environment.GetEnvironmentVariables(), args);

We've now identified whether to use HTTP or not, so the builder instance is built this way:

IHostApplicationBuilder builder = useStreamableHttp
    ? WebApplication.CreateBuilder(args)
    : Host.CreateApplicationBuilder(args);

With this builder instance, we can add more dependencies specific to a web app or console app, depending on the scenario.

The Transport Type

Let's add the MCP server to the builder instance:

var mcpServerBuilder = builder.Services.AddMcpServer()
                                       .WithPromptsFromAssembly()
                                       .WithResourcesFromAssembly()
                                       .WithToolsFromAssembly();

We haven't told mcpServerBuilder which transport to use yet. Use useStreamableHttp to select the transport:

if (useStreamableHttp)
{
    mcpServerBuilder.WithHttpTransport(o => o.Stateless = true);
}
else
{
    mcpServerBuilder.WithStdioServerTransport();
}

Type Casting to Run the Server

When configuring an ASP.NET web app, middleware is added. The HTTP host also needs middleware, so the builder must be cast.
After the builder instance is built, the webApp instance adds middleware, including the endpoint mapping:

IHost app;
if (useStreamableHttp)
{
    var webApp = (builder as WebApplicationBuilder)!.Build();
    webApp.UseHttpsRedirection();
    webApp.MapMcp("/mcp");
    app = webApp;
}
else
{
    var consoleApp = (builder as HostApplicationBuilder)!.Build();
    app = consoleApp;
}

Note that WebApplication implements IHost, so you can assign it to an IHost variable; the console host built from HostApplicationBuilder is already an IHost. Use this app instance to run the MCP server:

await app.RunAsync();

That's it! Now you can run the MCP server with either the STDIO transport or the HTTP transport by providing a single switch, --http.

Sample Apps

Sample apps are available for you to check out. Visit the MCP Samples in .NET repository, and you'll find MCP server apps. All server apps in the repo support both STDIO and HTTP via the switch.

More Resources

If you'd like to learn more about MCP in .NET, here are some additional resources worth exploring:

- Let's Learn MCP
- MCP Workshop in .NET
- MCP Samples in .NET
- MCP Samples
- MCP for Beginners
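To verify both hosting modes from the client side, here is a hedged sketch using the MCP Python SDK. The module paths (mcp.client.stdio, mcp.client.streamable_http) follow the SDK at the time of writing but may shift between releases, and the HTTP URL assumes the server above is listening at https://localhost:5001/mcp.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.client.streamable_http import streamablehttp_client

async def via_stdio():
    # Launch the server as a child process over STDIO (no --http switch).
    params = StdioServerParameters(command="dotnet", args=["run", "--project", "MyMcpServer"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("stdio tools:", [t.name for t in tools.tools])

async def via_http():
    # Assumes the server was started separately with:
    #   dotnet run --project MyMcpServer -- --http
    async with streamablehttp_client("https://localhost:5001/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("http tools:", [t.name for t in tools.tools])

asyncio.run(via_stdio())

Either path should report the same tool list, which is exactly the point of the single-server, two-transport design.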
MCP Bootcamp: APAC, LATAM and Brazil

The Model Context Protocol (MCP) is transforming how AI systems interact with real-world applications. From intelligent assistants to real-time streaming, MCP is already being adopted by leading companies, and now is your chance to get ahead. Join us for a four-part technical series designed to give you practical, production-ready skills in MCP development, integration, and deployment. Whether you're a developer, AI engineer, or cloud architect, this series will equip you with the tools to build and scale MCP-based solutions.

📅 English edition - 6PM IST (India Standard Time)
✅ Register at MCP Bootcamp APAC

- Creating Your First MCP Server | August 28, 6:00 PM | Learn the fundamental concepts of the protocol and test your implementation using official tools.
- MCP Integration with LLMs | September 2, 6:00 PM | Set up an intelligent MCP client that uses an LLM to interpret natural commands, and integrate everything with VS Code and GitHub Copilot.
- Real-Time with SSE and HTTP Streaming | September 4, 6:00 PM | Add real-time communication to your MCP server using Server-Sent Events and streamable HTTP.
- Deploy MCP on Azure | September 9, 6:00 PM | Add real-time communication with Server-Sent Events to your MCP server and deploy it professionally on Azure Container Apps.

📅 Spanish edition - 9AM CST (Central Standard Time, Mexico City)
✅ Check the time in your location: 11am ET, 8am PT, 9am CST and 5pm CET - Register at MCP Bootcamp LATAM

- Creando tu Primer Servidor MCP | August 18, 9:00 AM | Build a working MCP server in Python from scratch. Learn the fundamental concepts of the protocol and test your implementation using official tools.
- Integración de MCP con LLMs | August 20, 9:00 AM | Set up an intelligent MCP client that uses an LLM to interpret natural-language commands, and integrate it with VS Code and GitHub Copilot.
- MCP en Tiempo Real y Deploy en Azure | August 25, 9:00 AM | Add real-time communication with Server-Sent Events to your MCP server and deploy it professionally on Azure Container Apps.
- Comunicación en tiempo real con SSE y transmisión HTTP | September 1, 9:00 AM | Add real-time communication with Server-Sent Events to your MCP server and deploy it professionally on Azure Container Apps.

📅 Portuguese edition - 12PM BRT (Brasília Time)
✅ Register at MCP Bootcamp | Brasil

- Criando seu Primeiro MCP Server | August 19, 12:00 PM | Build a working MCP server in Python from scratch. Learn the fundamental concepts of the protocol and test your implementation using official tools.
- Integração de MCP com LLMs | August 21, 12:00 PM | Set up an intelligent MCP client that uses an LLM to interpret natural commands, and integrate everything with VS Code and GitHub Copilot.
- Deploy no Azure | August 26, 12:00 PM | Add real-time communication with Server-Sent Events to your MCP server and deploy it professionally on Azure Container Apps.
- Comunicação em Tempo Real com SSE e HTTP Streaming | August 28, 12:00 PM | Learn to add real-time communication to your MCP server using Server-Sent Events (SSE) and HTTP streaming.