From Cloud to Chip: Building Smarter AI at the Edge with Windows AI PCs
As AI engineers, we have spent years optimizing models for the cloud: scaling inference, wrangling latency, and chasing compute across clusters. But the frontier is shifting. With the rise of Windows AI PCs and powerful local accelerators, the edge is no longer a constraint; it is a canvas. Whether you're deploying vision models to industrial cameras, optimizing speech interfaces for offline assistants, or building privacy-preserving apps for healthcare, Edge AI is where real-world intelligence meets real-time performance.

Why Edge AI, Why Now?

Edge AI isn't just about running models locally; it's about rethinking the entire lifecycle:

- Latency: Decisions in milliseconds, not round-trips to the cloud.
- Privacy: Sensitive data stays on-device, enabling HIPAA/GDPR compliance.
- Resilience: Offline-first apps that don't break when the network does.
- Cost: Reduced cloud compute and bandwidth overhead.

With Windows AI PCs powered by Intel and Qualcomm NPUs, and tools like ONNX Runtime, DirectML, and Olive, developers can now optimize and deploy models with unprecedented efficiency.

What You'll Learn in Edge AI for Beginners

The Edge AI for Beginners curriculum is a hands-on, open-source guide designed for engineers ready to move from theory to deployment.

Multi-Language Support

The content is available in over 48 languages, so you can read and study in your native language.

What You'll Master

The course takes you from fundamental concepts to production-ready implementations, covering:

- Small Language Models (SLMs) optimized for edge deployment
- Hardware-aware optimization across diverse platforms
- Real-time inference with privacy-preserving capabilities
- Production deployment strategies for enterprise applications

Why Edge AI Matters

Edge AI represents a paradigm shift that addresses critical modern challenges:

- Privacy & Security: Process sensitive data locally without cloud exposure
- Real-time Performance: Eliminate network latency for time-critical applications
- Cost Efficiency: Reduce bandwidth and cloud computing expenses
- Resilient Operations: Maintain functionality during network outages
- Regulatory Compliance: Meet data sovereignty requirements

What Is Edge AI?

Edge AI refers to running AI algorithms and language models locally on hardware, close to where the data is generated, without relying on cloud resources for inference. It reduces latency, enhances privacy, and enables real-time decision-making.

Core principles:

- On-device inference: AI models run on edge devices (phones, routers, microcontrollers, industrial PCs)
- Offline capability: Functions without persistent internet connectivity
- Low latency: Immediate responses suited for real-time systems
- Data sovereignty: Keeps sensitive data local, improving security and compliance

Small Language Models (SLMs)

SLMs such as Phi-4, Mistral-7B, Qwen, and Gemma are compact models, trained or distilled to deliver much of the capability of larger LLMs while offering:

- Reduced memory footprint: Efficient use of limited edge device memory
- Lower compute demand: Optimized for CPU and edge GPU performance
- Faster startup times: Quick initialization for responsive applications

They unlock powerful NLP capabilities while meeting the constraints of:

- Embedded systems: IoT devices and industrial controllers
- Mobile devices: Smartphones and tablets with offline capabilities
- IoT devices: Sensors and smart devices with limited resources
- Edge servers: Local processing units with limited GPU resources
- Personal computers: Desktop and laptop deployment scenarios

Course Modules & Navigation
Course duration: 10 hours of content.

| Module | Topic | Focus Area | Key Content | Level | Duration |
|---|---|---|---|---|---|
| 📖 00 | Introduction to EdgeAI | Foundation & context | EdgeAI Overview • Industry Applications • SLM Introduction • Learning Objectives | Beginner | 1-2 hrs |
| 📚 01 | EdgeAI Fundamentals | Cloud vs. edge AI comparison | EdgeAI Fundamentals • Real-World Case Studies • Implementation Guide • Edge Deployment | Beginner | 3-4 hrs |
| 🧠 02 | SLM Model Foundations | Model families & architecture | Phi Family • Qwen Family • Gemma Family • BitNET • μModel • Phi-Silica | Beginner | 4-5 hrs |
| 🚀 03 | SLM Deployment Practice | Local & cloud deployment | Advanced Learning • Local Environment • Cloud Deployment | Intermediate | 4-5 hrs |
| ⚙️ 04 | Model Optimization Toolkit | Cross-platform optimization | Introduction • Llama.cpp • Microsoft Olive • OpenVINO • Apple MLX • Workflow Synthesis | Intermediate | 5-6 hrs |
| 🔧 05 | SLMOps Production | Production operations | SLMOps Introduction • Model Distillation • Fine-tuning • Production Deployment | Advanced | 5-6 hrs |
| 🤖 06 | AI Agents & Function Calling | Agent frameworks & MCP | Agent Introduction • Function Calling • Model Context Protocol | Advanced | 4-5 hrs |
| 💻 07 | Platform Implementation | Cross-platform samples | AI Toolkit • Foundry Local • Windows Development | Advanced | 3-4 hrs |
| 🏭 08 | Foundry Local Toolkit | Production-ready samples | Sample applications | Expert | 8-10 hrs |

Each module includes Jupyter notebooks, code samples, and deployment walkthroughs, perfect for engineers who learn by doing.

Developer Highlights

- 🔧 Olive: Microsoft's optimization toolchain for quantization, pruning, and acceleration.
- 🧩 ONNX Runtime: Cross-platform inference engine with support for CPU, GPU, and NPU.
- 🎮 DirectML: GPU-accelerated ML API for Windows, ideal for gaming and real-time apps.
- 🖥️ Windows AI PCs: Devices with built-in NPUs for low-power, high-performance inference.

Local AI: Beyond the Edge

Local AI isn't just about inference; it's about autonomy. Imagine agents that:

- Learn from local context
- Adapt to user behavior
- Respect privacy by design

With tools like the Agent Framework, Azure AI Foundry, Windows Copilot Studio, and Foundry Local, developers can orchestrate local agents that blend LLMs, sensors, and user preferences, all without cloud dependency.

Try It Yourself

Ready to get started? Clone the Edge AI for Beginners GitHub repo, run the notebooks, and deploy your first model to a Windows AI PC or an IoT device. Whether you're building smart kiosks, offline assistants, or industrial monitors, this curriculum gives you the scaffolding to go from prototype to production.
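To get a feel for what "local inference" looks like in code before diving into the modules, here is a minimal, illustrative C# sketch that loads an ONNX model with ONNX Runtime (one of the tools highlighted above) and runs a single inference pass. It assumes the Microsoft.ML.OnnxRuntime NuGet package and a hypothetical model.onnx whose input is a float tensor named "input" with shape [1, 3, 224, 224]; the actual tensor names and shapes depend on the model you export.

```csharp
// Minimal, illustrative sketch of on-device inference with ONNX Runtime in C#.
// Assumes the Microsoft.ML.OnnxRuntime NuGet package and a hypothetical
// "model.onnx" whose input tensor is a float[1, 3, 224, 224] named "input".
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

var options = new SessionOptions();
// With the Microsoft.ML.OnnxRuntime.DirectML package you could also register the
// DirectML execution provider here to target the GPU/NPU; CPU is the safe default.

using var session = new InferenceSession("model.onnx", options);

// Dummy input; in a real app this would be your preprocessed image, audio, or text.
var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
var inputs = new[] { NamedOnnxValue.CreateFromTensor("input", input) };

using var results = session.Run(inputs);
var output = results.First().AsTensor<float>();
Console.WriteLine($"Model produced {output.Length} output values.");
```

Everything here happens on the device: no network call, no data leaving the machine, which is exactly the latency and privacy story the curriculum builds on.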
Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour

Unlock the power of Microsoft AI with redeliverable technical presentations, hands-on workshops, and open-source curriculum from the Microsoft AI Tour! Whether you're a Microsoft MVP, Developer, or IT Professional, these expertly crafted resources empower you to teach, train, and lead AI adoption in your community. Explore top breakout sessions covering GitHub Copilot, Azure AI, Generative AI, and security best practices, designed to simplify AI integration and accelerate digital transformation. Dive into interactive workshops that provide real-world applications of AI technologies. Take it a step further with Microsoft's open-source AI curriculum, offering beginner-friendly courses on AI, Machine Learning, Data Science, Cybersecurity, and GitHub Copilot, perfect for upskilling teams and fostering innovation.

Don't just learn: lead. Access these resources, host impactful training sessions, and drive AI adoption in your organization. Start sharing today! Explore now: Microsoft AI Tour Resources.
How to Master GitHub Copilot: Build, Prompt, Deploy Smarter

Mastering GitHub Copilot: Build, Prompt, Deploy Smarter is a free, hands-on workshop designed to help developers go beyond autocomplete and unlock the true power of AI-assisted coding. Instead of toy examples, this course walks you through real-world software engineering challenges: messy codebases, multi-language projects, cloud deployments, and legacy system upgrades. You'll learn practical skills like prompt engineering, advanced Copilot features, and AI pair programming techniques that make you faster, sharper, and more creative.

Whether you're a junior developer or a seasoned architect, mastering GitHub Copilot will help you:

- Reduce cognitive load and focus on system design
- Accelerate onboarding for new engineers
- Write cleaner, more consistent code
- Automate repetitive tasks to free up time for innovation

AI coding tools like GitHub Copilot are no longer optional; they're essential. This workshop gives you the skills to collaborate with Copilot effectively and stay competitive in the age of AI-powered development.
One MCP Server, Two Transports: STDIO and HTTP

Let's consider a common situation when using MCP servers. Most MCP servers run on a local machine, either directly or in a container. But in other integration scenarios, such as Copilot Studio, enterprise-wide MCP servers, or environments with stricter security requirements, the MCP server should run remotely over HTTP. As long as the core logic lives in a shared layer, wrapping it in a console (STDIO) or web (HTTP) host is straightforward. However, maintaining two hosts can duplicate code. What if a single MCP server supported both STDIO and HTTP, controlled by a simple switch? That would remove a significant amount of maintenance overhead. This post shows how to build a single MCP server that supports both transports, selected at runtime with a --http switch, using the .NET builder pattern.

.NET Builder Pattern

A .NET console app starts the builder pattern with Host.CreateApplicationBuilder(args):

```csharp
var builder = Host.CreateApplicationBuilder(args);
```

This builder instance is of type HostApplicationBuilder, which implements the IHostApplicationBuilder interface. An ASP.NET Core web app, on the other hand, starts the builder pattern with WebApplication.CreateBuilder(args):

```csharp
var builder = WebApplication.CreateBuilder(args);
```

This builder instance is of type WebApplicationBuilder, which also implements the IHostApplicationBuilder interface. Both builder types share IHostApplicationBuilder, and that is the key to this post: if we decide the hosting mode before creating the builder instance, the server can run as either STDIO or HTTP.

The --http Switch as an Argument

Both Host.CreateApplicationBuilder(args) and WebApplication.CreateBuilder(args) take the list of arguments passed from the command line, so we can identify the server type before initializing the builder instance. Let's use a --http switch as the selector and pass it when running the server:

```bash
dotnet run --project MyMcpServer -- --http
```

Before creating the builder instance, check whether the switch is present. The helper below looks at the environment variables first, then at the arguments passed:

```csharp
public static bool UseStreamableHttp(IDictionary env, string[] args)
{
    var useHttp = env.Contains("UseHttp") &&
                  bool.TryParse(env["UseHttp"]?.ToString()?.ToLowerInvariant(), out var result) &&
                  result;
    if (args.Length == 0)
    {
        return useHttp;
    }

    useHttp = args.Contains("--http", StringComparer.InvariantCultureIgnoreCase);
    return useHttp;
}
```

Here's the usage:

```csharp
var useStreamableHttp = UseStreamableHttp(Environment.GetEnvironmentVariables(), args);
```

Now that we know whether to use HTTP, the builder instance can be created like this:

```csharp
IHostApplicationBuilder builder = useStreamableHttp
    ? WebApplication.CreateBuilder(args)
    : Host.CreateApplicationBuilder(args);
```

With this builder instance, we can register additional dependencies specific to the web app or the console app, depending on the scenario.

The Transport Type

Let's add the MCP server to the builder instance:

```csharp
var mcpServerBuilder = builder.Services.AddMcpServer()
                                       .WithPromptsFromAssembly()
                                       .WithResourcesFromAssembly()
                                       .WithToolsFromAssembly();
```
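As a side note, WithToolsFromAssembly() discovers tool classes in the current assembly. Purely for illustration, and assuming the attribute names used by the ModelContextProtocol C# SDK ([McpServerToolType], [McpServerTool], and [Description]), a minimal tool might look like this sketch:

```csharp
using System.ComponentModel;
using ModelContextProtocol.Server;

// Illustrative only: a minimal tool class picked up by WithToolsFromAssembly().
// Attribute and namespace names assume the ModelContextProtocol C# SDK.
[McpServerToolType]
public static class EchoTool
{
    [McpServerTool, Description("Echoes the message back to the client.")]
    public static string Echo([Description("The message to echo.")] string message)
        => $"Echo: {message}";
}
```

Because the tool is discovered from the assembly, it works unchanged regardless of which transport the host ends up using.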
We haven't told mcpServerBuilder which transport to use yet, so use useStreamableHttp to select it:

```csharp
if (useStreamableHttp)
{
    mcpServerBuilder.WithHttpTransport(o => o.Stateless = true);
}
else
{
    mcpServerBuilder.WithStdioServerTransport();
}
```

Type Casting to Run the Server

When configuring an ASP.NET Core web app, middleware components are added to the request pipeline. The HTTP host needs this middleware too, so the builder must be cast back to its concrete type. After the builder is built, the webApp instance adds middleware, including the MCP endpoint mapping:

```csharp
IHost app;
if (useStreamableHttp)
{
    var webApp = (builder as WebApplicationBuilder)!.Build();
    webApp.UseHttpsRedirection();
    webApp.MapMcp("/mcp");
    app = webApp;
}
else
{
    var consoleApp = (builder as HostApplicationBuilder)!.Build();
    app = consoleApp;
}
```

Note that WebApplication implements IHost, so it can be assigned to an IHost variable. The console host built from HostApplicationBuilder is already an IHost. Use this app instance to run the MCP server:

```csharp
await app.RunAsync();
```

That's it! You can now run the MCP server with either the STDIO transport or the HTTP transport by providing a single switch, --http.

Sample apps

Sample apps are available for you to check out. Visit the MCP Samples in .NET repository, and you'll find MCP server apps there. All server apps in the repo support both STDIO and HTTP via the switch.

More resources

If you'd like to learn more about MCP in .NET, here are some additional resources worth exploring:

- Let's Learn MCP
- MCP Workshop in .NET
- MCP Samples in .NET
- MCP Samples
- MCP for Beginners
MCP Bootcamp: APAC, LATAM and Brazil

The Model Context Protocol (MCP) is transforming how AI systems interact with real-world applications. From intelligent assistants to real-time streaming, MCP is already being adopted by leading companies, and now is your chance to get ahead. Join us for a four-part technical series designed to give you practical, production-ready skills in MCP development, integration, and deployment. Whether you're a developer, AI engineer, or cloud architect, this series will equip you with the tools to build and scale MCP-based solutions.

📅 English edition, 6PM IST (India Standard Time)
✅ Register at MCP Bootcamp APAC

| Session | What you'll learn | Date & Time (IST) |
|---|---|---|
| Creating Your First MCP Server | Learn the fundamental concepts of the protocol and test your implementation using official tools. | August 28, 6:00 PM |
| MCP Integration with LLMs | Set up an intelligent MCP client that uses an LLM to interpret natural-language commands, and integrate everything with VS Code and GitHub Copilot. | September 2, 6:00 PM |
| Real-Time with SSE and HTTP Streaming | Add real-time communication to your MCP server using Server-Sent Events and streamable HTTP. | September 4, 6:00 PM |
| Deploy MCP on Azure | Add real-time communication with Server-Sent Events to your MCP server and deploy it professionally on Azure Container Apps. | September 9, 6:00 PM |

📅 Spanish edition, 9AM CST (Central Standard Time, Mexico City)
✅ Check the time in your location: 11am ET, 8am PT, 9am CST and 5pm CET. Register at MCP Bootcamp LATAM

| Session (delivered in Spanish) | What you'll learn | Date & Time (CST) |
|---|---|---|
| Creando tu Primer Servidor MCP | Build a functional MCP server in Python from scratch. Learn the fundamental concepts of the protocol and test your implementation using official tools. | August 18, 09:00 AM |
| Integración de MCP con LLMs | Set up an intelligent MCP client that uses an LLM to interpret natural-language commands, and integrate it with VS Code and GitHub Copilot. | August 20, 09:00 AM |
| MCP en Tiempo Real y Deploy en Azure | Add real-time communication with Server-Sent Events to your MCP server and deploy it professionally on Azure Container Apps. | August 25, 09:00 AM |
| Comunicación en tiempo real con SSE y transmisión HTTP | Add real-time communication with Server-Sent Events to your MCP server and deploy it professionally on Azure Container Apps. | September 1, 09:00 AM |

📅 Portuguese edition, 12PM BRT (Brasília Time)
✅ Register at MCP Bootcamp | Brasil

| Session (delivered in Portuguese) | What you'll learn | Date & Time (BRT) |
|---|---|---|
| Criando seu Primeiro MCP Server | Build a functional MCP server in Python from scratch. Learn the fundamental concepts of the protocol and test your implementation using official tools. | August 19, 12:00 PM |
| Integração de MCP com LLMs | Set up an intelligent MCP client that uses an LLM to interpret natural commands, and integrate everything with VS Code and GitHub Copilot. | August 21, 12:00 PM |
| Deploy no Azure | Add real-time communication with Server-Sent Events to your MCP server and deploy it professionally on Azure Container Apps. | August 26, 12:00 PM |
| Comunicação em Tempo Real com SSE e HTTP Streaming | Learn how to add real-time communication to your MCP server using Server-Sent Events (SSE) and HTTP streaming. | August 28, 12:00 PM |
Swagger Auto-Generation on MCP Server

Would you like to generate a swagger.json directly on an MCP server, on the fly? Remote MCP servers are common in many use cases. In particular, if you're using Azure API Management (APIM), Azure API Center (APIC), or Copilot Studio in Power Platform, integrating with remote MCP servers is inevitable.
GitHub Copilot Vibe Coding Workshop

Many of us do vibe coding these days, and GitHub Copilot (GHCP) plays the key role in it. You might simply enter prompts to GHCP like "Build a frontend app for a marketplace of camping gear", or even simpler ones like "Give me an app for a camping gear marketplace". This surely works: GHCP delivers an app for you. However, the deliverable might differ from what you initially expected. This happens because GHCP fills in uncertainties with its own imagination unless we provide clear and detailed prompts.

Let's recall the basics of product lifecycle management (PLM). You're a product owner or product manager about to launch a new product or develop a new business to sell value to your prospective customers. Where would you start? Yes, the first step is to perform market analysis: whether your idea is feasible, whether the market is profitable, and so on. Then, based on this analysis, you would write a product requirements document (PRD). The PRD describes what the product or service should look like, how it should work, and what it should deliver. In addition, the doc should also contain user stories and acceptance criteria. The user stories define what the app should expect, how it should behave, and what it should return. The acceptance criteria define how you test the app before accepting it as the final deliverable.

So, is a PRD important for vibe coding? YES, IT IS! As stated earlier, GHCP tries really hard to fill in missing parts with its imagination. Therefore, the more context you provide to GHCP, the more accurately it works, and that's how you get better results from vibe coding. But how do you actually practise this type of vibe coding?

Introducing GitHub Copilot Vibe Coding Workshop

I'm more than happy to introduce the GitHub Copilot Vibe Coding Workshop, a resource available for everyone to use. It's based on a typical app development scenario: building a web application that consists of a frontend UI and a backend API with database transactions. The workshop has six steps:

1. Analyse a PRD and generate an OpenAPI document from it.
2. Build a FastAPI app in Python based on the OpenAPI doc.
3. Build a React app in JavaScript based on the OpenAPI doc.
4. Migrate the FastAPI app to a Spring Boot app in Java.
5. Migrate the React app to a Blazor app in .NET.
6. Containerise both the Spring Boot app and the Blazor app, and orchestrate them.

The workshop is self-paced, so you can complete it in your spare time. It's also designed to run on GitHub Codespaces, since not everyone has the required development environment set up locally. Throughout the workshop, you'll learn:

- How to activate GHCP Agent Mode in VS Code,
- How to customise GHCP to get better results, and
- How to integrate MCP servers into vibe coding.

Do you prefer a language other than English? No problem! The workshop provides materials in seven languages: English, Chinese (Simplified), French, Japanese, Korean, Portuguese, and Spanish, so you can complete it in your preferred language.

It's your time for vibe coding! Now it's your turn to try the GitHub Copilot Vibe Coding Workshop on your own, or together with your friends and colleagues. If you have any questions about the workshop, please create an issue in the repository!

Want to know more about GitHub Copilot?
- GitHub Copilot in VS Code
- GitHub Copilot Agent Mode
- GitHub Copilot Customisation
- MCP Server Support in VS Code
Let's Learn - MCP Events: A Beginner's Guide to the Model Context Protocol

The Model Context Protocol (MCP) has rapidly become the industry standard for connecting AI agents to a wide range of external tools and services in a consistent way. In a matter of months, the protocol has become a hot topic at developer events and forums and has been implemented by companies large and small. With such rapid change comes the need for training and upskilling to meet the moment! That's why we're planning a series of virtual training events across different languages (both natural and programming) to introduce you to MCP.

⭐ Register: https://aka.ms/letslearnmcp

👩‍💻 Who Should Join?

Whether you're a beginner developer, a university student, or a seasoned tech professional, this workshop was designed with you in mind. At each event, experts will guide you through an exciting and beginner-friendly workshop where we'll introduce you to MCP, show you how to build your first server, and answer all your questions along the way. We have an exciting lineup of sessions planned, each focusing on different programming languages and featuring expert presenters. All the events use Visual Studio Code, aside from the July 17th Visual Studio event.

Sessions

⭐ You can register for the events here: https://aka.ms/letslearnmcp

| Date | Language | Technology | Register |
|---|---|---|---|
| July 9 | English | C# | https://developer.microsoft.com/reactor/events/26114/ |
| July 15 | English | Java | https://developer.microsoft.com/reactor/events/26115/ |
| July 16 | English | Python | https://developer.microsoft.com/reactor/events/26116/ |
| July 17 | English | C# + Visual Studio | https://developer.microsoft.com/reactor/events/26117/ |
| July 21 | English | TypeScript | https://developer.microsoft.com/reactor/events/26118/ |

We're also running the event in Spanish, Portuguese, Italian, Korean, Japanese, Chinese, and more. See the event page for more details!

| Date | Language | Technology | Register |
|---|---|---|---|
| July 15 | 한국어 | C# | https://developer.microsoft.com/reactor/events/26124/ |
| July 15 | 日本語 | C# | https://developer.microsoft.com/reactor/events/26137/ |
| July 17 | Español | C# | https://developer.microsoft.com/reactor/events/26146/ |
| July 18 | Tiếng Việt | C# | https://developer.microsoft.com/reactor/events/26138/ |
| July 18 | 한국어 | JavaScript | https://developer.microsoft.com/reactor/events/26121/ |
| July 22 | 한국어 | Python | https://developer.microsoft.com/reactor/events/26125/ |
| July 22 | Português | Java | https://developer.microsoft.com/reactor/events/26120/ |
| July 23 | 中文 | C# | https://developer.microsoft.com/reactor/events/26142/ |
| July 23 | Türkçe | C# | https://developer.microsoft.com/reactor/events/26139/ |
| July 23 | Español | JavaScript/TypeScript | https://developer.microsoft.com/reactor/events/26119/ |
| July 23 | Português | C# | https://developer.microsoft.com/reactor/events/26123/ |
| July 24 | Deutsch | Java | https://developer.microsoft.com/reactor/events/26144/ |
| July 24 | Italiano | Python | https://developer.microsoft.com/reactor/events/26145/ |

Don't miss out on this opportunity to learn about MCP and enhance your skills. Mark your calendars and join us for the Let's Learn - MCP workshops. We look forward to seeing you there!

⭐ Register: https://aka.ms/letslearnmcp

Get ready for the event!

We recommend you set up your machine prior to the event so that you can follow along with the live session. Ensure you have:

- Visual Studio Code configured for your chosen programming language
- Docker
- A GitHub Copilot account (you can sign up for GitHub Copilot for free)
- The MCP For Beginners course bookmarked

If you're completely new to MCP, watch this video for an introduction: Introduction to Model Context Protocol (MCP) Servers | DEM517

But wait, there's more!
After the Let's Learn event, you'll be ready to join us for MCP Dev Days on July 29th and 30th. In this two-day virtual event, you'll explore the growing ecosystem around the Model Context Protocol (MCP), a standard that bridges AI models and the tools they rely on. The event will include sessions from MCP experts at Microsoft and beyond. For more information, check out the event page: https://aka.ms/mcpdevdays
Create a Database Schema and REST APIs with a Single Prompt Using GitHub Copilot in VS Code

The Age of Prompt-Driven Development

A significant shift is underway in the way we develop software. AI agents and prompt-based tools are shaping modern development, and as a developer you don't want to miss this shift; knowing how to use these tools puts you ahead. Instead of writing endless boilerplate, you can now describe what you want, and AI will generate code, create your database, connect APIs, and even deploy your app. New tools like Cursor, Windsurf, Lovable, and Bolt are rising fast, and you can create stunning apps and websites by chatting with AI.

Even with all these fancy tools, full-stack apps still need a solid backend, and that means data. Every application needs to work with data. Whether you're building a blog, a booking platform, or an AI agent, you'll need to store and retrieve information. That usually means a real database like PostgreSQL, MySQL, or MongoDB (unless you're treating Excel or Google Sheets like a backend, which… we've all done once). So schema design, database setup, and API generation can't be skipped.

I decided to experiment with automating the process of designing a database schema, running a database, and managing data using just prompts with GitHub Copilot in VS Code. Working with databases is often repetitive work that slows developers down. These are the issues we always face when setting up a database from scratch.

Every App Needs a Database: It's Time to Simplify It

You start with a manual schema setup. You have to create tables and think through relationships, indexes, data types, and naming. You map tables to objects using ORM libraries and build APIs to access that data. It's easy to miss things or overcomplicate at an early stage.

Schema changes are painful. Your app evolves: you rename a column, split a table, or add a new relation. Now you need to write migrations, update your ORM, avoid downtime, and hope nothing breaks in staging or production.

Every change triggers more boilerplate. Once the schema changes, you usually:

- Update model files
- Fix serializers or DTOs
- Rewrite REST API endpoints or GraphQL resolvers
- Modify test data and fixtures

That's a lot of work for just one change.

Team coordination becomes tricky. In team projects, syncing schema changes between developers often leads to merge conflicts, broken migrations, or inconsistent environments.

But now? With the rise of AI code generation tools like GitHub Copilot, you can extend Copilot Chat with the Model Context Protocol (MCP) from external providers and create a fully working database schema with a single prompt, right inside VS Code. And it'll save you hours every week. Let me show you how you can achieve this.

Let's Build: A Travel Agency App Schema

What You Need

- VS Code (with GitHub Copilot enabled)
- uv installed (the GibsonAI CLI is run through uvx)
- GibsonAI: sign up for a free account. This tool turns your prompt into a complete schema, deploys a serverless database, and gives you a live REST API for managing data.

Step 1: Set Up the GibsonAI CLI and Log In

Before using the GibsonAI MCP server, install GibsonAI's CLI and log in:

```bash
uvx --from gibson-cli@latest gibson auth login
```

This logs you into your GibsonAI account so you can start using all CLI features.

Step 2: Enable the MCP Server in VS Code

To use the GibsonAI MCP server inside your VS Code project, you'll need to add a configuration file. Create a file called mcp.json in the .vscode/ folder of your project (or of an empty folder). This file defines which GibsonAI MCP server to use for this project.
Copy and paste the following content into the .vscode/mcp.json file:

```json
{
  "inputs": [],
  "servers": {
    "gibson": {
      "type": "stdio",
      "command": "uvx",
      "args": ["--from", "gibson-cli@latest", "gibson", "mcp", "run"]
    }
  }
}
```

Once this file is added, the GibsonAI tools inside VS Code will connect to the MCP server.

Step 3: Describe Your Travel App Schema in a Prompt

Open GitHub Copilot Chat in VS Code, switch to Agent mode, and select an LLM such as GPT-4.1 or GPT-4o. You should see the available tools from GibsonAI. Then enter a prompt like this:

"Create a database for a travel agency. It should include tables for destinations, bookings, users, and reviews. Each user can make bookings and write reviews. Each destination has a name, description, price, and rating."

GibsonAI reads your prompt, creates a new database project, and magically generates:

- A complete relational schema
- A visual Entity-Relationship Diagram (ERD)
- Proper foreign key constraints
- UUIDs, timestamps, and standard fields
- A clean MySQL or Postgres structure

Step 4: Deploy Your Schema and Enable CRUD APIs

Go to the GibsonAI app, log in, and open your newly created project. There, you can review the schema. Click "Deploy" to launch it; alternatively, you can use Copilot Chat to deploy the database. GibsonAI hosts the serverless MySQL database. You can now grab the database connection string and connect it to an existing app, or access the live CRUD APIs and use them directly. You now have a working backend without writing a single SQL query, and you can plug these APIs into your frontend or backend with no need to write REST controllers for typical CRUD operations.

GibsonAI also lets you share your database project schema with others. Feel free to clone the travel agency database I created for the demo: https://app.gibsonai.com/clone/rRZ4wD9HDCdHO
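If you'd rather call the generated data API from backend code than from a frontend, it's just plain HTTP. Here is a minimal, hypothetical C# sketch; the base URL, route, and header name are placeholders I made up for illustration, so substitute the real endpoint and authentication details shown in your own GibsonAI project.

```csharp
// Hypothetical sketch only: the base URL, route, and header name below are
// placeholders - take the real values from your own GibsonAI project settings.
using var client = new HttpClient
{
    BaseAddress = new Uri("https://example-travel-project.example.dev/") // placeholder
};
client.DefaultRequestHeaders.Add("X-API-Key", "<your-project-api-key>");  // placeholder header

// Fetch the destinations created from the travel agency prompt.
var response = await client.GetAsync("v1/destinations");                  // placeholder route
response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync());
```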
Step 5: Let Copilot Help You Build Around the API

Now that your schema and API are live, use GitHub Copilot to build UI components with React or any other frontend framework. GitHub Copilot + GibsonAI MCP is the fastest way to go from prompt to full-featured app.

Final Thoughts

The future of development is not about using more AI-generated code; it's about writing fewer, smarter prompts and letting AI handle the slow, repetitive, or painful tasks so you can focus fully on innovation. You can already boost your development workflow with GitHub Copilot Agent Mode: it provides a powerful set of tools that enable agents to run SQL queries, create tables, design schemas, import CSV files, and more. Give it a try. The next time you start a project, open VS Code, write a prompt, and let the database build itself. Want to learn more about MCP? See the MCP for Beginners resources from Microsoft.

How to use Comments as Prompts in GitHub Copilot for Visual Studio

GitHub Copilot is a coding assistant powered by Artificial Intelligence (AI), which can run in various environments and help you be more efficient in your daily coding tasks. In this new short video, Bruno shows you how to use inline comments to generate code with GitHub Copilot.
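As a purely illustrative example of the technique (not taken from the video), you write the intent as a comment and let Copilot propose the implementation. The method body below, including the IsValidEmail name and the regex, is the kind of completion Copilot typically suggests from such a comment, not a guaranteed output:

```csharp
using System.Text.RegularExpressions;

public static class EmailHelpers
{
    // Illustrative only: the comment below acts as the prompt; the method body is
    // the kind of completion Copilot typically suggests. Actual suggestions vary.

    // Return true if the given string is a valid email address, using a simple regex.
    public static bool IsValidEmail(string input)
        => Regex.IsMatch(input, @"^[^@\s]+@[^@\s]+\.[^@\s]+$");
}
```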