serverless
Announcing Early Preview: BYO Remote MCP Server on Azure Functions
If you’ve already built Model Context Protocol (MCP) servers with the MCP SDKs and wished you could turn them into world-class remote MCP servers using a hyperscale, serverless platform, then this one’s for you! We’ve published samples showing how to host bring-your-own (BYO) remote MCP servers on Azure Functions, so you can run the servers you’ve already built with the MCP SDKs—Python, Node, and .NET—with minimal changes and full serverless goodness.

Why this is exciting
Keep your code. If you’ve already implemented servers with the MCP SDKs (Python, Node, .NET), deploy them to Azure Functions as remote MCP servers with just one line of code change.
Serverless scale when you need it. Functions on the Flex Consumption plan handles bursty traffic, scales out and back to zero automatically, and gives you serverless billing.
Secure by default. Your remote server endpoint is protected with function keys out of the box, with the option to layer on Azure API Management for an added authorization flow.

BYO vs. Functions Remote MCP extension—pick the path that fits
The BYO option complements the existing Azure Functions MCP extension:
Build and host with the Functions MCP extension: You can build stateful MCP servers with the MCP tool trigger and binding and host them on Functions. Support for SSE is available today, with streamable HTTP coming soon.
Host a BYO remote MCP server (this announcement): If you already have a server built with the MCP SDKs, or you prefer those SDKs’ ergonomics, host it as-is on Functions and keep your current codebase.
Either way, you benefit from Functions’ serverless platform: secure access and auth, burst scale, event-driven scale from 0 to N, and pay-for-what-you-use.

What’s supported in this early preview
Servers built with the Python, Node, and .NET SDKs
Debug locally with func start in Visual Studio or Visual Studio Code; deploy with the Azure Developer CLI (azd up) to get your remote MCP server quickly deployed to Azure Functions
Stateless servers using the streamable HTTP transport, with guidance coming soon for stateful servers (a minimal sketch of such a server appears at the end of this post)
Hosting on the Flex Consumption plan

Try it now!
Python: https://github.com/Azure-Samples/mcp-sdk-functions-hosting-python
Node: https://github.com/Azure-Samples/mcp-sdk-functions-hosting-node
.NET: https://github.com/Azure-Samples/mcp-sdk-functions-hosting-dotnet

Each repo includes the sample weather MCP server implemented with the MCP SDK for that language. You’ll find instructions on how to run the server locally with Azure Functions Core Tools and deploy with azd up in minutes. Once deployed, you can connect to the remote server from an MCP client. The samples use Visual Studio Code, but other clients like Claude can also be used.

Provide feedback to shape the feature
Tell us what you need next: identity flows, diagnostics, more languages, or any other features. Your feedback will shape how we take this early preview to the next level!
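To make the "stateless servers using the streamable HTTP transport" point concrete, here is a minimal sketch of such a server in Node. It assumes the official MCP TypeScript SDK (@modelcontextprotocol/sdk) and Express; the tool name and port are illustrative, and the linked samples may wire things up differently for Functions hosting.

import express from "express";
import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  // Stateless mode: build a fresh server and transport per request,
  // so any instance can serve any request.
  const server = new McpServer({ name: "weather", version: "1.0.0" });

  server.tool(
    "get_forecast", // hypothetical tool name for illustration
    { city: z.string() },
    async ({ city }) => ({
      content: [{ type: "text", text: `Forecast for ${city}: sunny` }],
    })
  );

  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // undefined => stateless
  });
  res.on("close", () => {
    transport.close();
    server.close();
  });

  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

// Port is illustrative; the hosting samples show how this is wired up on Functions.
app.listen(process.env.PORT || 3000);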
Announcing Native Azure Functions Support in Azure Container Apps

Azure Container Apps is introducing a new, streamlined method for running Azure Functions directly in Azure Container Apps (ACA). This integration allows you to leverage the full features and capabilities of Azure Container Apps while benefiting from the simplicity of auto-scaling provided by Azure Functions.

With the new native hosting model, you can deploy Azure Functions directly onto Azure Container Apps using the Microsoft.App resource provider by setting the "kind=functionapp" property on the container app resource. You can deploy Azure Functions using ARM templates, Bicep, the Azure CLI, and the Azure portal.

Get started today and explore the complete feature set of Azure Container Apps, including multi-revision management, easy authentication, metrics and alerting, health probes, and many more. To learn more, visit: https://aka.ms/fnonacav2
Strategic Solutions for Seamless Integration of Third-Party SaaS

Modern systems must be modular and interoperable by design. Integration is no longer a feature; it’s a requirement. Developers are expected to build architectures that connect easily with third-party platforms, but too often, core systems are designed in isolation. This disconnect creates friction for downstream teams and slows delivery.

At Microsoft, SaaS platforms like SAP SuccessFactors and Eightfold support Talent Acquisition by handling functions such as requisition tracking, application workflows, and interview coordination. These tools help reduce costs and free up engineering focus for high-priority areas like Azure and AI. The real challenge is integrating them with internal systems such as Demand Planning, Offer Management, and Employee Central.

This blog post outlines a strategy centered around two foundational components: an Integration and Orchestration Layer, and a Messaging Platform. Together, these enable real-time communication, consistent data models, and scalable integration. While Talent Acquisition is the use case here, the architectural patterns apply broadly across domains. Whether you're embedding AI pipelines, managing edge deployments, or building platform services, thoughtful integration needs to be built into the foundation, not bolted on later.
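As a purely illustrative sketch of the messaging side of such an architecture, an integration layer might publish normalized events (for example, an offer update) to a message broker that downstream systems subscribe to. The broker, topic name, environment variable, and event shape below are assumptions for illustration (Azure Service Bus is used here only as an example), not details from the post.

import { ServiceBusClient } from "@azure/service-bus";

// Assumed names: connection string variable, topic, and event schema are illustrative only.
const client = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION_STRING);
const sender = client.createSender("talent-acquisition-events");

// Publish a normalized event so downstream systems (offer management,
// demand planning, etc.) can react without point-to-point integrations.
await sender.sendMessages({
  contentType: "application/json",
  subject: "offer.updated",
  body: {
    eventType: "offer.updated",
    candidateId: "12345",
    source: "third-party-saas",
    occurredAt: new Date().toISOString(),
  },
});

await sender.close();
await client.close();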
Building Real-Time AI Apps with Model Context Protocol (MCP) and Azure Web PubSub

Overview
Model Context Protocol (MCP) is an open, standardized protocol that allows Large Language Models (LLMs) to interact with external tools and data sources in a consistent, extensible way. It gives models access to capabilities beyond their training data—such as calling APIs or querying databases. MCP builds on the idea of function calling, which lets LLMs invoke external operations by generating JSON payloads that match predefined schemas. While function calling was initially tied to proprietary formats (like OpenAI’s schemas), MCP unifies these patterns under a common JSON-RPC-based protocol. This makes it easier for developers to write tools once and expose them to many LLMs, regardless of vendor. In short: function calling gave LLMs actions. MCP gives those actions structure, discoverability, and interoperability.

In this article, we demonstrate how to give LLMs the ability to broadcast messages to connected web clients through MCP. As you go through the article, you will discover that when LLMs have the ability to publish messages, it unlocks useful, real-time app scenarios:
An LLM as a collaborative partner in co-editing applications (the app scenario this article explains).
An LLM as a proactive, smart data analyst that publishes time-sensitive analysis or alerts.

About the whiteboard app
The whiteboard this article is based on lets multiple users draw with basic shape tools on a shared canvas. The support for integration with LLMs via MCP even allows users to invite AI as an active participant, unlocking new possibilities for interactive teamwork.

Technologies used
Web frontend: Vue.js app
Backend: Node app using Express
Sync drawing activities among web clients: Azure Web PubSub
MCP server: implemented in JavaScript
MCP host: VS Code (what this article uses, but you are not limited to it)
For the complete code, please visit the GitHub repo.

Requirements for following along
Node.js, VS Code, and an Azure Web PubSub resource. Follow this link to create a resource if you don't have one already.

Roadmap ahead
Considering the many concepts and moving parts, we break the article into two parts.
Part 1: Run the whiteboard app locally
Set up and run the whiteboard app (single user)
Run the whiteboard app (multiple users)
Part 2: Add MCP support
Set up a whiteboard MCP server
Configure VS Code to discover the MCP server

Part 1: Run the whiteboard app locally

Run the whiteboard with a single user
Clone the repo, change directory into the whiteboard sample, install app dependencies, and build the project:

git clone https://github.com/Azure/azure-webpubsub.git
cd azure-webpubsub/samples/javascript/whiteboard
npm install
npm run build

Provide the Azure Web PubSub connection string
This application uses Azure Web PubSub to sync drawing activities among web clients. More on that later. For now, since it's a dependency of the app, we need to supply it. The app uses a connection string to authenticate with Azure Web PubSub. Locate your Azure Web PubSub resource on the Azure portal to find the "connection string" under "Settings > Keys".

On Linux, set the environment variable:
export Web_PubSub_ConnectionString="<connection_string>"

Or, on Windows:
SET Web_PubSub_ConnectionString="<connection_string>"

Start the app:
npm run start

If you inspect the `server.js` file, you will see that this app is an Express project which serves the web frontend at `port:8080` and handles syncing whiteboard drawing activities among users using Azure Web PubSub. We will explain the syncing part shortly.
But for now, you can open your web browser and visit `localhost:8080`; you should see the web frontend. After entering a username, you can play with the whiteboard by drawing a few shapes.

Syncing drawing activities
When the web frontend is successfully loaded, it goes to the `/negotiate` endpoint to fetch an access token from the Express server to connect with the Azure Web PubSub service. The underlying connection between a web client and Azure Web PubSub is a WebSocket connection. This persistent connection allows the web client to send and receive messages to and from the Azure Web PubSub service. When you draw on the whiteboard, the client code sends every action as a message to the `draw` group in the Azure Web PubSub service. Upon receiving the message, Azure Web PubSub broadcasts it to all the connected clients in the `draw` group. In effect, Azure Web PubSub syncs the drawing state of the whiteboard for all the users.

// Client code snippets from public/src.index.js
// The code uses the `sendToGroup` API offered by Azure Web PubSub
diagram.onShapeUpdate((i, m) => ws.sendToGroup('draw', { name: 'updateShape', data: [author, i, m] }, "json"));
diagram.onShapePatch((i, d) => ws.sendToGroup('draw', { name: 'patchShape', data: [author, i, d] }, "json", {fireAndForget: true}));

In the browser's network panel, you can inspect the WebSocket messages.

Run the whiteboard with multiple users
To simulate multiple users collaborating on the whiteboard locally, you can open another browser tab. You would expect that the drawing activities in one browser tab should be synced to the other, since both whiteboard apps (web clients) are connected to the Azure Web PubSub service. However, you don't see the syncing happening at the moment. There's nothing wrong with your expectations. The gotcha here is that Azure Web PubSub, being a cloud service, doesn't know how to reach your web frontend, which is running on `localhost`. This is a common pain point experienced by Azure Web PubSub customers. To ensure a smoother developer experience, the team introduced a command-line tool to expose localhost so that it's reachable from the internet.

Install the tool:
npm install -g @azure/web-pubsub-tunnel-tool

Configure the tool
Locate your Web PubSub resource on the Azure portal and, under "Settings > Settings", create a new hub named `sample_draw` and configure an event handler as follows:
URL Template: `tunnel:///eventhandler/{event}`
User Event Pattern: `message`
System Events: `connect`, `connected`, `disconnected`
You should see something like this when you're finished with the configuration.

Run the awps-tunnel tool:
export WebPubSubConnectionString="<connection_string>"
awps-tunnel run --hub sample_draw --upstream http://localhost:8080

Now when you draw on one whiteboard, the drawing activities should be synced between the two browser tabs.
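Before moving on to MCP, it may help to see what the server-side half of the syncing flow can look like. Below is a stripped-down sketch of a `/negotiate` endpoint using the `@azure/web-pubsub` server SDK; the query parameter, roles, and exact response shape are illustrative and may differ from the sample's actual `server.js`.

import express from "express";
import { WebPubSubServiceClient } from "@azure/web-pubsub";

const app = express();
const serviceClient = new WebPubSubServiceClient(
  process.env.Web_PubSub_ConnectionString, // the env var set earlier
  "sample_draw"                            // the hub name configured above
);

// The web client calls this endpoint to obtain a client access URL,
// then opens its WebSocket connection to Azure Web PubSub with it.
app.get("/negotiate", async (req, res) => {
  const token = await serviceClient.getClientAccessToken({
    userId: req.query.id, // username entered in the UI (assumed parameter name)
    roles: ["webpubsub.joinLeaveGroup.draw", "webpubsub.sendToGroup.draw"],
  });
  res.json({ url: token.url });
});

app.listen(8080);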
Part 2: Add MCP support
Now that we have the collaborative whiteboard running, let's invite an LLM as another collaborator. For an LLM to participate on the whiteboard, the LLM needs the drawing capabilities that human users have access to. As mentioned earlier, MCP makes it easy to provide these capabilities to LLMs. MCP follows the familiar client-server architecture: the server offers capabilities for the clients to request, and the MCP client and server communicate following a specially defined protocol. Another important concept is the MCP host. An MCP host contains an MCP client and has access to LLMs. The MCP host we are going to use is VS Code Copilot.

The flow for the whiteboard looks like the following:
We use VS Code Copilot as the MCP host. (No work on our end.)
We ask an LLM to draw on the whiteboard, using natural language, for example, "draw a house with many trees around it".
The LLM decodes the user's intent, looks at the capabilities included in the prompt as additional context, and comes up with a drawing sequence. (No work on our end. VS Code Copilot handles the interactions with LLMs.)
The MCP client receives the drawing sequence from the LLM. (No work on our end.)
The MCP client requests the MCP server to carry out the drawing sequence. (No work on our end.)
The MCP server draws on the whiteboard using Azure Web PubSub. (We need to create the MCP server.)

As you can see from the items above, the bulk of the work is done for us by VS Code Copilot's support for MCP. We only need to create the MCP server, whose responsibility is to fulfill MCP client requests.

Set up the MCP server
In the whiteboard directory, you will find an `mcpserver` directory. Make sure you are at the root of the whiteboard directory.

cd mcpserver
npm install

If you inspect `mcpserver/index.js`, you will see the server makes one tool, `add_or_update_shape`, available.

// Code snippet from `mcpserver/index.js`
server.tool(
  "add_or_update_shape",
  "Add or update a shape on the whiteboard"
  // Code omitted for brevity
)

The callback provided when registering this tool is the code that runs when an MCP client requests `add_or_update_shape`. It uses Azure Web PubSub's `sendToGroup` API, the same as we saw in Part 1.

ws.sendToGroup(
  "draw",
  {
    name: "updateShape",
    data: ["AI", id, message],
  },
  "json",
);

That's it. After installing the dependencies, this server is ready to run and serve MCP clients. Since VS Code Copilot can automatically start the server for us, we don't need to do anything other than installing the dependencies.
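Pulling the two fragments above together, the tool registration might look roughly like the following. This is a hedged reconstruction rather than the sample's exact code: the parameter names, schema, and the way the Web PubSub client connection is obtained are assumptions for illustration; see `mcpserver/index.js` for the real wiring.

import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { WebPubSubClient } from "@azure/web-pubsub-client";

// Assumption: a client access URL for the hub is supplied via an environment
// variable; the actual sample may obtain its Web PubSub connection differently.
const ws = new WebPubSubClient(process.env.WEB_PUBSUB_CLIENT_URL);
await ws.start();

const server = new McpServer({ name: "whiteboard", version: "1.0.0" });

server.tool(
  "add_or_update_shape",
  "Add or update a shape on the whiteboard",
  {
    // Parameter names and types are illustrative only.
    id: z.string().describe("Identifier of the shape to add or update"),
    message: z.any().describe("Shape definition (kind, coordinates, color, ...)"),
  },
  async ({ id, message }) => {
    // Broadcast the drawing instruction to every client in the `draw` group,
    // mirroring the human-driven client code from Part 1.
    ws.sendToGroup("draw", { name: "updateShape", data: ["AI", id, message] }, "json");
    return { content: [{ type: "text", text: `Shape ${id} sent to the whiteboard` }] };
  },
);

// The MCP client (VS Code Copilot) talks to this server over stdio.
await server.connect(new StdioServerTransport());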
Configure VS Code to discover the whiteboard MCP server
VS Code has great support for MCP. In the command panel, follow the UI to add an MCP server. Select `Command (stdio)` as the type of MCP server to add. In our example, the MCP server and MCP client communicate over standard input/output. As you can see from the dropdown, you can also add MCP servers that communicate with MCP clients over HTTP, but that's beyond the scope of this article.

Paste in the full path to `mcpserver/index.js`. When you are finished, you will see a new file has been created for you. VS Code Copilot will use this file to discover MCP servers and run them if necessary. Since the whiteboard MCP server is a Node project, the command is set to `node` with the path to the server code file as an argument.

Open the VS Code Copilot panel and switch to agent mode. MCP is available under agent mode. The model I chose is `GPT-4o`. Click on the wrench icon to configure tools. You will see a dropdown appearing from the command panel, which lists the tools VS Code has discovered. After clicking "OK", VS Code will start the MCP server for you if it isn't running already. You can verify that the whiteboard MCP server is running by listing the servers.

Enjoy the fruits of our labor
If you've come this far in the article, it's time to enjoy the fruits of our labor. One thing to highlight is that the actual drawing actions are performed by the MCP server, and the drawing instructions are delivered through the Azure Web PubSub service, which maintains a persistent connection with every whiteboard web client. VS Code Copilot, as the MCP host, facilitates the communication among the model, the MCP client, and the MCP server.

Here's a drawing GPT-4o produced for the ask "Draw a meadow with many trees and sheep. Also, make sure there's a winding river running through it." Impressive!

Recap
In this article, we demonstrated a scenario where an LLM can be invited by human users to participate on a whiteboard. But one can imagine that an LLM can take a more proactive role or, as people like to say, exhibit more agentic behaviors. Take a typical monitoring system as an example. Existing monitoring systems are based on "fixed metrics": when certain pre-defined rules are met, alerts are sent out as notifications or to a dashboard. An agent can complement the fixed-metric approach with a more flexible and potentially more powerful workflow: intelligently analyzing fresh and historical data, deriving insights or alerts, and delivering them to users in real time.
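To make that monitoring idea slightly more concrete, here is a small, hypothetical sketch of the delivery step: once an agent has derived an alert, it could broadcast it to connected dashboards through Azure Web PubSub's server SDK. The hub name, group name, and alert shape are assumptions, not part of the whiteboard sample.

import { WebPubSubServiceClient } from "@azure/web-pubsub";

// Hypothetical hub and group names for a monitoring-dashboard scenario.
const serviceClient = new WebPubSubServiceClient(
  process.env.Web_PubSub_ConnectionString,
  "monitoring"
);

// Whatever analysis the agent performed happens upstream of this call;
// this only shows the real-time delivery step.
await serviceClient.group("alerts").sendToAll({
  severity: "warning",
  summary: "Checkout latency trending well above the 7-day baseline",
  derivedAt: new Date().toISOString(),
});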
Introducing Azure AI Travel Agents: A Flagship MCP-Powered Sample for AI Travel Solutions

We are excited to introduce AI Travel Agents, a sample application with enterprise functionality that demonstrates how developers can coordinate multiple AI agents (written in multiple languages) to explore travel planning scenarios. It's built with LlamaIndex.TS for agent orchestration, Model Context Protocol (MCP) for structured tool interactions, and Azure Container Apps for scalable deployment.

TL;DR: Experience the power of MCP and Azure Container Apps with The AI Travel Agents! Try out the live demo locally on your computer for free to see real-time agent collaboration in action. Share your feedback on our community forum. We're already planning enhancements, like new MCP-integrated agents, enabling secure communication between the AI agents and MCP servers, and more.

NOTE: This example uses mock data and is intended for demonstration purposes rather than production use.

The Challenge: Scaling Personalized Travel Planning
Travel agencies grapple with complex tasks: analyzing diverse customer needs, recommending destinations, and crafting itineraries, all while integrating real-time data like trending spots or logistics. Traditional systems falter with latency, scalability, and coordination, leading to delays and frustrated clients. The AI Travel Agents tackles these issues with a technical trifecta:
LlamaIndex.TS orchestrates six AI agents for efficient task handling.
MCP equips agents with travel-specific data and tools.
Azure Container Apps ensures scalable, serverless deployment.
This architecture delivers operational efficiency and personalized service at scale, transforming chaos into opportunity.

LlamaIndex.TS: Orchestrating AI Agents
The heart of The AI Travel Agents is LlamaIndex.TS, a powerful agentic framework that orchestrates multiple AI agents to handle travel planning tasks. Built on a Node.js backend, LlamaIndex.TS manages agent interactions in a seamless and intelligent manner:
Task Delegation: The Triage Agent analyzes queries and routes them to specialized agents, like the Itinerary Planning Agent, ensuring efficient workflows.
Agent Coordination: LlamaIndex.TS maintains context across interactions, enabling coherent responses for complex queries, such as multi-city trip plans.
LLM Integration: Connects to Azure OpenAI, GitHub Models, or any local LLM using Foundry Local for advanced AI capabilities.
LlamaIndex.TS's modular design supports extensibility, allowing new agents to be added with ease. LlamaIndex.TS is the conductor, ensuring agents work in sync to deliver accurate, timely results. Its lightweight orchestration minimizes latency, making it ideal for real-time applications.
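To illustrate the triage-and-delegate idea in isolation (this is a library-agnostic sketch, not the actual LlamaIndex.TS code from the sample), the routing pattern boils down to something like this:

// Library-agnostic illustration of triage-based delegation; the real sample
// implements this with LlamaIndex.TS agents and MCP tools.
const specializedAgents = {
  "customer-query": async (q) => `Analyzed intent and sentiment for: ${q}`,
  "destination-recommendation": async (q) => `Suggested destinations for: ${q}`,
  "itinerary-planning": async (q) => `Draft itinerary for: ${q}`,
};

// A triage step classifies the query (in the real app an LLM does this),
// then the matching specialized agent handles it.
async function triage(query) {
  const route = /itinerary|schedule/i.test(query)
    ? "itinerary-planning"
    : /where|destination/i.test(query)
      ? "destination-recommendation"
      : "customer-query";
  return specializedAgents[route](query);
}

console.log(await triage("Plan a 5-day itinerary for Lisbon in May"));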
MCP: Fueling Agents with Data and Tools
The Model Context Protocol (MCP) empowers AI agents by providing travel-specific data and tools, enhancing their functionality. MCP acts as a data and tool hub:
Real-Time Data: Supplies up-to-date travel information, such as trending destinations or seasonal events, via the Web Search Agent using Bing Search.
Tool Access: Connects agents to external tools, like the .NET-based customer query analyzer for sentiment analysis, the Python-based itinerary planning tool for trip schedules, or destination recommendation tools written in Java.
For example, when the Destination Recommendation Agent needs current travel trends, MCP delivers via the Web Search Agent. This modularity allows new tools to be integrated seamlessly, future-proofing the platform. MCP's role is to enrich agent capabilities, leaving orchestration to LlamaIndex.TS.

Azure Container Apps: Scalability and Resilience
Azure Container Apps powers The AI Travel Agents sample application with a serverless, scalable platform for deploying microservices. It ensures the application handles varying workloads with ease:
Dynamic Scaling: Automatically adjusts container instances based on demand, managing booking surges without downtime.
Polyglot Microservices: Supports .NET (Customer Query), Python (Itinerary Planning), Java (Destination Recommendation), and Node.js services in isolated containers.
Observability: Integrates tracing, metrics, and logging, enabling real-time monitoring.
Serverless Efficiency: Abstracts infrastructure, reducing costs and accelerating deployment.
Azure Container Apps' global infrastructure delivers low-latency performance, critical for travel agencies serving clients worldwide.

The AI Agents: A Quick Look
While MCP and Azure Container Apps are the stars, they support a team of multiple AI agents that drive the application's functionality. Built and orchestrated with LlamaIndex.TS via MCP, these agents collaborate to handle travel planning tasks:
Triage Agent: Directs queries to the right agent, leveraging MCP for task delegation.
Customer Query Agent: Analyzes customer needs (emotions, intents), using .NET tools.
Destination Recommendation Agent: Suggests tailored destinations, using Java.
Itinerary Planning Agent: Crafts efficient itineraries, powered by Python.
Web Search Agent: Fetches real-time data via Bing Search.
These agents rely on MCP's real-time communication and Azure Container Apps' scalability to deliver responsive, accurate results. It's worth noting, though, that this sample application uses mock data for demonstration purposes. In a real-world scenario, the application would communicate with an MCP server that is plugged into a real production travel API.

Key Features and Benefits
The AI Travel Agents offers features that showcase the power of MCP and Azure Container Apps:
Real-Time Chat: A responsive Angular UI streams agent responses via MCP's SSE, ensuring fluid interactions.
Modular Tools: MCP enables tools like analyze_customer_query to integrate seamlessly, supporting future additions.
Scalable Performance: Azure Container Apps ensures the UI, backend, and the MCP servers handle high traffic effortlessly.
Transparent Debugging: An accordion UI displays agent reasoning, providing backend insights.
Benefits:
Efficiency: LlamaIndex.TS streamlines operations.
Personalization: MCP's data drives tailored recommendations.
Scalability: Azure ensures reliability at scale.

Thank You to Our Contributors!
The AI Travel Agents wouldn't exist without the incredible work of our contributors. Their expertise in MCP development, Azure deployment, and AI orchestration brought this project to life. A special shoutout to:
Pamela Fox – Leading the development of the Python MCP server.
Aaron Powell and Justin Yoo – Leading the development of the .NET MCP server.
Rory Preddy – Leading the development of the Java MCP server.
Lee Stott and Kinfey Lo – Leading the development of the Local AI Foundry.
Anthony Chu and Vyom Nagrani – Leading the Azure Container Apps roadmap.
Matt Soucoup and Julien Dubois – Leading the ACA DevRel strategy.
Wassim Chegham – Architected MCP and backend orchestration.
And many more! See the GitHub repository for all contributors. Thank you for your dedication to pushing the boundaries of AI and cloud technology!

Try It Out
Experience the power of MCP and Azure Container Apps with The AI Travel Agents!
Try out the live demo locally on your computer for free to see real-time agent collaboration in action.

Conclusion
Developers can explore the open-source project on GitHub today, with setup and deployment instructions. Share your feedback on our community forum. We're already planning enhancements, like new MCP-integrated agents, enabling secure communication between the AI agents and MCP servers, and more. This is still a work in progress and we welcome all kinds of contributions. Please fork and star the repo to stay tuned for updates!

◾️ We would love your feedback; continue the discussion in the Azure AI Foundry Discord: aka.ms/foundry/discord

On behalf of the Microsoft DevRel Team.
Azure Functions Flex Consumption is now generally available

We are excited to announce that Azure Functions Flex Consumption is now generally available. This hosting plan provides the highest performance for Azure Functions, with concurrency-based scaling for both HTTP and non-HTTP triggers, scale from zero to 1000 instances, and no cold start with the Always Ready feature. Flex Consumption also allows you to enjoy seamless integration with your virtual network at no extra cost, ensuring secure and private communication, with no considerable impact on your app's scale-out performance. Learn more about How to achieve high HTTP scale with Azure Functions Flex Consumption, the engineering innovation behind it, and project Legion, the platform behind Flex Consumption.

In addition to fast scaling based on per-instance concurrency, you can choose between 2048 MB and 4096 MB instance sizes. As the function app receives requests, it will automatically scale from zero to as many instances of that instance size as needed, based on per-instance concurrency, and back to zero for cost efficiency when there are no more requests to process. You can also take advantage of the built-in integration with Azure Load Testing and the Performance Optimizer to optimize your HTTP functions for performance and cost.

Flex Consumption is now generally available for .NET 8 on the isolated worker model, Java 11, Java 17, Node 20, PowerShell 7.4, Python 3.10, and Python 3.11 in Australia East, East Asia, East US, North Europe, Southeast Asia, Sweden Central, UK South, and West US 2, and in preview in East US 2, South Central US, and West US 3. By December 9th, 2024, .NET 9 will also be generally available in Australia East, East Asia, East US, North Europe, Southeast Asia, Sweden Central, and UK South.

Besides the currently supported DevOps and dev tools like VS Code, Java tooling, Azure Pipelines tasks, and GitHub Actions, you can now use the latest Visual Studio 2022 v17.12 update or newer to create and publish to Flex Consumption apps.

The Flex Consumption plan offers competitive pricing with flexible options to fit your needs, with GA pricing taking effect on December 1, 2024. For detailed pricing information, please refer to the pricing page.

Customer adoption and scenarios
We have been working with several internal and external customers during the public preview period, with hundreds of external customers actively using Flex Consumption.

"At Yggdrasil, we immediately started adopting Flex Consumption functions when they went into public preview, as they offer the combination of cost-efficiency, scalability, and security features we need to run our company. We already have 100 Flex Consumption functions running in production, and expect to move at least another 50 functions now that the product has reached GA. We migrated to Flex from Consumption to have VNet integration and private endpoints." – Andreas Strandfelt, Partner & Senior Cloud Specialist at Yggdrasil Commodities ApS

"What really matters to us is that the app scales up and down based on demand. Azure Functions Flex Consumption is very appealing to us because of how it dynamically scales based on the number of messages that are queued up in Azure Event Hubs." – Stephan Miehe, GitHub Senior Director. Public case study

Microsoft AI: "We had a need to process a large queue, representing a significant volume of data with inconsistent availability.
Azure Functions Flex Consumption dramatically simplified the code footprint needed to perform this embarrassingly parallel task and helped us complete it in a much shorter timeframe than we had expected." – Craig Presti, Office of the CTO, Microsoft AI project

Going forward
In the upcoming months we look forward to rolling out even more features to Flex Consumption, including:
Availability zones: Enabling availability zones will be possible for new and existing Flex Consumption apps.
512 MB instance size: We will introduce a new, smaller instance size for more granular control.
Enhanced tooling support: PowerShell modules and Terraform AzureRM support.
New language versions: Support for the latest language versions like Node 22, Python 3.12, and Java 21.
Expanded regional availability: The number of regions will continue to expand in early 2025, with UAE North, Central US, West US 3, South Central US, East US 2, West US, Canada Central, France Central, and Norway East coming first.
Metrics support: Full Azure Monitor metrics support for Flex Consumption apps.
Deployment improvements: Zero-downtime deployment to ensure no disruption to running executions.
More triggers: Kafka and SQL triggers.
Closing features: Addressing the limitations identified in Considerations.
Please let us know which ones are most important to you!

Get started!
Explore our reference samples, quickstarts, and comprehensive documentation to get started with the Azure Functions Flex Consumption hosting plan today!
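If you want something concrete to deploy while trying the plan, a function app can be as small as a single HTTP handler. The sketch below uses the Azure Functions Node.js v4 programming model (one of the runtimes listed above); it is a generic starter, not code taken from the referenced samples.

import { app } from "@azure/functions";

// Minimal HTTP-triggered function; Flex Consumption scales instances out and
// back to zero based on per-instance concurrency as requests arrive.
app.http("hello", {
  methods: ["GET"],
  authLevel: "anonymous",
  handler: async (request, context) => {
    const name = request.query.get("name") ?? "world";
    context.log(`Handling request for ${name}`);
    return { status: 200, jsonBody: { message: `Hello, ${name}!` } };
  },
});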
Announcing Public Preview of Azure Functions Flex Consumption

We're thrilled to announce the public preview of Azure Functions Flex Consumption, a new Linux-based hosting plan which offers the features in Consumption that you have been waiting for: networking and fast-scaling features on a serverless billing model!
Create a Database Schema and REST APIs with a Single Prompt Using GitHub Copilot in VS Code

The Age of Prompt-Driven Development
A significant shift is underway in the way we develop software. AI agents and prompt-based tools are shaping modern development. As a developer, you don't want to miss this shift; knowing how to use these tools puts you ahead. Instead of writing endless boilerplate, you can now describe what you want, and AI will generate code, create your database, connect APIs, and even deploy your app. New tools like Cursor, Windsurf, Lovable, and Bolt are rising fast. You can create stunning apps and websites by chatting with AI.

Even with all these fancy tools, full-stack apps still need a solid backend, and that means data. Every application needs to work with data. Whether you're building a blog, a booking platform, or an AI agent, you'll need to store and retrieve information. That usually means using a real database like PostgreSQL, MySQL, or MongoDB (unless you're treating Excel or Google Sheets like a backend, which… we've all done once). So schema design, database setup, and API generation can't be skipped.

I decided to experiment with automating the process of designing a database schema, running a database, and managing data using just prompts with GitHub Copilot in VS Code. Working with databases is often repetitive work that slows developers down. These are the issues we always face when setting up a database from scratch.

Every App Needs a Database — It’s Time to Simplify It
You start with a manual schema setup. You have to create tables and think through relationships, indexes, data types, and naming. You map tables to objects using ORM libraries and build APIs to access that data. It's easy to miss things or overcomplicate at an early stage.

Schema changes are painful. Your app evolves. You rename a column, split a table, or add a new relation. Now you need to write migrations, update your ORM, avoid downtime, and hope nothing breaks in staging or production.

Every change triggers more boilerplate. Once the schema changes, you usually:
Update model files
Fix serializers or DTOs
Rewrite REST API endpoints or GraphQL resolvers
Modify test data and fixtures
That's a lot of work for just one change.

Team coordination becomes tricky. In team projects, syncing schema changes between developers often leads to merge conflicts, broken migrations, or inconsistent environments.

But now? With the rise of AI code generation tools like GitHub Copilot, you can extend Copilot Chat with the Model Context Protocol (MCP) from external providers, and you can create a fully working database schema with a single prompt — right inside VS Code. And it'll save you hours every week. Let me show you how you can achieve this.

Let's Build: A Travel Agency App Schema

What You Need
VS Code (with GitHub Copilot enabled)
uv installed
GibsonAI: sign up for a free account. This tool turns your prompt into a complete schema, deploys a serverless database, and gives you a live REST API for managing data.

Step 1: Set Up the GibsonAI CLI and Log In
Before using the GibsonAI MCP server, install GibsonAI's CLI and log in:

uvx --from gibson-cli@latest gibson auth login

This logs you into your GibsonAI account so you can start using all CLI features.

Step 2: Enable the MCP Server in VS Code
To use the GibsonAI MCP server inside your VS Code project, you'll need to add a configuration script. Create a file called mcp.json in the .vscode/ folder of your project (or of an empty folder). This file defines which GibsonAI MCP server to use for this project.
Copy and paste the following content into the .vscode/mcp.json file:

{
  "inputs": [],
  "servers": {
    "gibson": {
      "type": "stdio",
      "command": "uvx",
      "args": ["--from", "gibson-cli@latest", "gibson", "mcp", "run"]
    }
  }
}

Once this file is added, GibsonAI tools inside VS Code will connect to the MCP server.

Step 3: Describe Your Travel App Schema in a Prompt
Open GitHub Copilot Chat in VS Code, switch to Agent mode, and select an LLM model, such as GPT-4.1 or GPT-4o. You should see the available tools from GibsonAI. Then enter a prompt like this:

"Create a database for a travel agency. It should include tables for destinations, bookings, users, and reviews. Each user can make bookings and write reviews. Each destination has a name, description, price, and rating."

GibsonAI reads your prompt, creates a new database project, and magically generates:
A complete relational schema
A visual Entity-Relationship Diagram (ERD)
Proper foreign key constraints
UUIDs, timestamps, and standard fields
A clean MySQL or Postgres structure

Step 4: Deploy Your Schema and Enable CRUD APIs
Go to the GibsonAI app, log in, and open your newly created project. There, you can review the schema. Now you can click "Deploy" to launch your schema. Alternatively, you can use Copilot chat to deploy the database. GibsonAI hosts the serverless MySQL database. Now you can get the database connection string and connect your existing app, or access the live CRUD APIs and use them in your app. You now have a working backend without writing a single SQL query. You can plug these APIs directly into your frontend or backend — no need to write REST controllers for typical CRUD operations.

GibsonAI also lets me share my database project schema with others. Feel free to clone the travel agency database I created for the demo: https://app.gibsonai.com/clone/rRZ4wD9HDCdHO

Step 5: Let Copilot Help You Build Around the API
Now that your schema and API are live, use GitHub Copilot to build UI components using React or any other frontend framework. GitHub Copilot + GibsonAI MCP = the fastest way to go from prompt to full-featured app.
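To give a feel for Step 4's "plug these APIs directly into your frontend or backend", here is a small hypothetical sketch of calling the generated CRUD endpoints from JavaScript. The base URL, paths, and API-key header are placeholders; check your GibsonAI project for the actual endpoint names and authentication details.

// Placeholder values; replace with the endpoint and credentials from your GibsonAI project.
const BASE_URL = "https://api.example-gibson-project.dev";
const headers = {
  "Content-Type": "application/json",
  "X-API-Key": process.env.GIBSON_API_KEY,
};

// Create a destination row via the generated CRUD API (hypothetical path).
const created = await fetch(`${BASE_URL}/v1/destinations`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    name: "Lisbon",
    description: "Coastal capital with great food",
    price: 850,
    rating: 4.7,
  }),
}).then((r) => r.json());

// List destinations and render them in your frontend.
const destinations = await fetch(`${BASE_URL}/v1/destinations`, { headers }).then((r) => r.json());
console.log(created, destinations);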
Final Thoughts
The future of development is not about using more AI-generated code. It's about writing fewer, smarter prompts — and letting AI handle the slow, repetitive, or painful tasks so you can focus fully on innovation. You can already boost your development workflow with GitHub Copilot Agent Mode. It provides a powerful set of tools that enable agents to run SQL queries, create tables, design schemas, import CSV files, and more. Give it a try. The next time you start a project, open VS Code, write a prompt, and let the database build itself. Want to learn more about MCP? See the MCP for Beginners resources from Microsoft.

Microsoft Build 2024: Essential Guide for AI Developers at Startups and Cloud-First Companies

Generative AI is advancing fast, with OpenAI's GPT-4o leading the way. GPT-4o boasts improved multilingual understanding, faster responses, lower costs, and real-time processing of text, audio, and images. This unlocks new generative AI (GenAI) use cases. Explore cutting-edge solutions like models, frameworks, vector databases, and LLM observability platforms. Born-in-the-cloud companies are at the forefront of this AI revolution. Be part of the future at Microsoft Build 2024!