Public Sector Blog

Creating an AI Policy Analysis Copilot

TimMeyers, Microsoft
Aug 11, 2025

Post 3 of 3 in the “AI + Policy Analysis” Series 

In April 2025, the White House Office of Science and Technology Policy released over 10,000 public comments in response to its AI Action Plan. These comments—ranging from a few words to over 40,000—offer a rare and powerful snapshot of how Americans feel about the future of artificial intelligence. 

But how do you make sense of 4.5 million words of diverse, opinionated, unstructured text? That’s where Gen AI comes in. 

This blog series is for data scientists, software developers, and government officials—anyone looking to use AI not just for insight, but for efficiency. Whether you're analyzing public feedback, internal reports, or customer input, the ability to turn massive volumes of text into actionable insight—fast—is a game-changer. 

In the first post of this series, we explored how Gen AI can help us listen at scale—transforming over 10,000 public comments on the White House AI Action Plan into structured, scored summaries using LLMs and Azure Functions. In the second post, we built a knowledge graph using Microsoft’s GraphRAG to connect ideas and uncover deeper insights.   

In this final post, I am joined by my colleague Todd Uhl, an AI Business Applications Expert with deep experience in Copilot and Copilot Studio. While I approached this project from a pro-code developer perspective, Todd brought a low-code lens—allowing us to meet in the middle and build a solution that’s both technically robust and accessible to a broader audience. Together, we move from connecting to empowering. We’ll show how to make these insights available to users—right inside the tools they use every day. 

 

Case study and prereleased product disclaimers: This document is for informational purposes only. Some information relates to pre-released product which may be substantially modified before it’s commercially released. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY AND WITH RESPECT TO THE INFORMATION PROVIDED HERE. 

 

📣 Catch Up on the Entire Series

Explore how generative AI is transforming public sector insights—from listening at scale to shaping policy with precision.

Architecture Overview 

Once I had the knowledge graph in place, the next challenge was making it usable—by real people, in real workflows. Analysts and policymakers don’t want to comb through graphs or JSON—they want answers, summaries, and reports they can act on. 

To bridge that gap, we built a lightweight server, hosted in an Azure Container App, that exposes a set of tools for querying and retrieving insights. These tools are designed to be called by LLM agents like Microsoft Copilot, enabling users to interact with the graph through natural language. 

 

System architecture for surfacing GraphRAG insights via a standards-based tool server and Copilot.

The server exposes multiple tools that can be discovered and consumed by LLMs, but this post focuses on two: 

  • aiap_report_request: Accepts a user’s query and initiates a report generation process. 
  • aiap_get_report: Allows the user to retrieve the completed report once it’s ready. 

This decoupled design is essential. GraphRAG query speeds can vary, from seconds to several minutes, depending on complexity and configuration. By separating request and retrieval, we ensure responsiveness and scalability. 
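In practice, the calling agent follows a simple request-then-poll loop. Here is a minimal Python sketch of that flow, treating the two tools as plain functions; the response conventions (the "request_id:" prefix and the "still processing" status) are assumptions for illustration, not the service's documented wire format.

import time

def fetch_policy_report(query: str) -> str:
    # Hypothetical helper: aiap_report_request and aiap_get_report stand in
    # for the MCP tool calls the agent makes.
    response = aiap_report_request(query)  # fast queries return the report directly
    if "request_id:" not in response:  # assumed convention for deferred reports
        return response
    request_id = response.split("request_id:")[-1].strip()
    while True:
        time.sleep(15)  # give the GraphRAG query time to finish between checks
        report = aiap_get_report(request_id)
        if "still processing" not in report.lower():  # assumed status string
            return report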

Why we used an MCP Server 

The server we built follows the Model Context Protocol (MCP)—an emerging open standard for LLM tool portability. MCP defines a consistent way to expose tools so that any LLM agent (Copilot, ChatGPT, OSS agents, etc.) can discover and invoke them in a predictable way. We chose MCP for three reasons: 

  • Tool Portability – We can define tools once and use them across multiple LLM ecosystems. Whether the user is in Microsoft Copilot or a custom agent, the same tool definitions and semantics apply. 
  • Asynchronous Workflows – MCP supports asynchronous tool patterns natively. That’s perfect for GraphRAG, where some queries take seconds and others take minutes. 
  • Future Flexibility – MCP delivers a future-ready foundation. We can evolve this system to support Teams bots, Power BI dashboards, or external-facing portals—without rewriting the tool layer. 

Step 1: Exposing the tools with an MCP Server 

To make the insights system accessible to AI agents, we built the server using FastMCP 2.0—a Pythonic framework for building MCP servers quickly and cleanly. FastMCP handles all the boilerplate and protocol details, so we could focus on defining the tools and wiring them to the backend. 

Here’s how we registered the two tools that power the experience: 

 

# MCP Server Tools (FastMCP 2.0)
from fastmcp import FastMCP

mcp = FastMCP("AI Policy Analyst MCP Server")

@mcp.tool(
    name="aiap_report_request",
    description="Submit an AI policy response report request for analysis and processing",
)
def aiap_report_request_mcp(query: str = "Analyze AI policy responses") -> str:
    return aiap_report_request(query)  # calls out to Lazy GraphRAG packages

@mcp.tool(
    name="aiap_get_report",
    description="Get the status and result of an AI policy response report request",
)
def aiap_get_report_mcp(request_id: str) -> str:
    return aiap_get_report(request_id)  # calls out to Lazy GraphRAG packages

if __name__ == "__main__":
    # serve over FastMCP's streamable HTTP transport on the container's port
    mcp.run(transport="streamable-http", host="0.0.0.0", port=8080)

 

These tools wrap calls to the LazyGraphRAG package, which handles the actual report generation and retrieval using the knowledge graph. The aiap_report_request tool submits a query to the backend, and if the report can be generated in under 10 seconds, it returns the result immediately. Otherwise, it returns a request ID that can be used later with aiap_get_report. 
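The post focuses on the tool layer, so here is a minimal, assumed sketch of how that fast-path-or-request-ID behavior could be implemented behind the tools, using a thread pool and an in-memory store. run_graphrag_query and the exact response strings are hypothetical stand-ins, not the actual LazyGraphRAG wiring.

import uuid
from concurrent.futures import Future, ThreadPoolExecutor, TimeoutError as FutureTimeout

_executor = ThreadPoolExecutor(max_workers=4)
_pending: dict[str, Future] = {}

def run_graphrag_query(query: str) -> str:
    # Hypothetical stand-in for the LazyGraphRAG report generation call.
    raise NotImplementedError

def aiap_report_request(query: str) -> str:
    future = _executor.submit(run_graphrag_query, query)
    try:
        return future.result(timeout=10)  # fast path: report ready within 10 seconds
    except FutureTimeout:
        request_id = str(uuid.uuid4())
        _pending[request_id] = future  # defer: hand back an ID to poll with
        return f"Report in progress. request_id: {request_id}"

def aiap_get_report(request_id: str) -> str:
    future = _pending.get(request_id)
    if future is None:
        return "Unknown request_id."
    if not future.done():
        return "Report still processing."
    return future.result()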

FastMCP made this incredibly simple. I didn’t need to write any routing logic or OpenAPI definitions by hand: just decorate the functions and run the server. 

Step 2: Deploying to the cloud 

Once our FastMCP server was working locally, we needed to make it accessible for use by AI agents. We chose to containerize the server and deploy it using Azure Container Apps with the Azure Developer CLI (azd). 

First, we created a quick Dockerfile to generate a container image for the server. We used a slim Python 3.11 base image and created a non-root user for security. The Dockerfile installs dependencies, copies in the server and app code, and exposes port 8080. 

# simplified Dockerfile example 

FROM python:3.11.8-slim-bookworm 
WORKDIR /app 
COPY requirements.txt . 
RUN pip install --no-cache-dir -r requirements.txt 
COPY .env .env 
COPY server.py . 
COPY app/ ./app/ 
# create the non-root user referenced below; USER fails without it 
RUN useradd --create-home mcpuser 
USER mcpuser 
EXPOSE 8080 
CMD ["python", "server.py"] 

Then, we used azd with Bicep templates to provision everything: a container app, container registry, managed identity, and a dedicated workload profile for performance. Using azd allowed us to build and push the container image, provision all the cloud infrastructure, and deploy the app – in a single step. Within minutes, the MCP server was live and ready to serve the tools for use by AI Agents. 
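For context, the azd configuration behind this step is small. A minimal azure.yaml along these lines (the service and project names here are illustrative, not the post's actual files) tells azd to build the Dockerfile and target Azure Container Apps:

# azure.yaml (illustrative sketch)
name: ai-policy-mcp
services:
  mcp-server:
    project: .
    language: py
    host: containerapp
    docker:
      path: ./Dockerfile

With that in place, running azd up builds the image, provisions the Bicep-defined resources, and deploys the container app in one step.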

Step 3: Integrating with Microsoft 365 Copilot & Teams 

With the MCP server live, we followed the published instructions to integrate it with a Copilot Custom Agent. We created a simple plugin manifest to define a Custom Connector and added the MCP Server as a tool for the agent. 

swagger: '2.0' 
info: 
  title: AI Policy Analyst MCP Server 
  description: >- 
    This MCP Server provides tools to generate insights from the 
    public’s responses to the AI Action Plan. 
  version: 1.0.0 
host: aipolicymcpapp.<unique-id-and-location>.azurecontainerapps.io 
basePath: /mcp/ 
schemes: 
  - https 
paths: 
  /: 
    post: 
      summary: AI Policy Analyst MCP Server 
      x-ms-agentic-protocol: mcp-streamable-1.0 
      operationId: InvokeServer 
      responses: 
        '200': 
          description: Immediate Response 

 

Copilot Studio’s low-code agent definition UI makes setup very simple. The chat interface guides you through the bulk of the configuration options. 

 

Copilot Studio: Creating a custom "AI Policy Analyst Agent"

 

 

Using the plugin manifest, we were able to add our MCP server as a tool for use by the agent. Additionally, because long-running GraphRAG queries complete asynchronously, we added an “Add Delay” tool that the agent could use to wait between report status checks. 

 

Adding the MCP Server as a tool available to the custom agent. The OpenAPI (Swagger) spec makes configuring the custom connection quick.

 

Now that our custom agent had access to the knowledge graph, we were ready to test. Copilot Studio’s native testing and Activity map interface gives a quick view of the actions the agent takes to respond to a request. 

 

Copilot Studio's testing and Activity map brings visibility to the agent's thought process.

 
With our agent behaving as intended, we could easily publish it and make it available to users across Microsoft 365 and Teams. From the Copilot Studio “Channels” configuration page, we could customize the appearance (icon, color, description) and select which channels to publish to. After approval by our environment administrator, our AI Policy Analyst Agent was ready to be shared with our organization’s users, right in the tools they use every day. 

 

Example custom Copilot deployed in the M365 Copilot interface.
Example custom Copilot deployed in the Microsoft Teams interface.

 

Bonus: Using our MCP Server in other MCP clients 

While Microsoft 365 Copilot was our primary target, one of the biggest advantages of using MCP is portability. We were able to use the same MCP Server in other clients with zero changes to the tool definitions. All we had to do was point the client (or agent) to our MCP Server. 

We tested multiple clients and SDKs, including GitHub Copilot, AI Foundry Agent Service, Semantic Kernel, Autogen, and Claude Desktop. 
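For example, FastMCP's own Python client can exercise the deployed server in a few lines. This is a sketch assuming FastMCP 2.x's client API, pointed at the placeholder host from the manifest above:

import asyncio
from fastmcp import Client

async def main():
    # Any MCP client can connect; only the server URL changes per client.
    url = "https://aipolicymcpapp.<unique-id-and-location>.azurecontainerapps.io/mcp/"
    async with Client(url) as client:
        tools = await client.list_tools()
        print([tool.name for tool in tools])  # expect the two aiap_* tools
        result = await client.call_tool(
            "aiap_report_request",
            {"query": "What do commenters say about open-source AI models?"},
        )
        print(result)

asyncio.run(main())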

Wrapping Up the Series 

Across this three-part series, I’ve taken 10,000+ public comments on the White House AI Action Plan and turned them into actionable insight. 

In Blog Post 1, I built a scalable pipeline using Gen AI and Azure Functions to process and summarize the data. In Blog Post 2, I constructed a metadata-rich knowledge graph using GraphRAG to surface deeper patterns and themes. 

And now, in Blog Post 3, we exposed those insights to users, embedding them into Microsoft 365 Copilot and Teams via a FastMCP server. Whereas in Blog Post 1 I used Copilot to generate exploratory questions, a real policy analyst or decision-maker can now interact with the system using their own expertise and experience, asking the questions that matter most to them, in their own words. 

This project has been a journey from raw text to real answers. From listening at scale to empowering action. And while the tools and technology can be exciting, the real impact comes from making public feedback accessible to the people who need it most. 

We hope this series inspires others to explore how Gen AI can turn public input into public impact. 

Key Takeaways 

  • Exposed GraphRAG insights using a standards-based MCP server. 
  • Implemented two tools: aiap_report_request and aiap_get_report. 
  • Integrated these tools into Microsoft Copilot, enabling users to generate and retrieve reports directly in their workflow. 
  • Deployed the system using Azure Container Apps and azd, making it scalable and portable. 