Agents
Digital Deep Dive: Copilot Control System (CCS)
Join us for two days of insights, demos, and deep dives on the Copilot Control System (CCS)! Learn how to secure, manage, and analyze Microsoft 365 Copilot, Copilot Chat, Microsoft Copilot Studio, and agents across your organization using the Copilot Control System. This two-day digital skilling event kicks off with a guided overview of CCS—your framework for managing Copilot securely, at scale, and with clarity. Day 1 dives into security, governance, and preparing your environment for Copilot and agents. Day 2 shifts to management controls, agent lifecycle, reporting, and user enablement. We’ll close with key takeaways and resources to support your next steps.

What to Expect
- A breakdown of the Copilot Control System and how it applies across workloads
- Technical guidance for managing oversharing, insider risks, and web search settings
- Practical ways to build and secure enterprise-scale agents
- Lifecycle management approaches for Copilot agents and Copilot Studio makers
- Insights on measurement, analytics, and usage trends
- Guidance on enabling users and aligning AI adoption with organizational goals
- Live AMAs with Microsoft product team members throughout the event

Day 1: Foundations & Security – Tuesday, June 17, 2025 (PT)
Time | Session Title | Speaker(s) | Description
- 8:00–8:30 AM PT | Introduction to Copilot Control System | Ben Summers | Overview of the event structure, CCS components, and session navigation tips.
- 8:30–9:30 AM PT | Secure Microsoft 365 Copilot and agents | Sophie Ke, Dave Minasyan | Addressing oversharing risks using SAM and Purview.
- 9:30–10:30 AM PT | Prevent data loss and insider risks for Microsoft 365 Copilot | Erica Toelle | Using Microsoft Purview to mitigate insider threats and enforce protections.
- 10:30–11:00 AM PT | Understanding web search controls in Microsoft 365 Copilot | Alex Pozin, Suhel Parekh | Configuring and governing Copilot's web search behavior.
- 11:00 AM–12:00 PM PT | Build enterprise-scale agents securely | Mik Ferland | Best practices for secure agent lifecycle management and governance.

Day 2: Management & Adoption – Wednesday, June 18, 2025 (PT)
Time | Session Title | Speaker(s) | Description
- 8:00–9:00 AM PT | Copilot agent management and controls | James Bell, Ganesh Krishnamurthy | How to manage Copilot agents in the Microsoft 365 admin center.
- 9:00–10:00 AM PT | Empower Copilot Studio makers with enterprise-grade management controls | Asaf Tzuk | Governance, visibility, and cost management in Copilot Studio.
- 10:00–11:00 AM PT | Measure usage and impact of Copilot and agents | Mike Walsh, Samer Baroudi | Using Copilot Analytics to assess adoption, usage, and impact.
- 11:00 AM–12:00 PM PT | Practical guidance for AI and collaboration adoption | Karuana Gatimu | Real-world strategies for enabling collaboration and AI adoption.
- 12:00–12:15 PM PT | That’s a wrap! What’s next for your Copilot Control System journey | Efe Abugo | Key takeaways, resources, and how to continue learning after the event.

How to Participate
- Register for the Microsoft Tech Community using your email if you haven’t already. This allows you to post comments and ask questions.
- Visit each individual session page during its scheduled time to join the conversation. You can post your questions in the comments, and product team members will respond live during the AMA.
- Watch the session live or catch the recording on demand after the event.
- Keep the conversation going in the Microsoft 365 Copilot Tech Community discussion space after the sessions conclude.
It’s a great place to follow up, share what’s working, and connect with others exploring similar topics. Hope to see you there! Come ready to learn and ask our experts all of your burning questions!

Access session presentations
Looking for session materials or presentations? Visit the new Copilot Control System page on AMC to download resources and explore more content from the event.

Secure Microsoft 365 Copilot and agents: Practical steps for addressing oversharing concerns utilizing SAM and Purview
Transformative AI solutions can boost your business’s productivity—and with a few simple steps, you can unlock these advantages without jeopardizing important data. By taking a structured approach to data security and compliance, you can ensure these cutting-edge tools share information safely and appropriately to mitigate risks. Join us at this session to receive thorough technical guidance for preventing data oversharing within your organization. You’ll explore the common causes of oversharing—such as unrestricted privacy settings, broken permission inheritance, and mismanaged domain groups—and how you can securely realize the value of AI with Microsoft 365 Copilot.

In this session, you will learn:
- Immediate steps you can take to rapidly deploy and secure Copilot without the risk of oversharing
- Methods to ensure your organization is secure, including enhancing your data security stance and boosting visibility into security gaps
- Policies that reduce the potential for oversharing by lowering data volume and improving compliance

How do I participate?
No registration is required. Select "Add to calendar" to save the date, select "Attend" to receive event reminders, then join us live on June 17th ready to learn and ask questions! Feel free to post questions and comments below. You can post during the live session and/or in advance if the timing doesn't work for you.

Access session presentations
Looking for session materials or presentations? Visit the new Copilot Control System page on AMC to download resources and explore more content from the event. This session is part of the Digital Deep Dive: Copilot Control System (CCS). Add it to your calendar, select "Attend" for event reminders, and check out the other sessions! Each session has its own page where the session livestream and discussion space will be available at the start time. You will also be able to view sessions on demand after the event.

Create and share Copilot agents in SharePoint in a few clicks
Go behind the scenes to learn more about Copilot agents in SharePoint, powered by your content. What could be better? Copilot agents are built in Copilot Studio via our new, built-in lightweight agent builder experience in SharePoint. CJ and Karuana discuss the broader vision of Copilot agents and show you how to put this new capability to work within your SharePoint sites in Microsoft 365, including IT management details and licensing insights.

Check out the new Microsoft 365 Copilot agents in SharePoint adoption hub: https://aka.ms/SharePoint/agents. In addition, review the recent, related blog "Microsoft 365 Copilot Wave 2: AI Innovations in SharePoint and OneDrive" by Adam Harmetz and Jason Moore, and watch the short explainer video "Copilot agents in SharePoint," taken from the "Microsoft 365 Copilot: Wave 2" event broadcast with Satya Nadella and Jared Spataro.

Fueling the Agentic Web Revolution with NLWeb and PostgreSQL
We’re excited to announce that NLWeb (Natural Language Web), Microsoft’s open project for natural language interfaces on websites, now supports PostgreSQL. With this enhancement, developers can leverage PostgreSQL and NLWeb to transform any website into an AI-powered application or Model Context Protocol (MCP) server. This integration allows organizations to use a familiar, robust database as the foundation for conversational AI experiences, streamlining deployment and maximizing data security and scalability.

Soon, autonomous agents, not just human users, will consume and interpret website content, transforming how information is accessed and used online. At Microsoft //Build 2025, Microsoft introduced the era of the open agentic web: a new paradigm in which autonomous agents seamlessly interact across individual, organizational, team, and end-to-end business contexts. To realize this future, Microsoft announced the NLWeb project. NLWeb transforms any website into an AI-powered application with just a few lines of code by connecting it to an AI model and a knowledge base.

In this post, we’ll cover:
- What NLWeb is and how it works with vector databases
- How pgvector enables vector similarity search in PostgreSQL for NLWeb
- How to get started using NLWeb with Postgres

Let’s dive in and see how Postgres + NLWeb can redefine conversational web interfaces while keeping your data in a familiar, powerful database.

What is NLWeb? A Quick Overview of Conversational Web Interfaces
NLWeb is an open project developed by Microsoft to simplify adding conversational AI interfaces to websites. How NLWeb works under the hood:
- Processes existing website content published in semi-structured formats like Schema.org, RSS, and other data that websites already publish
- Embeds and indexes all the content in a vector store (e.g., PostgreSQL with pgvector)
- Routes user queries through several processes that handle natural language understanding, reranking, and retrieval
- Answers queries with an LLM

The result is a high-quality natural language interface on top of web data, giving developers the ability to let users "talk to" web data. By default, every NLWeb instance is also a Model Context Protocol (MCP) server, allowing websites to make their content discoverable and accessible to agents and other participants in the MCP ecosystem if they choose. Importantly, NLWeb is platform-agnostic and supports many major operating systems, AI models, and vector stores. The NLWeb project is modular by design, so developers can bring their own retrieval system, model APIs, and extensions.

NLWeb with PostgreSQL
PostgreSQL is now embedded into the NLWeb reference stack as a native retriever, creating a scalable and flexible path for deploying NLWeb instances using open-source infrastructure.

Retrieval Powered by pgvector
NLWeb leverages pgvector, a PostgreSQL extension for efficient vector similarity search, to handle natural language retrieval at scale. By integrating pgvector into the NLWeb stack, teams can eliminate the need for external vector databases. Web data stored in PostgreSQL becomes immediately searchable and usable for NLWeb experiences, streamlining infrastructure and enhancing security. PostgreSQL’s robust governance features and wide adoption align with NLWeb’s mission to enable conversational AI for any website or content platform.
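To make the retrieval step concrete, here is a minimal sketch of the kind of similarity query pgvector enables, written as a standalone Python snippet using psycopg. The documents table, its column names, the embedding dimension, and the embed() helper are illustrative assumptions, not NLWeb’s actual schema; the project’s Postgres retriever defines its own tables and queries.

```python
# Minimal sketch of a pgvector similarity search (illustrative only).
# Assumes a hypothetical table documents(id, name, url, embedding vector(1536));
# NLWeb's own Postgres retriever defines the real schema and queries.
import os
import psycopg


def embed(text: str) -> list[float]:
    """Placeholder: call your embedding model (e.g. text-embedding-3-small) here."""
    raise NotImplementedError


def top_matches(question: str, k: int = 5):
    # pgvector accepts vector literals like '[0.1,0.2,...]', so serialize the list.
    vec = "[" + ",".join(str(x) for x in embed(question)) + "]"
    # POSTGRES_CONNECTION_STRING matches the .env variable above; the password
    # may be supplied separately depending on your setup.
    with psycopg.connect(os.environ["POSTGRES_CONNECTION_STRING"]) as conn:
        return conn.execute(
            """
            SELECT id, name, url
            FROM documents
            ORDER BY embedding <=> %s::vector  -- <=> is pgvector's cosine distance
            LIMIT %s
            """,
            (vec, k),
        ).fetchall()


if __name__ == "__main__":
    for doc_id, name, url in top_matches("advances in vector search such as DiskANN"):
        print(doc_id, name, url)
```

The key idea is simply that the embedding column lives next to the rest of the row, so one ORDER BY over a distance operator replaces a round trip to a separate vector database.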
With pgvector retrieval built in, developers can confidently launch NLWeb instances on their own databases, with no additional infrastructure required.

Implementation example
We are going to use NLWeb and Postgres to create a conversational AI app and MCP server that lets us chat with content from the Talking Postgres with Claire Giordano podcast!

Prerequisites
- An active Azure account.
- Enable and configure the pgvector (vector) extension.
- Create an Azure AI Foundry project.
- Deploy the models gpt-4.1, gpt-4.1-mini, and text-embedding-3-small.
- Install Visual Studio Code.
- Install the Python extension.
- Install Python 3.11.x.
- Install the Azure CLI (latest version).

Getting started
All the code and sample datasets are available in this GitHub repository.

Step 1: Set up the NLWeb server
1. Clone or download the code from the repo.
   git clone https://github.com/microsoft/NLWeb
   cd NLWeb
2. Open a terminal, create a virtual Python environment, and activate it.
   python -m venv myenv
   source myenv/bin/activate   # Or on Windows: myenv\Scripts\activate
3. Go to the 'code/python' folder in NLWeb to install the dependencies.
   cd code/python
   pip install -r requirements.txt
4. Go to the project root folder in NLWeb and copy the .env.template file to a new .env file.
   cd ../../
   cp .env.template .env
5. In the .env file, update the API key you will use for your LLM endpoint of choice and update the Postgres connection string. For example:
   AZURE_OPENAI_ENDPOINT="https://TODO.openai.azure.com/"
   AZURE_OPENAI_API_KEY="<TODO>"
   # If using Postgres connection string
   POSTGRES_CONNECTION_STRING="postgresql://<HOST>:<PORT>/<DATABASE>?user=<USERNAME>&sslmode=require"
   POSTGRES_PASSWORD="<PASSWORD>"
6. Update your config files (located in the config folder) to make sure your preferred providers match your .env file. Three files may need changes:
   - config_llm.yaml: Update the first line to the LLM provider you set in the .env file. By default it is Azure OpenAI. You can also adjust the models you call here; by default, we assume gpt-4.1 and gpt-4.1-mini.
   - config_embedding.yaml: Update the first line to your preferred embedding provider. By default it is Azure OpenAI, using text-embedding-3-small.
   - config_retrieval.yaml: Update the first line to postgres. Set write_endpoint to postgres and, in the list of possible endpoints, set the postgres retrieval endpoint to enabled: 'true'.

Step 2: Initialize the Postgres server
Go to the 'code/python/misc' folder in NLWeb and run the Postgres initializer. NOTE: If you are using Azure Postgres Flexible Server, make sure the vector extension is allow-listed and enabled in the database.
   cd code/python/misc
   python postgres_load.py

Step 3: Ingest data from the Talking Postgres podcast
Now we will load some data in our local vector database to test with. Go to the 'code/python' folder in NLWeb and run the loader. The format of the command is as follows (make sure you are still in the 'python' folder when you run this):
   python -m data_loading.db_load <RSS URL> <site-name>
For the Talking Postgres with Claire Giordano podcast:
   python -m data_loading.db_load https://feeds.transistor.fm/talkingpostgres Talking-Postgres
(Optional) Check the documents table in your Postgres database to verify that all the data from the feed was uploaded.
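If you prefer a terminal check over a GUI client, a quick row count per site is enough to confirm the ingest worked. This is a rough sketch that assumes the loader writes to a table named documents with a site column; adjust the table and column names to whatever your NLWeb version actually creates.

```python
# Rough post-ingest sanity check (sketch; the table and column names are
# assumptions -- adjust to the schema your NLWeb version creates).
import os
import psycopg

conninfo = os.environ["POSTGRES_CONNECTION_STRING"]  # password may be set separately

with psycopg.connect(conninfo) as conn:
    total = conn.execute("SELECT count(*) FROM documents").fetchone()[0]
    per_site = conn.execute(
        "SELECT site, count(*) FROM documents GROUP BY site ORDER BY count(*) DESC"
    ).fetchall()

print(f"{total} rows ingested")
for site, n in per_site:
    print(f"  {site}: {n}")
```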
Test the NLWeb server
Start your NLWeb server (again from the 'python' folder):
   python app-file.py
Go to http://localhost:8000/ and start asking questions about the Talking Postgres with Claire Giordano podcast. You can try different modes:
- List mode, sample prompt: "I want to listen to something that talks about the advances in vector search such as DiskANN"
- Generate mode, sample prompt: "What did Shireesh Thota say about the future of Postgres?"

Running NLWeb with MCP
1. If you do not already have it, install MCP in your venv:
   pip install mcp
2. Next, configure your Claude MCP server. If you don't have the config file already, you can create it at the following locations:
   macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
   Windows: %APPDATA%\Claude\claude_desktop_config.json
   The default MCP JSON file needs to be modified as shown below.
   macOS example configuration:
   {
     "mcpServers": {
       "ask_nlw": {
         "command": "/Users/yourname/NLWeb/myenv/bin/python",
         "args": [
           "/Users/yourname/NLWeb/code/chatbot_interface.py",
           "--server",
           "http://localhost:8000",
           "--endpoint",
           "/mcp"
         ],
         "cwd": "/Users/yourname/NLWeb/code"
       }
     }
   }
   Windows example configuration:
   {
     "mcpServers": {
       "ask_nlw": {
         "command": "C:\\Users\\yourusername\\NLWeb\\myenv\\Scripts\\python",
         "args": [
           "C:\\Users\\yourusername\\NLWeb\\code\\chatbot_interface.py",
           "--server",
           "http://localhost:8000",
           "--endpoint",
           "/mcp"
         ],
         "cwd": "C:\\Users\\yourusername\\NLWeb\\code"
       }
     }
   }
   Note: For Windows paths, you need to use double backslashes (\\) to escape the backslash character in JSON.
3. Go to the 'code/python' folder in NLWeb, enter your virtual environment, and start your NLWeb local server. Make sure it is configured to access the data you would like to ask about from Claude.
   # On macOS
   source ../myenv/bin/activate
   python app-file.py
   # On Windows
   ..\myenv\Scripts\activate
   python app-file.py
4. Open Claude Desktop. It should ask you to trust the 'ask_nlw' external connection if it is configured correctly. After clicking yes and the welcome page appears, you should see 'ask_nlw' in the bottom right '+' options. Select it to start a query.
5. To query NLWeb, just type 'ask_nlw' in your prompt to Claude. You'll notice that you also get the full JSON script for your results. Remember, you must have your local NLWeb server started to use this option.

Learn More
- Vector Store in Azure Postgres Flexible Server
- Generative AI in Azure Postgres Flexible Server
- NLWeb GitHub repo, which includes a reference server for handling natural language queries and pgvector integration

How can I measure the quality of my agent's responses?
Welcome back to Agent Support—a developer advice column for those head-scratching moments when you’re building an AI agent! Each post answers a real question from the community with simple, practical guidance to help you build smarter agents. Today’s question comes from someone curious about measuring how well their agent responds:

💬 Dear Agent Support
My agent seems to be responding well—but I want to make sure I’m not just guessing. Ideally, I’d like a way to check how accurate or helpful its answers really are. How can I measure the quality of my agent’s responses?

🧠 What are Evaluations?
Evaluations are how we move from "this feels good" to "this performs well." They’re structured ways of checking how your agent is doing, based on specific goals you care about. At the simplest level, evaluations help answer:
- Did the agent actually answer the question?
- Was the output relevant and grounded in the right info?
- Was it easy to read, or did it ramble?
- Did it use the tool it was supposed to?
That might mean checking whether the model pulled the correct file in a retrieval task, whether it used the right tool when multiple are registered, or even something subjective, like whether the tone felt helpful or aligned with your brand.

🎯 Why Do We Do Evaluations?
When you're building an agent, it’s easy to rely on instinct. You run a prompt, glance at the response, and think: "Yeah, that sounds right." But what happens when you change a system prompt? Or upgrade the model? Or wire in a new tool? Without evaluations, there’s no way to know if things are getting better or quietly breaking. Evaluations help you:
- Catch regressions early: Maybe your new prompt is more detailed, but now the agent rambles. A structured evaluation can spot that before users do.
- Compare options: Trying out two different models? Testing a retrieval-augmented version vs. a base version? Evaluations give you a side-by-side look at which one performs better.
- Build trust in output quality: Whether you're handing this to a client, a customer, or just your future self, evaluations help you say, "Yes, I’ve tested this. Here’s how I know it works."
They also make debugging faster. If something’s off, a good evaluation setup helps you narrow down where it went wrong: Was the tool call incorrect? Was the retrieved content irrelevant? Did the prompt confuse the model? Ultimately, evaluations turn your agent into a system you can improve with intention, not guesswork.

⏳ When Should I Start Evaluating?
Short answer: sooner than you think! You don’t need a finished agent or a fancy framework to start evaluating. In fact, the earlier you begin, the easier it is to steer things in the right direction. Here’s a simple rule of thumb: if your agent is generating output, you can evaluate it. That could be:
- Manually checking if it answers the user’s question
- Spotting when it picks the wrong tool
- Comparing two prompt versions to see which sounds clearer
Even informal checks can reveal big issues early, before you’ve built too much around a flawed behavior. As your agent matures, you can add more structure:
- Create a small evaluation set with expected outputs
- Define categories you want to score (like fluency, groundedness, relevance)
- Run batch tests when you update a model
Think of it like writing tests for code. You don’t wait until the end; you build them alongside your system. That way, every change gets feedback fast. The key is momentum. Start light, then layer on depth as you go. The sketch below shows roughly what an early, lightweight check can look like.
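As a rough illustration (not part of the original article’s tooling), here is a minimal hand-rolled evaluation pass in Python. The evaluation cases, the keyword-presence check, and the run_agent stub are all illustrative assumptions; swap in however you actually invoke your agent and whatever criteria you care about.

```python
# Minimal sketch of a lightweight evaluation pass (illustrative only).
def run_agent(prompt: str) -> str:
    # Placeholder: replace with a real call to your agent.
    return "You might enjoy Arrival, a thoughtful science fiction film."


# Tiny, hand-written evaluation set: a prompt plus a crude keyword expectation.
EVAL_SET = [
    {"prompt": "What's a good sci-fi movie to watch?", "expect_any": ["sci-fi", "science fiction"]},
    {"prompt": "Recommend a comedy to watch tonight.", "expect_any": ["comedy"]},
]


def evaluate() -> None:
    passed = 0
    for case in EVAL_SET:
        response = run_agent(case["prompt"])
        # Crude check: did the response mention at least one expected term?
        ok = any(term.lower() in response.lower() for term in case["expect_any"])
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case['prompt']}")
    print(f"{passed}/{len(EVAL_SET)} checks passed")


if __name__ == "__main__":
    evaluate()
```

Even a check this crude gives you a pass/fail signal you can rerun after every prompt or model change; the point is the habit, not the sophistication.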
You’ll save yourself debugging pain down the line, and build an agent that gets better over time.

📊 AI Toolkit
You don’t need a full evaluation pipeline or scoring rubric on day one. In fact, the best place to begin is with a simple gut check—run a few test prompts and decide whether you like the agent’s response. And if you don’t have a dataset handy, no worries! With the AI Toolkit, you can both generate datasets and keep track of your manual evaluations with the Agent Builder’s Evaluation feature.

Sidebar: If you’re curious about deeper eval workflows, like using AI to assist in judging your agent’s output against a set of evaluators such as fluency, relevance, tool call accuracy, or even custom evaluators, we’ll cover that in a future edition of Agent Support. For now, let’s keep it simple!

Here’s how to do it:
1. Open the Agent Builder from the AI Toolkit panel in Visual Studio Code.
2. Click the + New Agent button and provide a name for your agent.
3. Select a model for your agent.
4. Within the System Prompt section, enter: You recommend a movie based on the user’s favorite genre.
5. Within the User Prompt section, enter: What’s a good {{genre}} movie to watch?
6. On the right side of the Agent Builder, select the Evaluation tab.
7. Click the Generate Data icon (the first icon above the table).
8. For the Rows of data to generate field, increase the total to 5.
9. Click Generate.
You’re now ready to start evaluating the agent responses!

🧪 Test Before You Build
You can run the rows of data either individually or in bulk. I’d suggest starting with a single run to get an initial feel for how the feature works.
1. When you click Run, the agent’s response will appear in the response column.
2. Review the output. In the Manual Evaluation column, select either thumbs up or thumbs down.
3. Continue to run the other rows, or even add your own row and pass in a value for {{genre}}.
Want to share the evaluation run and results with a colleague? Click the Export icon to save the run as a .JSONL file. You’ve just taken the first step toward building a more structured, reliable process for evaluating your agent’s responses!

🔁 Recap
Here’s a quick rundown of what we covered:
- Evaluations help you measure quality and consistency in your agent’s responses.
- They’re useful for debugging, comparing, and iterating.
- Start early—even rough checks can guide better decisions.
- The AI Toolkit makes it easier to run and track evaluations right inside your workflow.

📺 Want to Go Deeper?
Check out my previous live stream for AgentHack, Evaluating Agents, where I explore concepts and methodologies for evaluating generative AI applications. Although I focus on leveraging the Azure AI Evaluation SDK, it’s still an invaluable intro to learning more about evaluations.

The Evaluate and Improve the Quality and Safety of your AI Applications lab from Microsoft Build 2025 provides a comprehensive self-guided introduction to getting started with evaluations. You’ll learn what each evaluator means, how to analyze the scores, and why observability matters—plus how to use telemetry data locally or in the cloud to assess and debug your app’s performance!
👉 Explore the lab: https://github.com/microsoft/BUILD25-LAB334/

And for all your general AI and AI agent questions, join us in the Azure AI Foundry Discord! You can find me hanging out there answering your questions about the AI Toolkit. I’m looking forward to chatting with you there!
Whether you’re debugging a tool call, comparing prompt versions, or prepping for production, evaluations are how you turn responses from plausible to dependable.