Copilot Control System innovations in the Microsoft 365 Copilot Wave 2 Spring release
Today's Microsoft 365 Copilot Wave 2 Spring release highlights new capabilities for the new era of human-agent collaboration. Teams are now empowered to address unique business needs by creating and customizing secure, enterprise-grade agents faster and more easily than ever, and as an IT pro you're probably wondering how you're going to manage all of that. We hear you, and we're excited to share new capabilities in the Copilot Control System (CCS) that help IT administrators and security professionals tackle the challenges of cost management, security and governance, and lifecycle management, so you can effectively secure, measure, and manage the use of Microsoft 365 Copilot and agents to maximize their impact.

Security & Governance

Protecting your organization's data from oversharing, data loss, and insider risks is crucial for effective agent deployment. Apps and agents in Microsoft Purview Data Security Posture Management for AI (public preview available June 2025) empowers administrators to identify and manage potential data security risks associated with agents. It allows them to understand any sensitive data accessed by agents, identify gaps in data security policies, and take action to address those gaps, all from a single dashboard.

[Screenshot: Apps and agents in the Data Security Posture Management for AI dashboard]

Admins can access comprehensive insights about a specific agent and see how the agent is protected by Purview data security policies. The detailed agent view indicates which policies are applied to the agent and lets admins create a policy directly from this view to address any coverage gaps. Here are some examples of insights related to coverage gaps:

Alerts for risky agent interactions: Shows whether the agent has received prompts containing sensitive data or risky information.
Control inappropriate use of AI agents: Detects any inappropriate or unethical use of the agent.
Block sensitive data from AI agent processing: Displays the confidentiality level of data the agent interacts with, providing an option to block certain levels from being accessed by the agent.

Additional risk-related insights are also available from the detailed agent view, giving admins the visibility to make informed decisions about how agents should interact with the organization's data.

[Screenshot: AI agent details view]

Measurement & Reporting

Copilot Analytics is our set of reporting capabilities designed to help organizations measure the impact of AI on their business. We've created a set of reports to help you understand agent usage and maximize agents' effectiveness: reports for IT admins to manage agents across the organization, and leadership-level reporting to assess agent productivity and impact.

Understanding agent usage and consumption is fundamental to managing potential sprawl and the associated costs. IT and AI admins need visibility into which agents people are using, and how, to manage agents effectively. The Microsoft 365 admin center offers rich, built-in agent usage and consumption reporting that enables admins to monitor agent usage across their organization, including total active users, segmentation by Microsoft 365 Copilot licensed and unlicensed users, total active agents by publisher, and agent- and user-level details.

The new Message consumption report (targeted for public preview in May 2025) offers visibility into metered consumption of agents in Microsoft 365 Copilot Chat, including total messages consumed, cumulative and daily time series, and user- and agent-level details. It shows how many agent messages are being consumed, helping you understand and manage the associated costs.
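To make those metrics concrete, here is a small sketch of how total, daily, and cumulative message counts fall out of per-interaction records. The record layout below is invented for illustration and is not the report's actual schema:

```python
from collections import Counter
from datetime import date

# Hypothetical per-interaction records; the real report's schema may differ.
events = [
    {"user": "ada", "agent": "HR Helper", "day": date(2025, 5, 1), "messages": 4},
    {"user": "ben", "agent": "HR Helper", "day": date(2025, 5, 1), "messages": 2},
    {"user": "ada", "agent": "IT Triage", "day": date(2025, 5, 2), "messages": 5},
]

def consumption_summary(events):
    """Total messages consumed, plus daily and cumulative time series."""
    daily = Counter()
    for e in events:
        daily[e["day"]] += e["messages"]
    cumulative, running = {}, 0
    for d in sorted(daily):
        running += daily[d]
        cumulative[d] = running
    return {"total": running, "daily": dict(daily), "cumulative": cumulative}

summary = consumption_summary(events)
print(summary["total"])       # total messages consumed
print(summary["daily"])       # daily series
print(summary["cumulative"])  # cumulative series
```

The same aggregation shape (total, per-day, running sum) is what the report's charts present interactively.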
The Agent usage report (targeted for public preview in June 2025) gives admins the ability to review and understand overall agent usage in Microsoft 365 Copilot Chat, including total active users, segmentation by Microsoft 365 Copilot licensed and unlicensed users, total active users and active agents by publisher, and agent- and user-level details.

[Screenshot: Agents report in the Microsoft 365 admin center]

Both the Message consumption report and the Agent usage report provide powerful insights to inform IT decision making with regard to agent deployment and adoption. Complementing the tools available in the Microsoft 365 admin center, the new Copilot Studio Agent Report (now available in public preview) in Viva Insights provides leaders and IT pros with a comprehensive view of Copilot Studio agent usage, performance, and value. In June 2025 we're adding a new capability to this report: customizable business impact reporting that correlates agent usage with your specific business metrics, so you can, for example, see how your support agents reduce ticket creation or how your sales agents unlock bigger opportunities, giving you the ability to measure AI ROI within your organization. For more details on these reports, please visit our blog on agent reporting in Copilot Analytics.

Management Controls

The critical third step in addressing the challenges of cost management and security is effective lifecycle management. The Microsoft 365 admin center provides admins with the tools they need to manage agent deployment; enable, disable, or block agents for specific users or groups; and ensure that agents are used efficiently and in alignment with organizational needs. Today, agent management is available within Integrated apps; it will soon be made available in a dedicated management pane within the Copilot Control System.
New updates to the agent inventory management layout and workflows (targeted for release in May CY2025) make it easier to see the critical information you need to manage your agent and connector inventory. Agent management (available Q3 CY25) enables admins to easily make updates and changes to the list of users approved to create and use agents. Similarly, you can see any users who haven't been approved to use your team's agents and quickly decide whether to grant them access.

[Screenshot: Copilot settings in the Microsoft 365 admin center showing a control to enable/disable users or groups to install and create agents in Microsoft 365 Copilot Chat]

With Shared Agent Inventory Management (targeted for release May CY2025), admins can view reports and take additional actions on the insights, such as blocking or unblocking agents as appropriate. Admins can also export their agent inventory to CSV and manage agents via PowerShell.

[Screenshot: The ability to block an agent for all users]

These latest capabilities equip IT administrators and security professionals with the tools you need to support your organization's AI-powered transformation. We look forward to continuing to provide insights and controls across the Copilot Control System that empower you to take informed actions and create a managed, trusted environment in which agent creators and agent users alike can thrive. To learn more about the Copilot Control System, watch the demo or read the announcement blog.

Build AI Agents with Azure Database for PostgreSQL and Azure AI Agent Service
Introduction

AI agents are revolutionizing how applications interact with data by combining large language models (LLMs) with external tools and databases. This blog will show you how to combine Azure Database for PostgreSQL with Azure AI Agent Service to create intelligent AI agents that can search and analyze your data. We'll use a legal research assistant as our example and walk through setup, implementation, and testing. With just a few hours of work, you can build an AI solution that would have taken weeks of traditional development.

Why AI Agents Matter

AI agents can improve productivity by handling repetitive, time-consuming tasks. They can transform how businesses interact with their data by automating complex workflows, providing more accurate information retrieval, and enabling natural language interfaces to databases.

What are AI agents?

AI agents go beyond simple chatbots by combining large language models (LLMs) with external tools and databases. Unlike standalone LLMs or standard RAG systems, AI agents can:

Plan: Break down complex tasks into smaller, sequential steps.
Use Tools: Leverage APIs, code execution, and search systems to gather information or perform actions.
Perceive: Understand and process inputs from various data sources.
Remember: Store and recall previous interactions for better decision-making.

By connecting AI agents to databases like Azure Database for PostgreSQL, agents can deliver more accurate, context-aware responses based on your data. AI agents extend beyond basic human conversation to carry out tasks based on natural language. These tasks traditionally required coded logic; agents, however, can plan the tasks needed to execute based on user-provided context. Agents can be implemented using various GenAI frameworks, including LangChain, LangGraph, LlamaIndex, and Semantic Kernel. All of these frameworks support using Azure Database for PostgreSQL as a tool.
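Before diving into the Azure specifics, the plan-and-use-tools loop is worth picturing in miniature. The sketch below is purely conceptual: the stub tools and the hard-coded plan stand in for what an LLM planner and the Azure AI Agent Service runtime actually do.

```python
def search_cases(query: str) -> str:
    """Stub tool: a real agent would run a database search here."""
    return f"3 cases found for '{query}'"

def summarize(text: str) -> str:
    """Stub tool: a real agent would call an LLM here."""
    return f"Summary of: {text}"

# The runtime exposes tools by name; the planner decides which to call.
TOOLS = {"search_cases": search_cases, "summarize": summarize}

def run_agent(plan):
    """Execute a planned sequence of (tool_name, argument) steps.

    A step whose argument is None receives the previous step's output,
    mimicking how agents chain tool results together.
    """
    result = None
    for tool_name, arg in plan:
        result = TOOLS[tool_name](arg if arg is not None else result)
    return result

# A plan a model might produce for "find water-leak precedents, then summarize"
answer = run_agent([("search_cases", "water leak"), ("summarize", None)])
print(answer)  # Summary of: 3 cases found for 'water leak'
```

In the real service, the "plan" is produced dynamically by the model from the user's message and the registered tool descriptions, rather than being written out by hand.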
This solution uses Azure AI Agent Service for agent planning, tool usage, and perception, while using Azure Database for PostgreSQL as a tool for vector database and semantic search capabilities.

Real-World Use Case: Legal Research Assistant

In this tutorial, we'll build an AI agent that helps legal teams research relevant cases to support their clients in Washington state. Our agent will:

Accept natural language queries about legal situations.
Use vector search in Azure Database for PostgreSQL to find relevant case precedents.
Analyze and summarize the findings in a format useful for legal professionals.

Prerequisites

Azure Resources:
An active Azure account.
An Azure Database for PostgreSQL flexible server instance running PG 14 or higher, with the pg_vector and azure_ai extensions enabled.
An Azure AI Foundry project.
A deployed Azure gpt-4o-mini endpoint.
A deployed Azure text-embedding-3-small endpoint.

Local Setup:
Install Visual Studio Code.
Install the Python extension.
Install Python 3.11.x.
Install the Azure CLI (latest version).

Project Implementation

All the code and sample datasets are available in this GitHub repository.

Step 1: Set Up Vector Search in Azure Database for PostgreSQL

First, we'll prepare our database to store and search legal case data using vector embeddings.

Environment setup, if using macOS / bash:

```shell
python -m venv .pg-azure-ai
source .pg-azure-ai/bin/activate
pip install -r requirements.txt
```

Windows / PowerShell:

```shell
python -m venv .pg-azure-ai
.pg-azure-ai\Scripts\Activate.ps1
pip install -r requirements.txt
```

Windows / cmd.exe:

```shell
python -m venv .pg-azure-ai
.pg-azure-ai\Scripts\activate.bat
pip install -r requirements.txt
```

Configure environment variables by creating a .env file with your credentials:

```
AZURE_OPENAI_API_KEY=""
AZURE_OPENAI_ENDPOINT=""
EMBEDDING_MODEL_NAME=""
AZURE_PG_CONNECTION=""
```

Load documents and vectors

The Python file load_data/main.py serves as the central entry point for loading data into Azure Database for PostgreSQL.
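Before running the loader, it can save a debugging round-trip to confirm the .env file is complete. This optional preflight sketch is not part of the sample repository; it assumes the simple KEY="value" format shown above (a library such as python-dotenv would normally do this parsing):

```python
import os

# The four settings the tutorial's .env file defines.
REQUIRED = [
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "EMBEDDING_MODEL_NAME",
    "AZURE_PG_CONNECTION",
]

def load_env(path=".env"):
    """Parse simple KEY="value" lines into os.environ.

    Deliberately minimal: skips comments and blank lines, and does not
    handle multi-line values or escape sequences.
    """
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip().strip('"'))

def missing_vars():
    """Names of required settings that are still unset or empty."""
    return [name for name in REQUIRED if not os.environ.get(name)]

load_env()
print("Missing settings:", missing_vars())
```

An empty list means every credential the loader needs is in place.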
This code processes the sample cases data, including information about cases in Washington. High-level details of main.py:

Database setup and table creation: Creates the necessary extensions, sets up the OpenAI API settings, and manages database tables by dropping existing ones and creating new ones for storing case data.
Data ingestion: Reads data from a CSV file and inserts it into a temporary table, then processes and transfers this data into the main cases table.
Embedding generation: Adds a new column for embeddings in the cases table and generates embeddings for case opinions using OpenAI's API, storing them in the new column. The embedding process will take roughly 3-5 minutes.

To start the data loading process, run the following command from the load_data directory:

```
python main.py
```

Here's the output of main.py:

```
Extensions created successfully
OpenAI connection established successfully
Cases table created successfully
Temp cases table created successfully
Data loaded into temp_cases_data table successfully
Data loaded into cases table successfully
Adding Embeddings, this will take a while around 3-5 mins...
Embeddings added successfully
All Data loaded successfully!
```

Step 2: Create a Postgres Tool for the Agent

In this step we configure AI agent tools to retrieve data from Postgres, and then use the Azure AI Agent Service SDK to connect the AI agent to the Postgres database.

Define a function for your agent to call, describing its structure and any required parameters in a docstring. Include all your function definitions in a single file, legal_agent_tools.py, which you can then import into your main script.

```python
import json
from datetime import datetime

import pandas as pd
from sqlalchemy import create_engine

# CONN_STR is the Postgres connection string (e.g. AZURE_PG_CONNECTION from .env)

def vector_search_cases(vector_search_query: str, start_date: str = "1911-01-01",
                        end_date: str = "2025-12-31", limit: int = 10) -> str:
    """
    Fetches case information in Washington State for the specified query.

    :param vector_search_query: The query to fetch cases for, specifically in Washington.
    :type vector_search_query: str
    :param start_date: The start date for the search, defaults to "1911-01-01".
    :type start_date: str, optional
    :param end_date: The end date for the search, defaults to "2025-12-31".
    :type end_date: str, optional
    :param limit: The maximum number of cases to fetch, defaults to 10.
    :type limit: int, optional
    :return: Case information as a JSON string.
    :rtype: str
    """
    db = create_engine(CONN_STR)

    query = """
        SELECT id, name, opinion,
               opinions_vector <=> azure_openai.create_embeddings(
                   'text-embedding-3-small', %s)::vector AS similarity
        FROM cases
        WHERE decision_date BETWEEN %s AND %s
        ORDER BY similarity
        LIMIT %s;
    """

    # Fetch case information from the database
    df = pd.read_sql(query, db, params=(
        vector_search_query,
        datetime.strptime(start_date, "%Y-%m-%d"),
        datetime.strptime(end_date, "%Y-%m-%d"),
        limit,
    ))
    cases_json = json.dumps(df.to_json(orient="records"))
    return cases_json
```

Step 3: Create and Configure the AI Agent with Postgres

Now we'll set up the AI agent and integrate it with our PostgreSQL tool. The Python file src/simple_postgres_and_ai_agent.py serves as the central entry point for creating and using your agent. High-level details of simple_postgres_and_ai_agent.py:

Create an agent: Initializes the agent in your Azure AI project with a specific model.
Add the Postgres tool: During agent initialization, the Postgres tool for doing vector search on your Postgres database is added.
Create a thread: Sets up a communication thread, used to send messages to the agent to process.
Run the agent and call the Postgres tool: Processes the user's query using the agent and tools. The agent can plan which tools to use to get the correct answer. In this use case, the agent calls the Postgres tool, based on the function signature and docstring, to do a vector search and retrieve the relevant data to answer the question.
Display the agent's response: Outputs the agent's response to the user's query.
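In the SQL above, `<=>` is pgvector's cosine-distance operator: it returns 0 for vectors pointing the same way and up to 2 for opposite ones, so ordering by it ascending puts the closest matches first. Its arithmetic can be reproduced in a few lines of plain Python. The toy 3-dimensional vectors below are made up for illustration (real text-embedding-3-small vectors have 1536 dimensions); the case names are borrowed from the sample output later in this post.

```python
import math

def cosine_distance(a, b):
    """pgvector's <=> operator: 1 minus the cosine similarity of two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

query_vec = [1.0, 0.0, 0.0]  # in practice, from azure_openai.create_embeddings
cases = {
    "Pham v. Corbett": [0.9, 0.1, 0.0],   # nearly parallel to the query
    "Hoover v. Warner": [0.0, 1.0, 0.0],  # orthogonal to the query
}

# ORDER BY similarity ASC: smallest distance first
ranked = sorted(cases, key=lambda name: cosine_distance(query_vec, cases[name]))
print(ranked)  # ['Pham v. Corbett', 'Hoover v. Warner']
```

The database does exactly this comparison (with an index to avoid scanning every row) between the stored opinion embeddings and the embedding of the user's query.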
Find the Project Connection String in Azure AI Foundry: in your Azure AI Foundry project, you will find your Project Connection String on the Overview page of the project. We will use this string to connect the project to the AI Agent SDK, so add it to the .env file.

Connection setup: add these variables to your .env file in the root directory:

```
PROJECT_CONNECTION_STRING=" "
MODEL_DEPLOYMENT_NAME="gpt-4o-mini"
AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED="true"
```

Create the Agent with Tool Access

We will create the agent in the AI Foundry project and add the Postgres tools needed to query the database. The code snippet below is an excerpt from the file simple_postgres_and_ai_agent.py.

```python
# Create an Azure AI Client
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
)

# Initialize agent toolset with user functions
functions = FunctionTool(user_functions)
toolset = ToolSet()
toolset.add(functions)

agent = project_client.agents.create_agent(
    model=os.environ["MODEL_DEPLOYMENT_NAME"],
    name="legal-cases-agent",
    instructions="You are a helpful legal assistant that can retrieve information about legal cases.",
    toolset=toolset,
)
```

Create Communication Thread: this code snippet shows how to create a thread and message for the agent. The thread and message are what the agent processes in a run.

```python
# Create thread for communication
thread = project_client.agents.create_thread()

# Create message to thread
message = project_client.agents.create_message(
    thread_id=thread.id,
    role="user",
    content="Water leaking into the apartment from the floor above. What are the prominent legal precedents in Washington on this problem in the last 10 years?",
)
```

Process the Request: this code snippet creates a run for the agent to process the message and use the appropriate tools to provide the best result.
Using the tool, the agent will be able to call your Postgres database and run the vector search on the query "Water leaking into the apartment from the floor above" to retrieve the data it needs to best answer the question.

```python
from pprint import pprint

# Create and process agent run in thread with tools
run = project_client.agents.create_and_process_run(
    thread_id=thread.id,
    agent_id=agent.id,
)

# Fetch and log all messages
messages = project_client.agents.list_messages(thread_id=thread.id)
pprint(messages['data'][0]['content'][0]['text']['value'])
```

Run the Agent: to run the agent, run the following command from the src directory:

```
python simple_postgres_and_ai_agent.py
```

The agent will produce a result similar to the one below, using the Azure Database for PostgreSQL tool to access case data saved in the Postgres database. Snippet of output from the agent:

```
1. Pham v. Corbett
Citation: Pham v. Corbett, No. 4237124
Summary: This case involved tenants who counterclaimed against their landlord for relocation assistance and breach of the implied warranty of habitability due to severe maintenance issues, including water and sewage leaks. The trial court held that the landlord had breached the implied warranty and awarded damages to the tenants.

2. Hoover v. Warner
Citation: Hoover v. Warner, No. 6779281
Summary: The Warners appealed a ruling finding them liable for negligence and nuisance after their road grading project caused water drainage issues affecting Hoover's property. The trial court found substantial evidence supporting the claim that the Warners' actions impeded the natural flow of water and damaged Hoover's property.
```

Step 4: Testing and Debugging with Azure AI Foundry Playground

After running your agent with the Azure AI Agent SDK, the agent will be stored in your project, and you can experiment with it in the Agent playground.
Using the Agent Playground:

Navigate to the Agents section in Azure AI Foundry.
Find your agent in the list and click to open it.
Use the playground interface to test various legal queries.

Test the query "Water leaking into the apartment from the floor above. What are the prominent legal precedents in Washington?" The agent will pick the right tool to use and ask for the expected output for that query. Use sample_vector_search_cases_output.json as the sample output.

Step 5: Debugging with Azure AI Foundry Tracing

When developing the agent using the Azure AI Foundry SDK, you can also debug the agent with tracing. You will be able to debug the calls to tools like Postgres, as well as see how the agent orchestrated each task.

Debugging with Tracing:

Click Tracing in the Azure AI Foundry menu.
Create a new Application Insights resource or connect an existing one.
View detailed traces of your agent's operations.

Learn more about how to set up tracing with the AI agent and Postgres in the advanced_postgres_and_ai_agent_with_tracing.py file on GitHub.

Get Started Today

By combining Azure Database for PostgreSQL with Azure AI Agent Service, developers can create intelligent agents that automate data retrieval, improve decision-making, and unlock powerful insights. Whether you're working on legal research, customer support, or data analytics, this setup provides a scalable and efficient solution to enhance your AI applications. Ready to build your own AI agent? Try building your own legal agent with Azure AI Agent Service and Postgres.

1. Set up Azure AI Foundry and Azure Database for PostgreSQL flexible server:
Set up an AI Foundry project and deploy the "gpt-4o-mini" and "text-embedding-3-small" models.
Set up Azure Database for PostgreSQL flexible server with the pg_vector extension, and allow the azure_ai extension.
2. Run our end-to-end sample AI agent using the Azure Database for PostgreSQL tool.
3.
Customize for your use case:
Replace legal data with your domain-specific information.
Adjust agent instructions for your specific needs.
Add additional tools as required.

Learn More

Read more about Azure Database for PostgreSQL and the Azure AI Agent Service.
Learn more about Vector Search in Azure Database for PostgreSQL.
Learn more about Azure AI Agent Service.

Create and share Copilot agents in SharePoint in a few clicks
Go behind the scenes to learn more about Copilot agents in SharePoint, powered by your content. What could be better? Copilot agents are built in Copilot Studio via our new, built-in lightweight agent builder experience in SharePoint. CJ and Karuana discuss the broader vision of Copilot agents and show you how to put this new capability to work within your SharePoint sites in Microsoft 365, including IT management details and licensing insights.

Check out the new Microsoft 365 Copilot agents in SharePoint adoption hub: https://aka.ms/SharePoint/agents. In addition, review the recent, related blog, "Microsoft 365 Copilot Wave 2: AI Innovations in SharePoint and OneDrive," by Adam Harmetz and Jason Moore. And review the short explainer video, "Copilot agents in SharePoint," taken from the "Microsoft 365 Copilot: Wave 2" event broadcast with Satya Nadella and Jared Spataro.

Automation for a real estate brokerage
I manage a real estate brokerage and am looking for help in how to utilize Microsoft Teams better. I would like to have it automate processes for my associates. For example, when they are looking to list a property, it will guide them to a tutorial and provide all of the documents that I uploaded. Another example: they want to request a lockbox or signs, and it will ask them for the relevant information and then put the request in my queue. Any help would be appreciated!

Monitoring agent in limbo
Hi, I'm having trouble connecting an Azure VM to a new Log Analytics workspace. It's in an error state, and if I try to disconnect it, it just starts disconnecting but never finishes. The extension status for MicrosoftMonitoringAgent is "Unavailable". I tried to uninstall the extension, but that fails with "Error". I also tried to manually install the agent, but the connection status was still "Error" after giving it the night to reach the right status. Anyone got an idea of what to do next?

Masking MS Teams Service Desk Agents
Hello, I would like my Service Desk agents to be more active answering questions in MS Teams, but what I would like to try and avoid is staff trying to contact those agents directly after the fact. Does anyone know of a way I can mask a group of MS Teams users in specific channels so that the name of the user is hidden (i.e., shows Helpdesk instead of John Smith)? Thx!