Microsoft 365 Copilot
Ignite Recap: How Copilot Search and Connectors Are Redefining Productivity
Ready to see how AI is transforming the way you work? At Microsoft Ignite, Microsoft 365 Copilot Search and Connectors took center stage, ushering in a new era of productivity and intelligent search.

New! Centralized Agent Dashboard and Enhanced Reporting
Track Adoption Trends and Export Insights with Copilot and Agent Analytics

At Ignite 2025, we unveiled key updates to Copilot and Agent Analytics that address a critical challenge for organizations on their AI transformation journey: the need for actionable, transparent insight into Copilot and agent adoption and impact. These enhancements let IT, business leaders, and analysts track usage trends, adoption benchmarks, and agent performance across Microsoft 365 Copilot and Copilot Chat through intuitive dashboards and robust reporting tools. And because customers can export data for custom analysis in their own systems, Copilot and Agent Analytics gives organizations the flexibility to optimize AI deployment, measure impact, and drive data-driven decisions at scale.

The latest Copilot and Agent Analytics updates introduce the following key tools and features:

- Copilot Chat Reporting: Track adoption trends, retention, and group comparisons for Copilot Chat and Microsoft 365 Copilot in the Copilot Dashboard.
- Copilot Benchmarks: See how your organization's Copilot adoption stacks up internally and against peer companies to assess where you stand in your AI transformation.
- Agent Dashboard: A centralized view to measure agent adoption across your organization, identify your top-performing agents, and drill into individual agent performance.
- Readiness Recommendations: Get tailored guidance to maximize Copilot, Copilot Chat, and agent utilization on a revamped readiness page in the Copilot Dashboard.
- Export Functionality: Download Copilot Dashboard data as a CSV file for custom reporting and deeper analysis.
- Copilot Analytics in Graph API: Access Copilot Analytics data to build custom reports and connect insights with your internal tools and datasets.

Together, these features give business leaders and analysts the clarity they need to optimize AI adoption and measure impact across their organizations.

Copilot Chat reporting

The Copilot Dashboard consolidates adoption metrics for both Microsoft 365 Copilot and Copilot Chat, giving business leaders a unified view of how these tools are used across the organization. Customers can use Copilot Chat reporting to assess overall AI adoption, improve enablement for groups that are lagging, and optimize Microsoft 365 Copilot licensing by identifying their most active groups. You can instantly see which apps and surfaces employees use to interact with Copilot Chat, compare usage frequency and retention across groups, and drill into monthly and weekly trends in user intensity and retention to identify the most engaged groups and spot opportunities to expand adoption. These insights help organizations understand how Copilot Chat is driving productivity, where usage is strongest, and where additional support or licensing could have the greatest impact.

This update is Generally Available for customers with access to the Copilot Dashboard and 50 or more Microsoft 365 Copilot licenses.
Copilot Benchmarks

Business leaders want to know how their AI transformation compares to others. Benchmarks in the Microsoft Copilot Dashboard put your adoption trends in context, so you can compare usage across internal cohorts and see how you stack up against similar organizations. Individual data is never shared; it is presented only in aggregate at the group level to protect privacy. With side-by-side views of the percentage of active Copilot users, adoption by app, and returning-user rates, leaders get a clear signal of where Copilot is taking hold and where to focus enablement next. External views show performance against the top 10% and top 25% of peer companies, as well as overall benchmarks, bringing needed clarity to your rollout goals. Internal benchmarks use attributes like job function and region to provide relevant comparisons within your organization, while external benchmarks aggregate anonymized data from peer groups. This dual approach gives leaders actionable context for setting goals and measuring progress, so organizations can identify success stories, target training, and drive Copilot engagement where it matters most.

Copilot Benchmarks is Generally Available, making it easier to operationalize data-driven adoption strategies.

Agent Dashboard

Agents are quickly becoming the next wave of AI innovation, taking on specific business processes and increasing business capacity. Because agents can come from a variety of sources, leadership wants one place to understand how they are transforming the business. The Agent Dashboard provides a single, centralized view for tracking agent activity and adoption across your organization. Leaders, AI adoption specialists, and analysts can monitor key metrics, including active agents, user engagement, responses, usage retention, and shares, all in one place. Top-agent highlights surface the most-used agents in your organization based on active user count, agent responses, and Copilot Credits consumed, making it easy to identify which agents are driving the most impact, while intuitive drilldowns let you dive deeper into any individual agent. The single view also makes it easy to compare how different groups are adopting agents within their workflows, increasing productivity and driving AI innovation in your organization.

The initial release supports agents within Microsoft 365 Copilot, with plans to expand to additional agent types and metrics in future updates. Private Preview starts in December, with Public Preview and General Availability to follow in early 2026.

Revamped Readiness report for IT leaders

Improve AI utilization with administrative recommendations on the Readiness report in the Copilot Dashboard. The updated Readiness page gives IT leaders tailored insights and step-by-step recommendations to maximize the use of Microsoft 365 Copilot, Copilot Chat, and agents, including guidance on adjusting Copilot settings to streamline deployment and enhance the user experience. This feature rolled out in Private Preview in November.

Export data from Copilot Dashboard

Exporting Copilot Dashboard data is simple and fast: just download usage metrics as a CSV file for tailored reporting and deeper analysis. Leaders, analysts, and admins with global access can extract weekly Copilot metrics from the past six months, enabling custom insights and trend analysis. This export capability lets organizations create advanced reports, track adoption patterns, and support transformation initiatives with data-driven decisions. The feature rolled out in Private Preview in November, with Public Preview to follow this month.
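For teams that want to go beyond the built-in views, the exported CSV is easy to analyze in a notebook. The snippet below is a minimal sketch of charting weekly adoption by group with pandas; the column names (Week, GroupName, ActiveCopilotUsers) are illustrative assumptions, so adjust them to match the headers in your actual export.

```python
# Minimal sketch: trend analysis on a Copilot Dashboard CSV export.
# Column names (Week, GroupName, ActiveCopilotUsers) are illustrative
# assumptions; check the headers of your actual export before running.
import pandas as pd

df = pd.read_csv("copilot_dashboard_export.csv", parse_dates=["Week"])

# Weekly active Copilot users per group over the six-month export window.
trend = (
    df.groupby(["Week", "GroupName"])["ActiveCopilotUsers"]
      .sum()
      .unstack("GroupName")
      .sort_index()
)

# Week-over-week change highlights groups where adoption is accelerating or stalling.
wow_change = trend.diff()

print(trend.tail())
print(wow_change.tail())
```

The resulting frame can then feed Power BI or any internal reporting tool alongside the dashboard views.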
Copilot Analytics in Graph API

Unlock deeper insights with Copilot Analytics endpoints now available in the Microsoft Graph API, which let you build custom reports and integrate Copilot data with internal tools and datasets. Analysts can access the same metrics found in the Microsoft 365 Copilot usage report, including tenant-level counts of enabled and active users and each user's last activity date across Microsoft 365 apps. As Copilot Analytics continues to evolve, expect a growing set of metrics and datasets designed to help you drive data-driven decisions and maximize the value of your AI investments.
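As a rough sketch of what a custom report could look like, the snippet below calls a Microsoft Graph usage-report endpoint with a previously acquired bearer token (for example, via MSAL). The endpoint path, the Reports.Read.All permission, and the lastActivityDate field are assumptions based on the existing usage-report APIs; confirm the exact Copilot Analytics endpoints and response shape on Microsoft Learn before building on this.

```python
# Minimal sketch: pulling Copilot usage data from Microsoft Graph.
# The endpoint path, the Reports.Read.All permission, and the response shape
# (a JSON object with a "value" array) are assumptions based on the existing
# usage-report APIs; verify the Copilot Analytics endpoints on Microsoft Learn.
import requests

GRAPH_BASE = "https://graph.microsoft.com/beta"
ACCESS_TOKEN = "<token acquired via MSAL or another OAuth flow>"


def fetch_copilot_usage(period: str = "D30") -> list[dict]:
    """Return per-user Copilot usage rows for the given reporting period."""
    url = f"{GRAPH_BASE}/reports/getMicrosoft365CopilotUsageUserDetail(period='{period}')"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json().get("value", [])


if __name__ == "__main__":
    rows = fetch_copilot_usage()
    active = [r for r in rows if r.get("lastActivityDate")]
    print(f"Users with Copilot activity this period: {len(active)} of {len(rows)}")
```

From there, the rows can be joined with HR or licensing data to build the custom views your organization needs.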
Conclusion and resources

Copilot and Agent Analytics delivers a unified, powerful platform for organizations to accelerate their AI transformation. With the latest releases, leaders and analysts can track adoption, compare performance, and create custom reports with ease. These tools bring together actionable insights across Microsoft 365 Copilot, Copilot Chat, and agents, empowering every organization to optimize deployment, measure impact, and drive smarter, data-driven decisions. As Copilot and Agent Analytics continues to evolve, expanding support for agent types and metrics will further help organizations maximize the value of Copilot, foster innovation, and confidently navigate the future of work.

Below are additional resources to help you on your journey:

- On-demand Copilot and Agent Analytics Customer Hub series: Driving business value with Microsoft 365 Copilot
- Copilot and Agent Analytics whitepaper: Unlocking Business Impact with Microsoft 365 Copilot: Turning AI Adoption into Measurable Value
- Copilot and Agent Analytics Learn articles: Copilot Analytics introduction and quick-start guide | Microsoft Learn

Analyst agent in Microsoft 365 Copilot

Xia Song, CVP, Microsoft 365 Engineering

As large language models (LLMs) and multimodal systems revolutionize information work by seamlessly navigating language, code, vision, and voice, a vast domain of structured, tabular data remains underutilized: Excel sheets, databases, CSV files, and Power BI reports often lack the natural intuitiveness of text or images. Picture a project manager urgently seeking quarterly performance insights scattered across multiple Excel worksheets and a badly formatted table inside a presentation. Some metrics are hidden in the middle of a worksheet, and some TSV files use commas instead of tabs, leaving little guidance on which data matters or how it connects. For those unskilled in data wrangling, this scenario can devolve into hours of frustration or missed insights. Yet armed with the know-how to manipulate data and harness code as a tool, one can swiftly unravel such complexity, extract the pivotal information, and gain a critical competitive edge. But what if everyone had this capability readily available?

That's precisely the motivation behind the launch of Analyst, one of the first reasoning agents of its kind in Microsoft 365 Copilot. Powered by our advanced reasoning model, post-trained from OpenAI's o3-mini on analytic tasks, Analyst acts as your "virtual data scientist". This reasoning-powered agent is built directly into Microsoft 365, placing sophisticated data analytics capabilities right at your fingertips.

The Era of Progressive Reasoning and Problem Solving

Traditional LLMs have historically jumped too quickly from problem to proposed solution, often failing to adjust to new complexities or gracefully recover from mistakes. The advanced reasoning model behind Analyst changes this by implementing a reasoning-driven, chain-of-thought (CoT) architecture derived from OpenAI's o3-mini. Instead of providing quick answers, it progresses through problems iteratively: hypothesizing, testing, refining, and adapting. Analyst takes as many steps as necessary, adjusts to each complexity it encounters, and mirrors human analytical thinking. With the ability to generate and execute code at every step of its reasoning trajectory, the model excels at incremental information gathering, hypothesis construction, course correction, and automatic recovery from errors.

Real-World Data is Messy: A Case Study

Real-world data is messy. To illustrate the tangible benefits of the model's reasoning capabilities, consider a practical challenge. Imagine you're presented with two datasets:

- Dataset A: An Excel file with multiple sheets containing data on world internet usage, where the critical data isn't conveniently located at the top left of the first sheet but somewhere in the middle of the second sheet.
- Dataset B: A .tsv file containing country statistics, presumably tab-delimited but mis-formatted with commas as delimiters due to an export error.

The task at hand? Vague at best, something like "Help identify and visualize interesting insights and connections between these two datasets". Most traditional tools and existing models struggle here: they either stall entirely or deliver incomplete or incorrect analyses. Faced with precisely this scenario, however, Analyst demonstrates remarkable resilience (the sketch after this list illustrates the kind of code involved):

- It quickly identifies and navigates directly to the relevant data hidden in the middle of an Excel sheet.
- It shows curiosity, discovering and listing the workbook's sheet names.
- It gracefully detects and corrects the delimiter issue in the second dataset.
- It progressively explores the data through iterative hypothesis-testing steps, constructing actionable insights without explicit guidance.
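To make the wrangling concrete, here is a minimal Python sketch of the kind of code a code-executing agent (or a data-savvy human) might produce for the first few steps: listing the workbook's sheets, pulling a table out of the middle of a worksheet, and sniffing the real delimiter of the mislabeled .tsv file. The file names, cell positions, and shared Country column are hypothetical placeholders, not the agent's actual output.

```python
# Minimal sketch of the first wrangling steps for the case study.
# File names, cell positions, and column names are illustrative assumptions.
import csv
import pandas as pd

# Dataset A: discover the sheets, then read a table that starts
# mid-worksheet (here assumed to begin around row 12, columns C:F).
xls = pd.ExcelFile("world_internet_usage.xlsx")
print(xls.sheet_names)  # curiosity first: what is actually in the workbook?

usage = pd.read_excel(
    xls,
    sheet_name=xls.sheet_names[1],  # the critical data sits in the second sheet
    skiprows=11,                    # skip the decorative rows above the table
    usecols="C:F",
)

# Dataset B: the .tsv extension promises tabs, but the export used commas.
with open("country_stats.tsv", newline="") as f:
    sample = f.read(4096)
    dialect = csv.Sniffer().sniff(sample, delimiters=",\t;")
stats = pd.read_csv("country_stats.tsv", sep=dialect.delimiter)

# With both tables loaded, a join becomes possible
# (assumes both tables share a "Country" column).
merged = usage.merge(stats, how="inner", on="Country")
print(merged.head())
```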
As a result of this progressive problem solving, the model handles these complexities smoothly and produces observations, insights, and visualizations on its own, demonstrating transformative potential on real-world analytic tasks.

How It Learns: Reinforcement Learning, Structured Reasoning, and Dynamic Code Execution

The effectiveness of the advanced reasoning model behind Analyst lies largely in reinforcement learning (RL). Built by post-training OpenAI's o3-mini model, it employs advanced RL coupled with rule-based rewards to handle long reasoning paths, incremental information discovery, and dynamic code execution. We have observed that model performance consistently improves with more reinforcement learning compute during training, as well as with more deliberate thinking during inference. Analyst takes advantage of the STEM and analytical reasoning optimizations introduced by models like o3-mini, excelling in structured data scenarios. It dynamically writes, executes, and verifies Python code within a controlled execution environment, and this iterative cycle lets the model continually adjust its strategy through course corrections and effective recovery from errors, closely emulating human problem-solving behavior.
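The write-execute-verify cycle can be pictured as a simple loop. The sketch below is a toy illustration of that pattern under stated assumptions, not the Analyst implementation: request_code and looks_reasonable stand in for the model call and the verification step, and a production system would run generated code in a hardened sandbox rather than a bare subprocess.

```python
# Toy illustration of a generate-execute-verify loop; not the actual Analyst
# implementation. request_code() and looks_reasonable() are hypothetical
# stand-ins for the model call and the verification step.
import subprocess
import sys

MAX_STEPS = 5

def request_code(task: str, feedback: str) -> str:
    # Stand-in for the model call: a real agent would send the task plus the
    # accumulated feedback to the reasoning model and receive new code back.
    return "print('answer: 42')"

def looks_reasonable(output: str) -> bool:
    # Stand-in for verification, e.g. rule-based checks on the produced answer.
    return "answer:" in output

def solve(task: str) -> str | None:
    feedback = ""
    for step in range(MAX_STEPS):
        code = request_code(task, feedback)
        # Run the generated code in a separate interpreter with a timeout;
        # a production system would use a locked-down sandbox instead.
        run = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=60,
        )
        if run.returncode != 0:
            feedback = f"step {step} failed:\n{run.stderr}"
            continue  # recover from the error and try a revised approach
        if looks_reasonable(run.stdout):
            return run.stdout  # verified result
        feedback = f"step {step} output rejected:\n{run.stdout}"
    return None

if __name__ == "__main__":
    print(solve("Summarize quarterly revenue by region"))
```

The feedback string plays the role of the error messages and intermediate outputs the model reasons over between steps.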
Data Diversity and Robust Reward

Training data diversity is fundamental to post-training effectiveness. We built extensive datasets spanning a wide range of real-world enterprise scenarios and structured data types:

- File types: Excel, CSV, TSV, JSON, JSONL, XML, SQLite databases, PowerPoint presentations, and more.
- Task types: from straightforward numerical computations and visualizations to exploratory hypothesis construction and prediction.

The training data points were carefully constructed and selected to represent authentic complexity, preventing the model from overfitting to any particular task or benchmark. Recognizing the "reward hacking" behavior often observed in reinforcement learning systems, which can lead to a loss of model capability, we refined our reward system by adopting more advanced and robust graders. This meticulous data selection, combined with rigorous task design, ensures genuine reasoning by incentivizing authentic exploration and accurate outcomes.

Results

The following benchmark results underscore the model's strengths on rigorous analytics-focused tasks, both in the DABStep benchmark and in internal Microsoft 365 Copilot comparisons.

DABStep (Data Agent Benchmark for Multi-step Reasoning)

DABStep is a rigorous evaluation suite designed to test AI agents on real-world data analysis and reasoning tasks. It consists of more than 450 structured and unstructured tasks, split into Easy and Hard categories: the Easy set involves simpler data extraction and aggregation, while the Hard set requires multi-step reasoning, integration of diverse datasets, and domain knowledge. When benchmarked on DABStep, our model demonstrated overall state-of-the-art performance among known baselines. It showed excellent capability on both simple and complex tasks, with a substantial lead on the latter.

Note: The M365 Copilot Analyst model currently appears in the real-time leaderboard as an unvalidated anonymous submission labeled "Test1". We have contacted the DABStep team to update the submission so it is reflected as the Analyst model from Microsoft.

Product Benchmarks

While academic benchmarks provide valuable insights, the true measure of a model's value lies in its practical application to real-world scenarios. We benchmark the model's performance on enterprise data analysis tasks across diverse business documents, including Excel spreadsheets, CSVs, PDFs, XML files, and PowerPoint presentations, reflecting common analytical workflows within the Microsoft 365 suite. We compare the specialized Analyst agent against the mainline M365 Copilot Chat (without deep reasoning), evaluating accuracy in insight generation, data interpretation, and structured query execution across these enterprise file formats. The advanced reasoning model behind Analyst consistently outperforms existing approaches, demonstrating not merely incremental but transformative improvements in real-world analytic reasoning.

The Road Ahead: Opportunities and Acknowledgements

We are genuinely excited about what Analyst can unlock for Microsoft 365 users by making advanced data analytics accessible to everyone. At the same time, we remain conscious of current limitations and see plenty of room for improvement: more seamless integration across applications, better interaction paradigms, and expanded model capabilities that cover an even broader spectrum of analytic scenarios. We are committed to continuously improving Analyst and the underlying model, listening closely to user feedback, and refining the model and its integration with other products. Our ultimate goal remains clear: to empower users and organizations to achieve more, turning everyday information workers into empowered analysts with a "virtual data scientist" at their fingertips.

For additional details on Analyst, including rollout and availability for customers, please also check out our blog post highlighting reasoning agents in Microsoft 365 Copilot.

References:

- Introducing Researcher and Analyst in Microsoft 365 Copilot
- OpenAI o3-mini
- Learning to reason with LLMs
- DABStep: Data Agent Benchmark for Multi-step Reasoning