Recent Discussions
Copilot Chat vs. Microsoft 365 Copilot: What's the difference?
While their names sound similar at first glance, Microsoft 365 Copilot and Microsoft 365 Copilot Chat differ in several aspects. And more importantly: one is built on top of the other.

What is Copilot Chat (Basic)?

First things first. Microsoft 365 Copilot Chat is often simply called Copilot Chat. Copilot Chat (Basic) generates answers based on web content, while Microsoft 365 Copilot (Premium) is also grounded in users' data, like emails, meetings, files, and more. Since early 2025, Microsoft 365 Copilot Chat has been available to all users in organizations, becoming the entry point to AI assistance for many of them.

Copilot Chat (Basic) is the foundational Copilot experience, available at no extra cost to everyone with an eligible Microsoft 365 plan, including:
- Microsoft 365 E3 / E5
- Microsoft 365 A3 / A5
- Microsoft 365 Business Standard & Business Premium

Copilot Chat (Basic) is secure, compliant, and does not require the full Copilot add-on license. It can ground responses on:
- Public web content.
- Content explicitly shared or work data manually uploaded to the chat by the user.
- Content displayed on-screen in apps like Outlook, Word, Excel, PowerPoint, and OneNote.

When it comes to agents, Copilot Chat (Basic) offers these features:
- You can create your own declarative agents grounded on public web content with Agent Builder.
- You can use agents built by your org grounded on organizational data via pay-as-you-go billing.
- Microsoft prebuilt agents like Prompt Coach are available; however, premium prebuilt agents like Researcher or Analyst are not included.

The screenshot below shows how Copilot Chat looks and highlights its main capabilities. Note the Upgrade button, which indicates this is the Copilot Chat (Basic) experience, not Microsoft 365 Copilot. Note also that EDP (Enterprise Data Protection) is available in Copilot Chat (Basic).

What is Microsoft 365 Copilot (Premium)?
Microsoft 365 Copilot (Premium) is a paid add-on license that builds on top of Copilot Chat and unlocks Copilot's full power. It is available for selected Microsoft 365 plans, including:
- Microsoft 365 E3 / E5
- Microsoft 365 A3 / A5
- Microsoft 365 Business Standard & Business Premium

With a Microsoft 365 Copilot license, users get everything Copilot Chat (Basic) offers, plus much more:
- Data grounding: Microsoft 365 Copilot (Premium) includes Copilot Chat grounded on the web and/or on the user's Microsoft 365 data like emails, meetings, chats, and documents.
- Office apps: It integrates deeply into Microsoft 365 apps like Outlook, Teams, Word, Excel, and more. The integration includes features like Edit with Copilot, which lets Copilot adjust your documents or email live based on your prompts.
- Custom agents: It adds the ability to create your own declarative agents grounded in organizational data and/or web data, using either Agent Builder or Copilot Studio.
- MS prebuilt agents: Premium prebuilt agents like Researcher and Analyst are included in Microsoft 365 Copilot (Premium).

The screenshot below shows the Copilot chat experience for users who have a Microsoft 365 Copilot license. Note that EDP (Enterprise Data Protection) also applies here.

How can I access Microsoft 365 Copilot Chat?

Today, Copilot Chat is accessible via https://m365.cloud.microsoft or https://copilot.cloud.microsoft using your Entra ID (work or school) account. One important difference in the day-to-day experience: users with a Microsoft 365 Copilot license typically see Copilot prominently surfaced across Microsoft 365 apps, while users with only Copilot Chat may not see it pinned by default on the Microsoft 365 home page. To improve discoverability, Microsoft 365 administrators can pin Copilot Chat via the Microsoft 365 admin center, ensuring that users can access it without friction.
Especially convenient: if you use the M365 Copilot Chat app on Windows, you can open Copilot using the keyboard shortcut Ctrl + C.

What's the difference?

The differences between Copilot Chat and Microsoft 365 Copilot mainly come down to:
- Licensing
- Data grounding (web-only vs. personal work data)
- Integration depth within Microsoft 365 apps

I've listed the key differences in the comparison below. 👇

Microsoft 365 Personal Classic subscription account & Copilot integration!
I have a Microsoft 365 Personal Classic subscription account which includes Copilot integration, and apparently it's built into Word, Excel, PowerPoint, Outlook, & OneNote. However, none of the MS apps that I can see have the sidebar or icon to invoke or use Copilot, albeit Copilot is available as a standalone item... 🤔

Top 3 Microsoft Copilot Features You Should Know – April 2026
🚀 Top 3 Microsoft Copilot Features – April 2026 Microsoft Copilot is evolving fast, and keeping up isn’t always easy. That’s why I’ve combined my latest 3 Copilot videos into one short recap, highlighting the top 3 Copilot features that really matter right now 👇 ✅ Practical, real‑world use cases ✅ Focus on productivity in Microsoft 365 ✅ Relevant for knowledge workers, IT pros, and digital workplace teams 🎥 Watch the full video here: 👉 https://youtu.be/W1wzCInpvlQ If you’re working with Microsoft 365 Copilot, Teams, Outlook, or enterprise AI, this overview will help you quickly understand what’s new—and why it matters. 💬 Curious to hear your thoughts: Which Copilot feature is already changing the way you work? #MicrosoftCopilot #Microsoft365 #AIatWork #DigitalWorkplace #Productivity #M365Copilot #EnterpriseIT #CopilotUpdates

Create Follow-up Actions Directly from Emails Using Copilot: A Smarter Way to Stay on Track
Email has long been the backbone of professional communication, but managing what comes after the email (the follow-ups, tasks, and deadlines) can quickly become overwhelming. Important action items get buried in long threads, and even the most organized inbox can turn into a to-do list that’s hard to manage. https://dellenny.com/create-follow-up-actions-directly-from-emails-using-copilot-a-smarter-way-to-stay-on-track/

Giving AI Agent access to move files between folders
Hi Community, I have built a PDF-to-Excel reconciliation AI Agent using Microsoft Copilot that reconciles supplier statements against payables data for our organisation. The agent works well: it reads PDF supplier statements and Excel reports from a SharePoint document library, performs the reconciliation, and produces a structured, audit-ready report. However, I am hitting a limitation at the final step. Once the reconciliation is complete, I would like the agent to automatically move the processed supplier statement PDF from its current folder to a subfolder called "reconcilled" within the same SharePoint document library. My question is: what is the recommended way to give a Copilot AI Agent the ability to move files between SharePoint folders? Any guidance, documentation links, or examples from others who have implemented similar workflows would be greatly appreciated. Thank you!
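For what it's worth, beyond a Power Automate "Move file" action, one documented option is to call Microsoft Graph from a flow or custom connector: a driveItem is moved by PATCHing its parentReference so it points at the target folder. Below is a minimal Python sketch of that request; the site, item, and folder IDs are hypothetical placeholders, not values from this thread:

```python
import json

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_move_request(site_id: str, item_id: str, target_folder_id: str):
    """Build the Microsoft Graph request that moves a driveItem
    (e.g. a processed PDF) into another folder in the same library.

    Moving is done by PATCHing the item's parentReference.
    All IDs here are hypothetical placeholders for your tenant.
    """
    url = f"{GRAPH_BASE}/sites/{site_id}/drive/items/{item_id}"
    body = {"parentReference": {"id": target_folder_id}}
    return "PATCH", url, json.dumps(body)

# Example with made-up IDs:
method, url, body = build_move_request("contoso-site", "ITEM123", "RECONCILED456")
print(method, url)  # PATCH .../sites/contoso-site/drive/items/ITEM123
```

In practice the agent would hand off to a flow after reconciliation completes, and the flow would send this request with a delegated or application token (or simply use the built-in SharePoint "Move file" action, which avoids token handling entirely).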
Cannot Publish My Agent

Hello, I am currently facing an issue publishing an agent for testing purposes in Microsoft Copilot Studio. Despite having the license assigned, the publishing process is not functioning as expected. The pop-up reads: "There are open issues with your agent. You currently do not have a user license that allows you to publish in Copilot Studio. Please contact your administrator to upgrade your license or enable the necessary permissions." I have noticed some inconsistencies in how these permissions are applied: a teammate with the same license initially could not access the environment at all. After being granted a Teams license, they were able to access and publish successfully; however, the agent is not appearing in Teams. Moreover, there is no option to add a knowledge base, tools, or further functionality to the agent at this time. Could someone clarify the specific requirements for agent publishing? Do I need to contact my administrator to assign a role, or do I need to be assigned a completely different license? Additionally, why would two users with identical licenses experience different environment access and visibility results?

Activate Copilot for Microsoft 365 Business Standard (Mac)
I purchased a "Microsoft Copilot for Microsoft 365" license. However, I am unable to activate it in Outlook and OneNote:
- Word – Available on both Desktop & Website
- Excel – Available on both Desktop & Website
- Outlook – Available on Website, NOT on Desktop
- OneNote – NOT available on both Desktop & Website

I tried the following:
- Under "Language and time", turned on "Sync across Microsoft 365" under the Use my Microsoft 365 setting
- Uninstalled and reinstalled the Office 365 Desktop app
- Turned on Optional Connected Experiences under Privacy in the Outlook and OneNote Desktop apps
- Updated licenses in the Desktop app

Any idea why Copilot does not appear in some of the apps? Thanks, Andy

Errors in source file retrieval from the knowledge base in copilot agents?
Hi all, I understand that Copilot agents are connected to MS Graph, which maps the relationships between all the data stored in your MS 365 tenancy (SharePoint, OneDrive files, emails, etc.). Recently, I created an agent, assigned a specific folder to the knowledge base, and turned off the "use web content" toggle, because I wanted the responses to be very directly tailored to my folder (including sub-folders with multiple files). I then tested if/how well the agent retrieved specific files using this prompt: "Can you please tell me how many files are in this folder and list the files in the folder? [Insert link to sub-folder from the main folder in the knowledge base]" The agent responded with (1) an incorrect count and (2) a list that included a few files that were not in the sub-folder but in another part of the knowledge base. As I understand it, (1) is a counting error and (2) is a retrieval+indexing error. I'm more concerned about (2) because I'm worried the agent isn't retrieving (and therefore using the info in) all the files in an important folder, even when specifically linked to it.

Questions:
(a) Where is this error happening in the indexing process within MS Graph? Am I misunderstanding where the error lies? Any ideas on why an agent is naming the wrong files in a folder within its own knowledge base?
(b) Do agents created within the Copilot agents web interface use Azure AI Search for semantic indexing, or is that only for more custom RAG solutions created "from scratch" using Foundry, the SDK, etc.? Do Copilot agents use Microsoft Search to query and index files used in a response?

Thanks!
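As a side note, one way to sanity-check error (2) is to fetch the folder's real children via Microsoft Graph and diff them against the agent's answer, which separates "index is stale" from "agent is retrieving across the whole knowledge base". A small sketch, with hypothetical IDs and file names rather than actual tenant data:

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def children_url(site_id: str, folder_id: str) -> str:
    """Graph URL that lists a folder's direct children (the ground truth)."""
    return f"{GRAPH_BASE}/sites/{site_id}/drive/items/{folder_id}/children"

def diff_listing(actual: set, agent_reported: set) -> dict:
    """Compare the agent's claimed file list against the real folder contents."""
    return {
        "missing_from_agent": sorted(actual - agent_reported),
        "hallucinated": sorted(agent_reported - actual),
    }

# Hypothetical example: three real files vs. what the agent answered.
real = {"a.pdf", "b.pdf", "c.pdf"}
claimed = {"a.pdf", "b.pdf", "x.pdf"}
print(diff_listing(real, claimed))
# {'missing_from_agent': ['c.pdf'], 'hallucinated': ['x.pdf']}
```

If "hallucinated" entries are real files elsewhere in the knowledge base (as described above), that points at retrieval scoping rather than indexing lag.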
Designing a Governed RTO Compliance Agent Using Copilot Studio and Databricks Genie

Enterprise AI adoption in HR scenarios comes with a unique challenge: how do you deliver actionable insights without compromising privacy, trust, or policy boundaries? In this blog, I’ll share how we built an RTO (Return‑to‑Office) Compliance Agent using Microsoft Copilot Studio and Databricks Genie, focusing on governance‑first design, controlled data access, and real‑world enterprise constraints. This solution was developed as part of an HRLT proof‑of‑value initiative and is designed to support people managers with clear, aggregated compliance insights, delivered conversationally inside Microsoft Teams.

The Problem We Were Solving

As hybrid work models mature, organizations need a reliable way to answer questions such as: How compliant is my team with RTO expectations? Are there trends across regions or time periods? Traditional dashboards often fall short because they:
- Require manual interpretation
- Expose too much granular data
- Are difficult to govern at scale

Our objective was to create an AI‑powered conversational interface that provides:
- Only manager‑authorized, aggregated insights
- Zero visibility into individual‑level behavior
- Built‑in enforcement of HR and privacy policies

Architecture Overview

The solution integrates Copilot Studio with Databricks Genie, backed by curated data sources. (Image: High-level Copilot Studio and Databricks Genie architecture)

Key Components
- Copilot Studio – Conversational orchestration, policy enforcement, and Teams deployment
- Databricks Genie – Governed natural-language interface to curated datasets
- RokFusion Platform – Trusted HR and badge-swipe data

This layered approach ensures governance is applied before data is ever queried.
Controlled End-to-End Data Flow

The interaction pattern follows a strict, auditable flow:
1. A manager asks a question in Copilot Studio
2. Copilot forwards the request to Genie with instruction constraints
3. Genie executes logic only on curated, approved tables
4. Calculations are performed at team or manager level only
5. Copilot formats and returns compliant responses (text, tables, or charts)

At no point are employee IDs, badge events, or individual metrics exposed.

Using Genie as a Governance Layer, Not Just a Query Tool

One of the most critical decisions was to treat Databricks Genie as a policy‑enforcement layer, not merely a natural‑language SQL generator. (Image: Genie instruction configuration enforcing compliance rules)

What We Configured in Genie
- Synonyms and NL mappings for HR terminology
- Strict filtering logic for employee categories
- Population threshold enforcement (minimum count)
- Explicit rejection of sensitive attributes such as gender, race, religion, or age
- Prevention of formula or row‑level data exposure

This approach ensured that even malformed or risky prompts could not bypass policy constraints.

Compliance Scenarios Supported

The agent supports multiple business‑aligned interpretations of RTO compliance:
- Hybrid Compliance – Hybrid employees counted only on eligible hybrid days
- Onsite Compliance – Onsite employees counted across standard working days
- All Employees View – Weighted aggregation combining hybrid and onsite logic

These scenarios are embedded into the agent’s instruction logic, not dynamically inferred at runtime, ensuring consistency and auditability.

Why We Chose Conversational AI Over Dashboards

A key insight early on was that managers don’t want spreadsheets; they want answers. Instead of navigating filters and charts, managers can ask:
- “What was my team’s compliance last week?”
- “Show me a comparison across regions.”

When required, the agent can also render simple visual outputs.
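To make the minimum-count rule concrete, here is a small illustrative sketch. The field names and threshold are hypothetical; in the real solution this enforcement lives in Genie's instructions and curated tables, not in application code:

```python
from collections import defaultdict

MIN_POPULATION = 5  # hypothetical threshold; smaller groups are suppressed

def team_compliance(records, min_population=MIN_POPULATION):
    """Aggregate per-person compliance flags to team level, suppressing
    any team whose population falls below the minimum-count threshold.

    records: iterable of (team, was_compliant) tuples -- no employee IDs
    or badge events are retained, mirroring the aggregation-only design.
    Returns {team: compliance_rate}, with None for suppressed teams.
    """
    counts = defaultdict(lambda: [0, 0])  # team -> [compliant, total]
    for team, was_compliant in records:
        counts[team][0] += int(was_compliant)
        counts[team][1] += 1
    return {
        team: (compliant / total if total >= min_population else None)
        for team, (compliant, total) in counts.items()
    }

sample = [("A", True)] * 4 + [("A", False)] + [("B", True)] * 2
print(team_compliance(sample))  # {'A': 0.8, 'B': None} -- B is too small to report
```

The point of the suppression rule is that a question like "was Bob in the office on Tuesday?" simply cannot be answered, because groups below the threshold never produce a number.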
(Image: Sample Microsoft Teams output with compliance visualization) Importantly, visuals follow the same governance rules as text responses.

Publishing and Validation in Microsoft Teams

Once configured, the agent was published directly from Copilot Studio to Microsoft Teams, making adoption frictionless. (Image: Publishing Copilot Studio agent to Microsoft Teams)

End‑to‑end testing validated:
- Authorization boundaries
- Population rules
- Safe handling of incomplete or ambiguous queries

Key Engineering Learnings
- Governance must be instruction‑driven. Relying on frontend filtering alone is insufficient for HR data.
- Natural language needs strong guardrails. Enterprise AI benefits from being constrained, not free‑form.
- Aggregation builds trust. Managers are more comfortable with insights when they know individual visibility is impossible.
- Copilot Studio accelerates enterprise delivery. Security, deployment, and integration stay within the Microsoft ecosystem.

Closing Thoughts

This RTO Compliance Agent demonstrates how Copilot Studio and Databricks Genie can be used to build governed, enterprise‑ready AI solutions, especially in sensitive domains like HR. By embedding policy into architecture, instructions, and data access, we were able to deliver:
- Useful insights
- Strong privacy guarantees
- High user trust

This pattern is extensible well beyond RTO, opening the door for future HR intelligence use cases built on the same foundation.

Microsoft 365 Copilot Licensing Shake-Up Explained
Microsoft is rapidly reshaping its entire productivity ecosystem around artificial intelligence, and at the center of this transformation is Microsoft 365 Copilot. What started as a premium AI add-on is now evolving into a layered, tiered AI model embedded across Microsoft 365 Business and Enterprise plans. https://dellenny.com/microsoft-365-copilot-licensing-shake-up-explained/

Microsoft Copilot in Outlook: Automate Calendar Instructions & Meetings with AI
🚀 New Copilot Feature in Outlook: Calendar Instructions Microsoft Copilot in Outlook just got smarter. With Calendar Instructions, you can now teach Copilot how you want your meetings handled — and let AI do the heavy lifting. ✅ Set personal scheduling rules ✅ Let Copilot create, move, or update meetings for you ✅ Reduce back‑and‑forth and calendar overload ✅ Stay fully aligned with your preferences In my latest video, I walk through: What Calendar Instructions are How they work in real scenarios Why this is a big step forward for AI‑powered productivity in Microsoft 365 🎥 If you use Outlook daily, this is a feature you don’t want to ignore. 👉 Would you let Copilot manage your calendar automatically? Curious to hear your thoughts 👇 https://youtu.be/iOcwSAHfM0Q #MicrosoftCopilot #Microsoft365 #Outlook #AIProductivity #FutureOfWork #Copilot #M365 #Automation #OutlookTips

Has anyone had issues with images not loading properly/at all?
Starting last week, the Copilot application has not been properly rendering generated images for several users. I have pasted several examples below, along with the errors I am receiving in various tools. Has anyone had a similar issue and/or been able to resolve it on their end? Is it possible to disable Copilot in Agent mode (i.e., return to Work mode)? Thank you! Please let me know if I can clarify with any additional details.

Copilot Studio Knowledge Source Limitation When Iterating Over Multiple SharePoint Documents
Hi, I’m looking for clarification on a limitation we’re currently encountering in Copilot Studio that is blocking some of our use cases.

Example Scenario (Policy Agent)

We have a SharePoint document library containing ~100 policy documents. A Copilot Studio agent is configured with this library as a knowledge source. The agent performs well for typical question-answering scenarios where responses can be derived from a subset of documents. For example, “How much annual leave can I take?” correctly returns answers sourced from multiple relevant policies.

Issue

When the question requires the agent to evaluate all documents individually, the results are incomplete. Example prompt: “Review each policy document and return the review date.” In this scenario:
- The agent only processes the first ~10 documents.
- It then stops, without indicating that the response is partial or that a limit has been reached.
- The remaining documents in the library are not evaluated.

During a recent Microsoft-led course, we were advised that this behaviour is expected due to platform limitations. Specifically: while the agent will reason over all documents to generate the most suitable response, it is not designed to self‑iterate across all items in a large knowledge source for individual document responses. Asking it to “review each document” effectively requires iteration, which is constrained. The suggested workaround was to:
- Create a trigger-based flow
- Implement a loop to process the documents in batches

We were able to make this approach work, but it feels like a heavy and brittle workaround for what seems like a common enterprise requirement.

We’ve Tried
- Both available SharePoint knowledge source connection methods
- Allowing sufficient time for indexing and refresh
- Rephrasing prompts to encourage broader coverage

None of these approaches changed the outcome; the agent consistently returns results for only the first subset of documents.
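For anyone hitting the same limit, the flow workaround boils down to the loop below: iterate the library in fixed-size batches and query the agent once per batch. This is an illustrative Python sketch (the agent call is stubbed out and all names are hypothetical), not the actual flow definition:

```python
def batches(items, size):
    """Yield fixed-size slices so each agent/flow call stays under the
    per-request document limit (~10 documents in our testing)."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def review_dates(documents, ask_agent, batch_size=10):
    """Collect one answer per document by looping over batches.
    ask_agent stands in for the Copilot Studio / Power Automate call."""
    results = {}
    for batch in batches(documents, batch_size):
        results.update(ask_agent(batch))
    return results

# Stubbed example: pretend the agent returns a review date per document.
docs = [f"policy-{i}.docx" for i in range(25)]
fake_agent = lambda batch: {d: "2026-01-01" for d in batch}
print(len(review_dates(docs, fake_agent)))  # 25 -- all documents covered
```

The brittleness we mention comes from the outer loop living in a flow rather than the agent: batch size, retries, and merging of partial answers all become our responsibility instead of the platform's.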
Questions:
- Is this behaviour a documented or known limitation of Copilot Studio knowledge sources?
- Are there recommended design patterns for scenarios that require document-by-document evaluation at scale?
- Is there a more native or supported approach planned to avoid custom looping logic for this kind of use case?

Any guidance or confirmation would be appreciated. Thanks.

Automating the feedback collection process before reviews. Is it possible with Copilot?
Every quarter, our HR team manually emails people asking for peer feedback before reviews start. It's a huge time sink, and half the responses come in late. Someone mentioned Copilot agents might be able to handle this, like automatically reaching out to the right people, collecting responses, and organizing them. Has anyone tried building something like this? Or is there a simpler way to automate 360 feedback collection in M365?

Copilot pulling last week's 1:1 action items into this week's agenda, possible?
Spent 3 hours yesterday trying to get Copilot to pull open action items from last week's 1:1 OneNote page and turn them into this week's agenda. Gave up. Is this a prompt problem or just not possible with the current connectors?

Got Copilot to use goal progress when drafting review feedback?
So I've been trying to experiment with how I use Copilot for performance management, reviews, goals, etc. When I manually feed goals, feedback results, etc. into Copilot, it drafts pretty decent performance reviews. Now I'm curious: can I automatically feed these into Copilot from any HR platform with APIs or something, so it automatically views goal progress and drafts review feedback? I know if Viva Goals were still available, I would be able to do something along these lines. Anyone tried this?

Event-Driven Architectures for Agentic Systems: Building Responsive, Intelligent, and Scalable AI
As software systems evolve toward greater autonomy, the rise of agentic systems (systems composed of intelligent agents capable of making decisions and acting independently) has created new architectural challenges. Traditional request-response models often fall short when dealing with dynamic, real-time decision-making environments. This is where Event-Driven Architecture (EDA) becomes not just useful, but essential. https://dellenny.com/event-driven-architectures-for-agentic-systems-building-responsive-intelligent-and-scalable-ai/

Short survey: Feedback on Sensitivity Label Suggestions in Microsoft 365 Apps
Hi everyone, I’m looking to gather feedback on user experiences with Sensitivity Label suggestions in Microsoft 365 apps. This short survey aims to understand how label recommendations are working in practice and where improvements may be needed. Your responses will help identify common challenges and opportunities to make the label recommendation process more accurate, useful, and seamless for users. Survey link: Experience with Recommended Sensitivity Labels in Microsoft 365 – Fill out form. The survey takes around 3 minutes to complete. Your feedback will directly help us better understand real-world experiences with label suggestions. Thank you very much for taking the time to contribute.

M365 Copilot Agents – Credit Consumption Clarification
Scenario 1: A user with an M365 Copilot license creates an agent inside the M365 Copilot app and uses it himself. However, it still consumes Copilot credits from the Copilot capacity pack in Power Platform. When does this happen? Which actions trigger credit usage? As per the documentation, this should not consume credits: https://learn.microsoft.com/en-us/microsoft-copilot-studio/requirements-messages-management#copilot-credits-billing-rates

Scenario 2: A user with an M365 Copilot license creates an agent inside the M365 Copilot app and shares it with a user who does not have a Copilot license. Will all interactions from the unlicensed user consume credits?

I'm trying to clearly understand the boundary between included usage vs. metered (credit-based) usage.
Events
Recent Blogs
- Enable agent‑to‑agent collaboration using Copilot’s intelligence layer—secure, grounded, and developer‑ready. (Apr 30, 2026)
- Welcome to the Apr 2026 edition of What's New in Microsoft 365 Copilot! (Apr 30, 2026)