Copilot Studio Knowledge Source Limitation When Iterating Over Multiple SharePoint Documents
Hi, I’m looking for clarification on a limitation we’re currently encountering in Copilot Studio that is blocking some of our use cases.

**Example Scenario (Policy Agent)**

We have a SharePoint document library containing ~100 policy documents, and a Copilot Studio agent configured with this library as a knowledge source. The agent performs well for typical question-answering scenarios where the answer can be derived from a subset of documents. For example, “How much annual leave can I take?” correctly returns answers sourced from the relevant policies.

**Issue**

When a question requires the agent to evaluate every document individually, the results are incomplete. Example prompt: “Review each policy document and return the review date.” In this scenario:

- The agent only processes the first ~10 documents.
- It then stops, without indicating that the response is partial or that a limit has been reached.
- The remaining documents in the library are not evaluated.

During a recent Microsoft-led course, we were advised that this behaviour is expected due to platform limitations. Specifically: while the agent will reason over all documents to generate the most suitable response, it is not designed to self-iterate across every item in a large knowledge source and produce a per-document answer. Asking it to “review each document” effectively requires iteration, which is constrained.

The suggested workaround was to:

- Create a trigger-based flow
- Implement a loop to process the documents in batches

We were able to make this approach work, but it feels like a heavy and brittle workaround for what seems like a common enterprise requirement.

**We’ve Tried**

- Both available SharePoint knowledge source connection methods
- Allowing sufficient time for indexing and refresh
- Rephrasing prompts to encourage broader coverage

None of these changed the outcome: the agent consistently returns results for only the first subset of documents.
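For anyone facing the same issue, the batching workaround described above can be sketched as follows. This is a minimal illustration of the pattern only, not the actual Power Automate flow: `list_documents` and `get_review_date` are hypothetical placeholders standing in for the SharePoint connector action and the per-document agent/AI call, and the batch size of 10 is an assumption based on the cutoff we observed.

```python
from typing import Iterable, List


BATCH_SIZE = 10  # assumed limit per pass; tune to what your flow tolerates


def chunked(items: List[str], size: int) -> Iterable[List[str]]:
    """Yield successive fixed-size batches from a list of document names."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


def review_all(documents: List[str]) -> dict:
    """Iterate over explicit batches in the flow, instead of asking the
    agent to self-iterate across the whole library in a single prompt."""
    results = {}
    for batch in chunked(documents, BATCH_SIZE):
        for doc in batch:
            # In the real flow, this is where the per-document call happens
            # (e.g. a connector action or a scoped prompt for one document).
            results[doc] = f"<review date for {doc}>"  # placeholder value
    return results
```

The key design point is that the loop lives in the flow, which has no per-request document cap, while each inner call stays small enough that the agent answers completely.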
**Questions**

1. Is this behaviour a documented or known limitation of Copilot Studio knowledge sources?
2. Are there recommended design patterns for scenarios that require document-by-document evaluation at scale?
3. Is a more native or supported approach planned that would avoid custom looping logic for this kind of use case?

Any guidance or confirmation would be appreciated. Thanks.