Forum Discussion

leespringett
Copper Contributor
Apr 24, 2026

Copilot Studio Knowledge Source Limitation When Iterating Over Multiple SharePoint Documents

Hi,

I’m looking for clarification on a limitation we’re currently encountering in Copilot Studio that is blocking some of our use cases.

Example Scenario (Policy Agent)

  • We have a SharePoint document library containing ~100 policy documents.

  • A Copilot Studio agent is configured with this library as a knowledge source.

  • The agent performs well for typical question-answering scenarios where responses can be derived from a subset of documents.

    • For example: “How much annual leave can I take?” correctly returns answers sourced from multiple relevant policies.

Issue

When the question requires the agent to evaluate all documents individually, the results are incomplete.

Example prompt:

“Review each policy document and return the review date.”

In this scenario:

  • The agent only processes the first ~10 documents.

  • It then stops, without indicating that the response is partial or that a limit has been reached.

  • The remaining documents in the library are not evaluated.

During a recent Microsoft-led course, we were advised that this behaviour is expected due to platform limitations. Specifically:

  • While it will reason over all documents to generate the most suitable response, the agent is not designed to self‑iterate across all items in a large knowledge source when each item needs an individual response.

  • Asking it to “review each document” effectively requires iteration, which is constrained.

  • The suggested workaround was to:

    • Create a trigger-based flow

    • Implement a loop to process the documents in batches

We were able to make this approach work, but it feels like a heavy and brittle workaround for what seems like a common enterprise requirement.
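For clarity, the batching workaround described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual flow: `fetch_document_names()` and `extract_review_date()` are placeholder names standing in for whatever your trigger-based flow actually calls (e.g. SharePoint connector actions), and the batch size of 10 simply mirrors the subset size the agent handled on its own.

```python
# Hypothetical sketch of the batch-processing workaround.
# fetch_document_names() and extract_review_date() are placeholders
# for real SharePoint / agent calls in the actual flow.

BATCH_SIZE = 10  # roughly the subset the agent processed unaided

def fetch_document_names():
    # Placeholder: a real flow would enumerate the ~100 library items.
    return [f"policy-{i:03d}.docx" for i in range(100)]

def extract_review_date(name):
    # Placeholder: a real flow would ask the agent (or read the document)
    # for a single file's review date.
    return f"{name}: <review date>"

def review_all_documents():
    docs = fetch_document_names()
    results = []
    # Walk the library in fixed-size batches so no document is skipped.
    for start in range(0, len(docs), BATCH_SIZE):
        for doc in docs[start:start + BATCH_SIZE]:
            results.append(extract_review_date(doc))
    return results

print(len(review_all_documents()))  # all 100 documents are covered
```

The point of the loop is simply that coverage is guaranteed by the caller, rather than left to the agent's retrieval step, which is why it works but feels heavy for such a common requirement.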


We’ve Tried

  • Both available SharePoint knowledge source connection methods

  • Allowing sufficient time for indexing and refresh

  • Rephrasing prompts to encourage broader coverage

None of these approaches changed the outcome; the agent consistently returns results for only the first subset of documents.


Questions

  • Is this behaviour a documented or known limitation of Copilot Studio knowledge sources?

  • Are there recommended design patterns for scenarios that require document-by-document evaluation at scale?

  • Is there a more native or supported approach planned to avoid custom looping logic for this kind of use case?


Any guidance or confirmation would be appreciated.

Thanks.

3 Replies

  • Rajesh_Gurusamy
    Copper Contributor

    Hi leespringett​,

    I’ve run into this same issue. Instead of building a complex workaround, what I’d suggest is adding the 'Review Date' as a custom column directly in your SharePoint library. That way the review dates live in list metadata that the agent (or a simple list query) can read directly, rather than being buried inside each document.

    Thanks!

    By,
    Rajesh G

  • Tyler3412
    Copper Contributor

    This is a known limitation in Microsoft Copilot Studio: it’s optimized for retrieval, not full iteration across large SharePoint sets. So, instead of forcing it, use a hybrid pattern with Microsoft Power Automate to batch-process documents and store extracted metadata in a structured store such as Dataverse, then have the agent query that indexed data for complete results.
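Tyler3412's hybrid pattern can be sketched roughly like this. It is an illustrative stand-in only: a plain dict plays the role of the Dataverse table, and the function names are hypothetical, not real connector actions.

```python
# Hypothetical sketch of the hybrid pattern: an offline batch job extracts
# metadata into a structured store (a dict stands in for Dataverse here),
# and the agent answers "review each document" questions by querying that
# store, instead of iterating over raw documents at answer time.

metadata_store = {}  # stand-in for a Dataverse table

def batch_extract(documents):
    # Offline job (e.g. a Power Automate loop) populates the store once.
    for name, review_date in documents:
        metadata_store[name] = {"review_date": review_date}

def query_review_dates():
    # Agent-facing query: complete results, no per-document iteration
    # against the knowledge source at answer time.
    return {name: row["review_date"] for name, row in metadata_store.items()}

batch_extract([("leave-policy.docx", "2026-01-15"),
               ("travel-policy.docx", "2025-11-03")])
print(query_review_dates())
```

The design point is that the expensive iteration happens once, in the batch job, and every later question runs against already-structured data.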

  • leespringett
    Copper Contributor

    Just to add, this was back in December, so I'm aware there is more information available now, such as https://learn.microsoft.com/en-us/microsoft-copilot-studio/guidance/retrieval-augmented-generation, which essentially confirms the limitation. Still, I'm interested to hear others' experiences.


    Thanks