SharePoint Agents AMA
Thursday, Jan 09, 2025, 09:00 AM PST

Event details
Join us for an exclusive Ask Me Anything (AMA) session focused on Agents in SharePoint on Thursday, January 9th, from 9:00 AM to 10:00 AM PT.
This session will cover pivotal topics such...
Sarah_Gilbert
Updated Jan 06, 2025
jose_mena
Jan 09, 2025 · Copper Contributor
When developing a RAG system on your own, many things can go wrong: file formats that include images or tables, scanned pages that prevent clean parsing of the information, errors in retrieval, errors in generating the responses, mismatches between the level of detail users expect and what the agent returns, and hallucinations.
How do you deal with these difficulties? How do you enable the owner of the site to evaluate the agent's performance? How can she curate the documentation that is exposed through the agent so the responses are properly grounded?
- cjtan · Jan 09, 2025
Microsoft
We are continuously improving the quality of responses from SharePoint agents based on how customers are using them and on the feedback given for responses. Be sure to use the thumbs up/down buttons in the message response window when you see something that works or doesn't work as expected. As an administrator, you can also view the information from this feedback in the Microsoft Admin Center.
Because SharePoint agents have a configuration for "Instructions", you can direct the agent more specifically to make its responses more predictable; for example, instructions that tell the agent not to use its latent knowledge and to answer only from the selected sources. You can also use the File Statistics on the .agent file to understand whether the agent is being used.
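For illustration, a hypothetical "Instructions" entry along those lines might read as follows (the wording is an assumption for this example, not Microsoft's recommended phrasing):

```text
Answer only from the files and sites selected for this agent.
If the answer is not found in those sources, say you don't know
rather than drawing on general knowledge.
Cite the source document for every factual statement.
```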
- jose_mena · Jan 09, 2025 · Copper Contributor
Thanks for the reply. So to understand how the agent is performing, I have to look at the feedback provided by users via the thumbs up/down buttons. Do you plan to add other metrics for evaluating performance offline, such as metrics for the faithfulness or groundedness of the responses against the documents, or the relevancy of the responses to the questions posed?
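For context on what such offline metrics could look like, here is a minimal Python sketch using token-overlap proxies for groundedness and relevancy. The function names and scoring are illustrative assumptions, not part of any SharePoint tooling; production evaluations usually rely on an LLM judge or an NLI model rather than raw overlap:

```python
# Naive offline RAG evaluation proxies (illustrative only):
# - groundedness: is the answer supported by the retrieved passages?
# - relevancy: does the answer address the question?

def _tokens(text: str) -> set[str]:
    """Lowercased word tokens with surrounding punctuation stripped."""
    return {t.strip(".,!?;:").lower() for t in text.split() if t}

def groundedness(answer: str, source_passages: list[str]) -> float:
    """Fraction of answer tokens that appear somewhere in the sources."""
    answer_toks = _tokens(answer)
    source_toks = set().union(*(_tokens(p) for p in source_passages))
    return len(answer_toks & source_toks) / max(len(answer_toks), 1)

def relevancy(answer: str, question: str) -> float:
    """Fraction of question tokens echoed in the answer (crude proxy)."""
    question_toks = _tokens(question)
    return len(question_toks & _tokens(answer)) / max(len(question_toks), 1)

# Example usage on a hypothetical response/passage pair
passages = ["SharePoint agents answer from the files you select."]
score = groundedness("Agents answer from selected files.", passages)
```

Run over a batch of logged question/answer/passage triples, such scores can be aggregated per agent to spot ungrounded or off-topic responses without waiting for thumbs up/down feedback.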