Forum Discussion
Copilot chat: Does (dis)like expose data?
We have some questions/concerns from customers about the (dis)like functionality in Copilot Chat.
The main question is: "What happens with the data/chat when this functionality is used, and can we turn it off?"
Let's break it down into some sub-questions:
1.) What happens when you use this? Does it train the model? Does it adjust its responses to the user?
2.) MS will use it as feedback. In what way? Do they read the chat and/or its content to analyze the issue?
3.) Is it possible to disable this functionality for all users in the tenant? (Copilot Studio allows this for a custom agent).
4.) Is the behavior different for licensed and non-licensed users? (work vs web)
This is mainly a security concern, because some customers don't allow data to leave the tenant or don't want to risk accidental data leaks.
2 Replies
- RichAICopper Contributor
Hi Yarrick,
Here's my POV!
For 1 & 2: When a user clicks the like/dislike button, it is captured as feedback telemetry, which does not directly train the foundation model. Microsoft uses it to improve response quality over time and to identify problematic responses. When the feedback is submitted, it can include conversation context (metadata such as the conversation ID).
3. In custom agents built with Microsoft Copilot Studio, you can turn user feedback off: open Copilot Studio, select the agent, go to Settings -> Generative AI -> User feedback, and switch off the toggle next to "Collect user reactions to agent messages". This can be one approach to ensure compliance.
4. For licensed users, the feedback is handled under enterprise-grade protections, while for unlicensed users the data may be used for service improvement.
Note: You can monitor and reduce the risk of sensitive data exposure by using Data Loss Prevention (DLP) policies in Microsoft Purview. However, DLP does not completely disable the functionality at the tenant level; it works as a governance and protection layer.
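As a rough sketch of that governance layer: DLP policies can be created with the Security & Compliance PowerShell cmdlets (`New-DlpCompliancePolicy` and `New-DlpComplianceRule`). The policy name, rule name, location scope, and sensitive-information type below are illustrative assumptions, not a tested or recommended tenant configuration; adapt them to your own compliance requirements.

```powershell
# Connect to Security & Compliance PowerShell
# (requires the ExchangeOnlineManagement module and admin credentials).
Connect-IPPSSession

# Create a DLP policy; "Copilot-SensitiveData-Guard" is a placeholder name,
# and the Exchange-wide scope here is only an example.
New-DlpCompliancePolicy -Name "Copilot-SensitiveData-Guard" `
    -Mode Enable `
    -ExchangeLocation All

# Add a rule to the policy that acts on content containing a built-in
# sensitive information type (credit card numbers used as an example).
New-DlpComplianceRule -Name "Block-CreditCard" `
    -Policy "Copilot-SensitiveData-Guard" `
    -ContentContainsSensitiveInformation @{Name = "Credit Card Number"} `
    -BlockAccess $true
```

Again, this only governs sensitive content; it does not switch off the like/dislike feedback feature itself.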