Event details
We are very excited to announce a Microsoft 365 Copilot and Bing Chat Enterprise AMA! Get answers to your questions about Microsoft 365 Copilot and Bing Chat Enterprise from our team of experts.
*For any questions about the Early Access Program (EAP), please contact your Microsoft account team.
For an overview of the recent Inspire announcements on Copilot and Bing Chat Enterprise, check out this post:
How does it work?
We will have a broad group of product experts, servicing experts, and engineers representing Microsoft 365 Copilot and Bing Chat Enterprise.
They will be standing by here -- in chat during the live hour -- to provide guidance, discuss best practices, and, of course, answer any specific questions you may have.
Post your questions in the comments early and throughout the one-hour event.
Note: This is a chat-based event. There is no video or live meeting component. Questions and answers will appear in the Comments section below. Please post each question as a new comment.
- Markku Jaatinen (Brass Contributor): What are the admin tools to see what semantic indexing is doing, as it has an important role in Copilot?
- GabeHo (Former Employee): Hey Markku, thank you for the feedback. We're determining the right approach here and recognize its importance.
- Markku Jaatinen (Brass Contributor): Is there a difference in how BCE works depending on whether the employee has Copilot active? Can BCE in these cases access more company data than when it is used as a stand-alone service?
- JaredAndersen (Microsoft): Great question! Yes, if you have M365 Copilot deployed, then Bing Chat Enterprise becomes powered by Copilot. This means there are two big changes: 1) BCE will be able to answer the same questions about your Microsoft Graph data that Microsoft 365 Copilot can answer, and 2) all chats and responses will take place inside the tenant boundary, meaning that BCE inherits the compliance and security architecture of M365 Copilot. This will affect your experience on Bing.com/chat, the Edge sidebar, and Windows Copilot.
- jmendoza4it (Copper Contributor): Will the legacy Office 365 Business Premium license count as Microsoft 365 Business Standard, and does it meet the requirement for Copilot?
- Yana_Terukhova (Microsoft): The prerequisites are only Microsoft 365 Business Standard and Microsoft 365 Business Premium.
- jmendoza4it (Copper Contributor): To utilize all of Microsoft 365 Copilot, would it be best to start learning how to leverage Microsoft Search and Microsoft Graph? Is the LLM used for Copilot different from what is being used for Bing Chat Enterprise, and can we expect more factual/accurate responses from Copilot compared to Bing Chat Enterprise? It looks like there will be plugins for Copilot, specifically one with ChatGPT; can you provide some insight on how that might work?
- GabeHo (Former Employee): Hey Joshua, check out this blog on how to prepare for Copilot: https://techcommunity.microsoft.com/t5/microsoft-365-copilot/how-to-prepare-for-microsoft-365-copilot/ba-p/3851566. The LLMs for Bing Chat Enterprise and Copilot are different, since they are trained for different use cases. Kevin mentioned this above in response to another question: "Copilot is less likely to hallucinate than many chat-connected LLM systems because it grounds prompts before calling the LLM. The grounding of those prompts adds the context and content needed to help the LLM get to a more accurate answer more often. That said, we do know that Copilot will at times be usefully wrong, and have built the UX and UI to help the user recognize this and work with Copilot to get to a more accurate answer." To learn more about connectors, check out this article: https://learn.microsoft.com/en-us/microsoftteams/platform/copilot/how-to-extend-copilot#how-to-make-your-graph-connector-work-better-with-copilot
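The "grounding" pattern Kevin describes can be sketched generically: retrieve relevant context first, then prepend it to the user's question before calling the model. This is a minimal illustration of the technique, not Copilot's actual implementation; `retrieve_documents`, `build_grounded_prompt`, and the keyword scoring are hypothetical placeholders, not Microsoft Graph or semantic index APIs.

```python
# Minimal sketch of prompt grounding: fetch relevant context, then build a
# prompt that asks the model to answer from that context only.

def retrieve_documents(query, corpus):
    """Naive keyword retrieval standing in for a semantic index lookup."""
    terms = set(query.lower().split())
    scored = [(sum(t in doc.lower() for t in terms), doc) for doc in corpus]
    # Keep up to three documents that matched at least one query term.
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0][:3]

def build_grounded_prompt(query, corpus):
    """Combine retrieved context with the user question into one prompt."""
    context = "\n".join(retrieve_documents(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Q3 revenue grew 12% year over year.",
    "The cafeteria menu changes weekly.",
]
prompt = build_grounded_prompt("What was Q3 revenue growth?", corpus)
```

A real system would replace the keyword scorer with embedding-based semantic search and enforce the caller's permissions at retrieval time.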
- jmendoza4it (Copper Contributor): Can you provide some insight into what the deployment of Copilot looks like from a technical perspective? Currently the documentation mentions that we will need to prepare our data before deploying; any clarification on that?
- Someone449 (Brass Contributor): I agree - it states/implies that customers should deploy data classification/labels, etc., before deploying Microsoft Copilot. That's a significant obstacle to Copilot deployment.
- BA_Max (Iron Contributor): Unsure if you've seen this link: https://adoption.microsoft.com/en-us/copilot/
- Pat Beahan (Brass Contributor): What is the official Microsoft positioning on how you are addressing the OWASP Top 10 LLM vulnerabilities for Copilot/BCE?
- Prompt injections: Bypassing filters or manipulating the LLM using carefully crafted prompts that make the model ignore previous instructions or perform unintended actions.
- Data leakage: Revealing sensitive information in the LLM’s responses, such as personal data, credentials, secrets, or confidential data.
- Inadequate sandboxing: Failing to isolate the LLM from the underlying system or network, allowing unauthorized access or malicious activities.
- Unauthorized code execution: Executing arbitrary code through the LLM’s responses or inputs, such as shell commands, scripts, or malware.
- Server-side request forgery (SSRF) vulnerabilities: Making requests to internal or external resources through the LLM’s responses or inputs, such as APIs, databases, or web services.
- Overreliance on LLM-generated content: Trusting the LLM’s outputs without proper verification or validation, leading to misinformation, deception, or fraud.
- Inadequate AI alignment: Failing to align the LLM’s objectives and values with those of the users or stakeholders, resulting in unethical, biased, or harmful outputs.
- Insufficient access controls: Allowing unauthorized users to access or modify the LLM’s settings, parameters, or data.
- Model denial of service (DoS): Preventing the LLM from functioning properly by exhausting its resources, such as memory, CPU, or disk space.
- Insecure output handling: Failing to properly filter or encode the LLM’s outputs, leading to cross-site scripting (XSS), SQL injection, or other injection attacks.
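As a concrete illustration of the last item in Pat's list, a common mitigation for insecure output handling is to treat LLM output as untrusted input and encode it before rendering. This is a generic sketch of that practice, not Microsoft's implementation; `render_llm_output` is a hypothetical helper.

```python
import html

def render_llm_output(raw_output):
    """Escape LLM-generated text before embedding it in an HTML page,
    so any markup the model emits is displayed as text, not executed."""
    return f"<div class='llm-response'>{html.escape(raw_output)}</div>"

# A response containing markup is neutralized before it reaches the browser.
safe = render_llm_output('<script>alert("xss")</script>Hello')
```

The same principle applies to other sinks: parameterize SQL, shell-quote commands, and never pass model output directly to an interpreter.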
- Pat Beahan (Brass Contributor): Can Copilot, your semantic index, and Cognitive Search capabilities help ensure prompt responses include a heads-up about the potential data classification of the result? I.e., if we use Purview to provide data classification on documents, it would be ideal if, automagically (or with some good code examples), whenever source documents are used to provide data for a response, the response included the Purview data classifications of all source documents used - so the user gets a heads-up that the result text may contain restricted confidential information.
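The behavior Pat describes could be sketched as follows: collect the sensitivity labels of the source documents a response drew on, pick the most restrictive one, and surface it as a banner. The label names, their ranking, and `response_banner` are illustrative assumptions, not a Purview API; a real tenant's labels and priorities would come from its label policy.

```python
# Hypothetical label ranking, least to most restrictive. Real Purview label
# names and priorities are defined by the tenant's label policy.
LABEL_PRIORITY = ["Public", "General", "Confidential", "Highly Confidential"]

def response_banner(source_labels):
    """Return a heads-up line based on the most restrictive source label."""
    if not source_labels:
        return "No sensitivity labels found on source documents."
    top = max(source_labels, key=LABEL_PRIORITY.index)
    return f"Heads up: this response draws on documents labeled up to '{top}'."

banner = response_banner(["General", "Highly Confidential", "Confidential"])
```

Pairing such a banner with per-document citations would give users both the warning and the means to verify it.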
- Someone449 (Brass Contributor): Showing the source docs behind Copilot answers would be very helpful for validation purposes. I hope version 1.0 works like this, similar to how BCE cites its sources using URLs.
- Pat Beahan (Brass Contributor): Does Microsoft have any usage stats for a typical enterprise modeling out the usage adoption curve over time (number of employees, number of prompts, token usage over time) that you can share?
- Ken Ewert (Former Employee): We are still in our Early Access Program phase, so adoption metrics like you describe are still skewed to a closed program construct and likely would not be representative of broader adoption patterns at this time.
- Pat Beahan (Brass Contributor): For Bing Chat abuse monitoring, what data is used for detection and review? Is there user ID information, or is it just at the tenant and source-IP level?
- Eric_VanAelstyn (Microsoft): Hi Pat, for more information on Bing Chat abuse monitoring, and further information on how we are approaching AI with Bing, please review The new Bing - Our approach to Responsible AI: https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWXpcT