Mar 25 2024 04:07 PM
I've been exploring Microsoft Copilot for the last few months, aiming to integrate it into our workflow for its purported advanced AI capabilities, supposedly akin to what we've seen with ChatGPT. However, my experience has been less than satisfactory, raising several concerns about its current effectiveness and readiness for "prime time" use in professional environments.
Firstly, the response times and the overall fluidity of interaction with Copilot have been notably slower and more cumbersome than what I've experienced with ChatGPT. This difference is stark, considering both products are said to leverage the same underlying OpenAI platform.
One of the fundamental issues I've encountered with Copilot is its inability to accurately perform simple tasks that I would expect to be straightforward, such as adding up columns in Excel correctly. These are not complex queries but basic functionalities that significantly impact daily productivity.
Furthermore, when attempting to utilize Copilot to pull information from tagged documents, the results have been consistently disappointing. Either it fails to retrieve the needed information or, worse, provides incorrect answers. This reliability issue is a significant barrier to trust and dependency on the tool for professional purposes.
In contrast, our ChatGPT Team license has demonstrated more robust performance. Whether it's querying document information or needing swift, accurate responses, ChatGPT consistently delivers a superior experience. The difference in speed and reliability is night and day, making me question the current value proposition of onboarding more users to Copilot.
Given these challenges, I'm reaching out to this community to hear your thoughts and experiences:
Is it just me noticing these things, or have others run into the same issues?
Mar 29 2024 05:05 PM - edited Mar 29 2024 05:07 PM
Following my previous concerns with Microsoft Copilot, a recent article from Business Insider caught my eye, revealing a familiar narrative. Users have echoed complaints about Copilot's performance issues, comparing it unfavorably with ChatGPT. Surprisingly, Microsoft's response suggests a gap in user knowledge rather than addressing the tool's capabilities.
This rationale strikes me as rather perplexing. My experiences, alongside feedback from many of you, suggest that the issues with Copilot extend beyond user error. Whether it's the version of the Office product (I have tried multiple versions) or the precision of our prompts (I think we IT folk know how to write a prompt), the outcomes are consistently disappointing: slow responses and inaccuracies that can't simply be chalked up to a learning curve. Tagging reference materials clearly does little to improve reliability or accuracy, undermining the tool's supposed intelligence and utility.
To suggest user incompetence rather than acknowledging the product's current limitations is disheartening. It reminds me vividly of past promises around Teams calling capabilities, which purportedly reached parity with Skype for Business – a claim that, for many, fell short in practice.
Let's not repeat history by undermining valid user feedback. The issues with Copilot seem less about how we're using it and more about the product not living up to its potential. Insinuating that the problem lies with user aptitude not only misses the mark but also risks alienating those of us eager to embrace and integrate these AI advancements into our workflows.
I'm curious to hear your thoughts. Have you found ways to navigate these challenges with Copilot, or do you share the sentiment that the problem lies not in our usage but in the product itself?
Apr 19 2024 09:17 AM
@Roland Weathers I can confirm all of your concerns. Documents are hard to tag: they must be in OneDrive to be visible, and that visibility is delayed. But people often work with downloaded files that sit on a local drive. Still, that's a small annoyance compared to Copilot's inability to perform the substantial task of analyzing and comparing two or more documents. This is something both ChatGPT and Claude excel at, and it's key functionality for business applications. MS developers should either fix it or watch their product fade into oblivion.