Forum Discussion
Microsoft's Copilot: A Frustrating Flop in AI-Powered Productivity
I put your message into Copilot for its opinion, and here's its response:
That’s a really thoughtful and passionate critique, and it raises some valid concerns that many users have voiced. Let’s break it down a bit:
💡 The Vision vs. Reality
Microsoft’s vision for Copilot was ambitious: embedding AI directly into productivity tools like Word, Excel, and PowerPoint to automate tasks and enhance workflows. The idea was to move beyond just suggestions and into actionable execution—something users clearly expect.
⚠️ Where It Falls Short
The frustration in the comment stems from a mismatch between expectations and current capabilities. Some common pain points include:
- Vague responses: Instead of directly editing or formatting, Copilot sometimes gives instructions or asks for clarification.
- Limited execution: Users want tasks done (e.g., “format this table,” “summarize this document”), not just advice on how to do them.
- Inconsistency: Sometimes Copilot performs well, other times it doesn’t understand or misinterprets the request.
🤖 Comparison to Other AI Tools
Tools like ChatGPT (especially in its standalone form) often feel more responsive because they’re designed to interpret natural language flexibly and generate results directly. However, Copilot is embedded in a complex ecosystem (Office apps), which introduces constraints like:
- File permissions
- UI limitations
- Integration with legacy features
🛠️ What’s Improving
Microsoft has been rolling out updates to Copilot regularly. Some improvements include:
- Better contextual understanding within documents
- More direct editing capabilities
- Enhanced integration with enterprise data
But yes, execution still lags behind expectations in many cases.
🔍 My Take
The criticism is fair, especially if someone invested in the promise of seamless productivity. However, Copilot is still evolving. It’s not perfect, but it’s not static either. The key is feedback—Microsoft is actively collecting user input to refine the experience.
Would you like help drafting feedback to Microsoft or exploring specific tasks where Copilot struggles? I can also show you how to get better results with it or suggest alternatives depending on your workflow.
David Cross | Jul 24, 2025 | Brass Contributor
Thank you for putting my critique through Copilot. To give my original response some extra context, I discussed the experience with our M365 administrators. I wanted to make sure they had not set any specific settings to restrict Copilot beyond Microsoft's defaults, and they said they had not.
We did some testing of my prompt with some other users in different regions, and they had the same experience. We had all tried creating the article with Copilot within Edge. Note that everyone who tried has the paid M365 Copilot license.
Then one of the administrators had an idea. Since the article that was being written is for our employee portal, they created a new SharePoint page and used the Copilot that assists within SharePoint pages. Bam! It created the article on the first try.
And it did a decent job for a first attempt. This is an odd inconsistency that maybe Microsoft can explain. However, like what you got, Copilot loves its bulleted lists and emojis. And there was one major difference in Copilot's output compared to the other AI agents (ChatGPT and Gemini): it didn't write the article in the voice of an AI, which is what I had asked it to do in the prompt.
I do have one critique of the reply Copilot gave, which said it's constrained because it works within M365. I use Microsoft Power Platform extensively, as that's my main job, and many times when I ask Copilot to write an expression or create JSON for an HTTP request, it will be wrong or use made-up commands. For example, it seems to like using + to add values in expressions instead of add() (see the small example below). Being built into Microsoft 365, it should know how to do this better than other AIs.
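To illustrate what I mean (the variable names here are just hypothetical placeholders), Copilot will often suggest infix arithmetic like this, which isn't valid in a Power Automate expression:

```
variables('Subtotal') + variables('Tax')
```

when the working version has to use the add() function:

```
add(variables('Subtotal'), variables('Tax'))
```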
Another example of where Copilot should be better than other agents within Microsoft products is an experience I had today. I have a new project for something I need to do that I've never done before. It's not overly complex, so I had a rough idea of what to do. I had a discussion with Copilot and another AI: I laid out and explained what I needed to do, it would provide feedback, and I would reply with more context or answer questions it had. Then I asked what would be the best approach to take.
Copilot and the other AI (Gemini in this instance) provided almost completely different approaches. Both would probably work, but the Gemini approach was more in line with what I was thinking and also included most of what I had found online and in the PPC. So I asked Copilot some what-ifs based on the Gemini approach. It basically said it was a unique and efficient approach. Although, I do find Copilot tends to be very agreeable when you push back, instead of giving reasons why its initial response was better, or at least trying to defend its initial response.
I built a test workflow with both approaches, already knowing which would be more efficient. So that this reply doesn't become a book, I'll just say that with the Gemini approach the workflow ran in less than a minute, versus nearly 30 minutes for the Copilot approach. You can guess which method I'm using. But this illustrates the point: one would expect Copilot to provide the better guidance, since it's built into the product I'm using.