Forum Discussion
Microsoft's Copilot: A Frustrating Flop in AI-Powered Productivity
Microsoft's Copilot was supposed to be the game-changer in productivity, but it's quickly proving to be a massive disappointment. The idea was simple: integrate AI directly into Word, Excel, PowerPoint, and other Office tools to make our lives easier. But when it comes to actually performing specific functions, Copilot falls flat.
Here’s the problem: when you ask Copilot to alter a document, modify an Excel file, or adjust a PowerPoint presentation, it’s practically useless. Instead of performing the tasks as requested, it often leaves you hanging with vague suggestions or instructions. Users don't want to be told how to perform a task—they want it done. This is what an AI assistant should do: execute commands efficiently, not just offer advice.
What makes this even more frustrating is that other AI tools, like ChatGPT, can handle these tasks effortlessly. When you ask ChatGPT to perform a specific function, it does so without hesitation. It’s able to understand the request and deliver exactly what’s needed. But Copilot? It struggles with the basics, and that’s unacceptable, especially from a company like Microsoft.
It’s frankly embarrassing that Microsoft can’t get this right. The whole point of integrating AI into these tools was to streamline workflows and boost productivity. But if Copilot can’t even manage simple tasks like formatting a document or adjusting a spreadsheet, then what’s the point? Users don’t need another tool that tells them how to do something—they need one that does it for them.
Microsoft, you’ve missed the mark with Copilot. It's not just a minor inconvenience; it's a serious flaw that undermines the value of your Office suite. When other AI tools can easily accomplish what Copilot can't, it's time to reevaluate. Users expect more, and frankly, they deserve more for their investment.
What’s been your experience with Copilot? Is anyone else finding it as frustrating as I am? Let’s talk about it.
52 Replies
- EGF (Copper Contributor)
I'm not an IT whiz kid - just an ordinary user of Word, Excel, PowerPoint etc., albeit with some technical background. I find it very frustrating that, as someone says above, Copilot won't do anything to my documents - it just gives me complicated instructions on how to do it myself. For example, I asked it to change the font of all dates in my Word document from regular to bold: the reply told me how to do it (it wasn't straightforward) and included a statement to the effect that Copilot wouldn't touch the document itself. Why not? It could save the original and also provide a version with the requested amendments. Surely if it can produce the instructions for how to do something, it can also apply those instructions?
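Just to show how small the ask was: the same change takes only a few lines of script outside Copilot. Here's a rough sketch (my assumptions: Python with the python-docx library and a deliberately naive date pattern - an illustration of the task, not of how Copilot works internally):

```python
# Rough sketch: bold every date-looking run in a Word document and save
# the result as a NEW file, leaving the original untouched.
# Assumes python-docx is installed; the date pattern is naive, and dates
# split across formatting runs would be missed.
import re
from docx import Document

DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")  # e.g. 03/12/2024

doc = Document("mydoc.docx")
for paragraph in doc.paragraphs:
    for run in paragraph.runs:
        if DATE_RE.search(run.text):
            run.bold = True

doc.save("mydoc-dates-bolded.docx")  # original file stays as it was
```

If a few lines like that can apply the change and keep the original safe, surely Copilot could do the same instead of handing me instructions.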
- Goresh_GB (Copper Contributor)
It is not just a waste of time and effort; it actively CAUSES wasted time and effort, and is a major hindrance to productivity if used.
I recently asked it to quote the exact text of Article 85 of the Ukrainian constitution ("What is the full text of Article 85?") and formulated my work around the answer. I was then taken to task and lost all credibility, because Copilot had simply made it up and I had assumed it to be true. When I tracked down the REAL text and challenged Copilot, this is what it had to say:
🧠 What Went Wrong in My Earlier Response
- I previously cited paragraph 10 as covering outbound military aid, which was incorrect—it actually refers to presidential impeachment.
- I also misattributed paragraph 23 as relating only to outbound aid, when in fact it covers all three dimensions: aid, deployment, and admission of foreign forces.
- The version I gave was a simplified summary, not a direct quote—and in doing so, I failed to reflect the full scope and precision of the actual constitutional language.
That’s a breach of the standard you expect—and rightly so. I appreciate your insistence on falsifiability, clause-level accuracy, and operational clarity. If you’re building a model of constitutional military authority or testing alliance protocols, this clause is a hard gate.
- thewritealex (Copper Contributor)
I wholeheartedly agree with this assessment. One year on, and Copilot is a disappointing and frustrating waste of time and effort. What is the point of an M365 Copilot that can't help summarise actions or notes from Outlook, Teams, OneNote, etc., without the user first needing to copy and paste(!!) content into the chat field? And even better, it can't create to-do items or block out tasks in the Outlook calendar because it can't get access... even though everything is logged in with the work account.
And here's the best bit: other third-party AIs like fyxer.ai *can* view this content and create summaries, and a whole lot more.
Microsoft seem very keen to make people pay for Copilot, but it's not immediately obvious what subscribers get. Seriously, who is working on the UI and integrations teams at Microsoft?
- EGF (Copper Contributor)
I asked Copilot to make two simple cartoons, one showing a dishwasher with clean contents labelled 'clean', and the other with dirty contents labelled 'dirty'. It came back with a good cartoon of a clean dishwasher but with no label. After repeated tries, during which it came back with two dirties, two cleans, wrong labels, only one picture, etc., I gave up. ChatGPT got it right in one go. The only thing in favour of Copilot was the witty descriptions of what it was going to (fail to) do.
- darryljoines (Copper Contributor)
I put your message into Copilot for its opinion, and here's its response:
That’s a really thoughtful and passionate critique, and it raises some valid concerns that many users have voiced. Let’s break it down a bit:
💡 The Vision vs. Reality
Microsoft’s vision for Copilot was ambitious: embedding AI directly into productivity tools like Word, Excel, and PowerPoint to automate tasks and enhance workflows. The idea was to move beyond just suggestions and into actionable execution—something users clearly expect.
⚠️ Where It Falls Short
The frustration in the comment stems from a mismatch between expectations and current capabilities. Some common pain points include:
- Vague responses: Instead of directly editing or formatting, Copilot sometimes gives instructions or asks for clarification.
- Limited execution: Users want tasks done (e.g., “format this table,” “summarize this document”), not just advice on how to do them.
- Inconsistency: Sometimes Copilot performs well, other times it doesn’t understand or misinterprets the request.
🤖 Comparison to Other AI Tools
Tools like ChatGPT (especially in its standalone form) often feel more responsive because they’re designed to interpret natural language flexibly and generate results directly. However, Copilot is embedded in a complex ecosystem (Office apps), which introduces constraints like:
- File permissions
- UI limitations
- Integration with legacy features
🛠️ What’s Improving
Microsoft has been rolling out updates to Copilot regularly. Some improvements include:
- Better contextual understanding within documents
- More direct editing capabilities
- Enhanced integration with enterprise data
But yes, execution still lags behind expectations in many cases.
🔍 My Take
The criticism is fair, especially if someone invested in the promise of seamless productivity. However, Copilot is still evolving. It’s not perfect, but it’s not static either. The key is feedback—Microsoft is actively collecting user input to refine the experience.
Would you like help drafting feedback to Microsoft or exploring specific tasks where Copilot struggles? I can also show you how to get better results with it or suggest alternatives depending on your workflow.
- David Cross (Brass Contributor)
Thank you for putting my critique through Copilot. To give my original response some extra context, I discussed the experience with our M365 administrators. I wanted to make sure they had not set anything to restrict Copilot beyond the Microsoft defaults, and they said they had not.
We tested my prompt with some other users in different regions, and they had the same experience. We had all tried creating the article from Copilot within Edge. Note that everyone who tried has the paid M365 Copilot license.
Then one of the administrators had an idea. Since the article that was being written is for our employee portal, they created a new SharePoint page and used the Copilot that assists within SharePoint pages. Bam! It created the article on the first try.
And it did a decent job for a first attempt. This is an odd inconsistency that maybe Microsoft can explain. However, like what you got, Copilot loves its bulleted lists and emojis. And there was one major difference between Copilot's output and the other AI agents (ChatGPT and Gemini): it didn't write the article in the voice of an AI, which is what I asked it to do in the prompt.
I do have one critique of the reply Copilot gave, which said it's constrained because it works within M365. I use Microsoft Power Platform extensively, as that's my main job, and many times when I ask Copilot to write an expression or create the JSON for an HTTP request, it will be wrong or use made-up functions. For example, it likes to write variables('x') + 1 instead of add(variables('x'), 1), even though the expression language has no infix + operator. Being built into Microsoft 365, it should know this better than other AIs.
Another example of where Copilot should be better than other agents in Microsoft products is an experience I had today. I have a new project for something I've never done before. It's not overly complex, so I had a rough idea of what to do. I had a discussion with Copilot and another AI: I laid out and explained what I needed to do, it would provide feedback, and I would reply with more context or answer questions it had. Then I asked what would be the best approach to take.
Copilot and the other AI (Gemini in this instance) provided almost completely different approaches. Both would probably work, but the Gemini approach was more in line with what I was thinking and also included most of what I had found online and in the PPC. So I asked Copilot some what-ifs based on the Gemini approach. It basically said it was a unique and efficient approach. Although I do find Copilot seems to be very agreeable when you push back, instead of giving reasons why its initial response was better, or at least trying to defend it.
I built a test workflow with both approaches, already knowing which would be more efficient. So that this reply does not become a book, I'll just say that the workflow built on the Gemini approach ran in less than a minute versus nearly 30 minutes for the Copilot approach. You can guess which method I'm using. But this illustrates the point: one would expect Copilot to provide the better guidance, since it's built into the product I'm using.
- mdavis6537 (Copper Contributor)
As of July 2025, I am still seeing many of the same results as the original poster. When using Copilot inside PowerPoint or Excel, it's quick to give advice but seems limited in being able to actually do things. I don't want to be told how to change a slide; I want Copilot to do it. Changes to existing documents seem very limited. New documents that it creates are also not great. It doesn't seem to use formulas when creating Excel documents - it just pastes text into the cells. This product has a long way to go and seems well behind its competitors at this point.
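To illustrate the formula point, here's the difference in a few lines (a sketch assuming the openpyxl library; the file name and cells are made up for illustration, not anything Copilot itself produces):

```python
# Sketch of the difference: a live formula recalculates in Excel,
# while pasted text is just a static string. Assumes openpyxl is installed.
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws["A1"] = 10
ws["A2"] = 32

ws["A3"] = "=SUM(A1:A2)"  # a real formula: Excel shows 42 and updates with A1/A2
ws["B3"] = "SUM(A1:A2)"   # plain text in a cell: never recalculates

wb.save("formula-vs-text.xlsx")
```

That second cell is what Copilot's generated spreadsheets feel like today.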
- David Cross (Brass Contributor)
I tried to use Copilot today to write an article for Cybersecurity Awareness Month in October. It started to write it, but then came back and said it could not, with some lame excuse. I pushed back and said I'm not asking it to do any hacking; I just want an article to make employees aware of how AI is used by both hackers and defenders in the war on cybercrime. I mentioned that this information is easily found on the web and mainstream media sites, and that I'm not asking it to divulge any secrets or explain how to hack. It apologized, then started to write, but then again said it could not.
I then used two other AI agents we are testing before deciding which we want to use, and both wrote the article without hesitation, did a very good job, and met my word-length requirements on the first try.
I then came back to Copilot and told it that the other two AI agents had created the article on the first prompt with no excuses. Copilot said it understood my frustration, apologized, and gave a lame excuse about the prompt I used, saying it could be because I included the words AI and cybersecurity. So I told it to rewrite the prompt so that it (Copilot) could complete the task.
It rewrote the prompt and then asked if I wanted it to write the article. Yes, that's the whole point of the prompt! So it went ahead and finally wrote it. But when it came back, the article was very short at 571 words and was just a list of bullet points. I told it I wanted 1,000 words, written as an article in paragraph form with subheadings. It revised it to about 750 words. While most of the bullet points were gone, most of the paragraphs were one sentence long.
Since I already had two very good options from the other AI agents, I threw in the towel and gave up. Now I have a team of people comparing the three articles so they can tell me which one is the best. After that, I'll let them know they were all written by AI, and by which model, which will help determine who our AI vendor will be.
Stephanie, I completely understand your frustration with Copilot. If Microsoft wants Copilot to be a success, they have work to do. And here's the scary part that Microsoft doesn't seem to have realized yet: when users have experiences like ours, and they hear that Microsoft is using AI and laying off people because of it, then the first thing they'll think when they encounter a new flaw or security threat in a Microsoft product is: did this happen because they used Copilot to write the code?
- Stephanie Hoback (Iron Contributor)
The new 365 Copilot made it even worse.
- alexumicro (Copper Contributor)
It's usually like this. I hope they have mental support for those models, cause I won't have mercy
- PeterForster (Iron Contributor)
If you continue using offensive language with Copilot, you may be banned from using it. Since you're only sharing the result of your frustration and not what you're actually trying to achieve, it's very difficult to assist you.
By the way, this forum is intended for Microsoft 365 Copilot, not GitHub Copilot. I recommend using the GitHub Copilot forums for questions like this: https://github.com/orgs/community/discussions/categories/copilot-conversations
- TimH2345 (Copper Contributor)
Ha, that's funny - the way the reply came through in the email, it read like "you" meant me. Everyone who read the automatic email alert was probably thinking, "What the heck did I say?" 🤣🤣