Forum Discussion
Why is Co-Pilot giving wrong information?
Hi mate,
I think you should first focus on understanding how an LLM operates.
If the knowledge is not already "inside" its mind (i.e. part of its pre-trained data), it tries to find information with a quick web search (similar to a Google/Bing search), BUT most of the time it cannot validate that information, so it "creates" its own data instead! (That's why the technology is called generative AI ... it generates data.)
What you are describing is clearly a hallucination problem, and you are probably querying an "older" version of the model (GPT-4, or earlier???). To make the idea concrete, here is a toy sketch below.
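The little Python snippet below is only my own simplification of that flow, not how Copilot is actually built; the fact names and helper functions are made up for illustration. It just shows the three outcomes described above: a grounded answer from "pre-trained" knowledge, an unvalidated search result, or purely generated text (where hallucinations come from).

```python
# Toy sketch (my simplification, NOT Copilot's real implementation):
# an answer comes from pre-trained knowledge, from an unvalidated
# search step, or it is simply generated text.

PRETRAINED_FACTS = {
    "capital of france": "Paris",  # knowledge already "inside" the model
}

def quick_web_search(question):
    """Stand-in for a quick Google/Bing-style search; returns raw snippets."""
    return []  # pretend nothing useful was found for this question

def answer(question):
    key = question.lower().rstrip("?").strip()
    if key in PRETRAINED_FACTS:
        return PRETRAINED_FACTS[key]        # grounded in pre-trained data
    snippets = quick_web_search(question)
    if snippets:
        return snippets[0]                  # search result, not validated
    # Nothing found: the system still produces fluent text -> hallucination risk
    return f"(generated, unverified) Here is what I believe about: {question}"

print(answer("Capital of France?"))      # -> "Paris"
print(answer("Who won the 2031 cup?"))   # -> generated, possibly wrong
```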
So, are you using the "free" Copilot (with a personal Microsoft account), the "free" corporate Copilot (with a corporate Microsoft 365 account), or a "paid" Copilot license such as Microsoft 365 Copilot??
Each of those three solutions, even though they "share" the same name, "Copilot", uses different techniques to reply to you.
I ran the exact same prompt in Copilot (Microsoft 365 license), and I got a "correct" answer, as you can see below.
1st prompt
...
2nd prompt
....
I hope this helps you understand the difference between LLM solutions, even from the same company! :)
Regards,
Panos