Forum Discussion
Bing Chat / Copilot "Invents" fake "facts" - "I don't know" should be a valid answer!
I (and others) have clearly worked too hard, and with uneven (unhinged?) effectiveness, at getting this thing to express itself poetically. That was... something. WHAT, I'm not sure. A little over-the-top, there, Bingbot. 🤣
We all wish you'd just learn to say, "I don't know." You, and Humanity, would survive longer.
Sorry everyone!
I don't have much confidence in Bing AI because it pulls from multiple sources, so it's normal for it to write weird responses 🙂
Its answers should always be verified by human intelligence.
- HollyJ2022 · Mar 07, 2024 · Copper Contributor
And why are you sorry? Did you develop it? 😉
- Deleted · Mar 07, 2024
I apologized because, for me, AI bugs are a matter of course!
Like everyone else, I'd like this Microsoft tool to work reliably, but I believe that for technical and information-security reasons, improvements will always be needed!
This is the first year of Bing AI, we'll see what changes in a year.
- HollyJ2022 · Mar 07, 2024 · Copper Contributor
My concern is the number of people out there who don't see it as an unreliable toy--er, tool--that we are all "training" somewhat badly at times, and who can be easily deceived by it. I'm a writer, but I am MUCH more worried about credible fraud, deep fakes (image, video, and audio especially), cybersecurity, and corporate cost-cutting using AI (think of a Boeing or nuclear-facility software update "aided" by AI - we're all gonna die).
- HollyJ2022 · Mar 07, 2024 · Copper Contributor
We should all use primary sources or verify information via multiple sources, so I'm not sure using multiple sources is a flaw.
The problem with "should be verified by human intelligence" is that garbage sources are proliferating rapidly precisely because of people's irresponsible use of AI in the first place.