
pieterjbogaersgmailcom
Copper Contributor
Jul 30, 2025

Copilot voice denying my observations in the Edge window

🤖 Copilot in Edge: Why Is Voice Mode Less (or Not at All) Context-Aware?

I keep wondering: how can there be a difference between Copilot in voice mode and typed/manual mode when it comes to interpreting page content in Edge?

I regularly ask Copilot questions about what I’m viewing in the browser, and when using keyboard input, its interpretation is impressively accurate. It seems to understand the page’s context quite well, even when I don’t mention specific names or links.

Yet when I try the same thing via voice input, the response is oddly vague, almost as if it doesn't "know" what's on the screen at all. It categorically denies having any access to the page.

🧠 What's Going On Here?

  • Typed mode likely connects more directly to Edge’s page analysis tools, allowing Copilot to "read" tab contents intelligently.
  • Voice mode may treat questions more literally—when asked “What do you see?”, it interprets that as visual perception (which it obviously doesn’t have), rather than contextual page awareness.
  • It could also be a difference in how Edge routes context to Copilot depending on input method; perhaps voice input isn't wired into the same browsing-context API (see the sketch below).
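
Speculating along the lines of that last point, here is a minimal TypeScript sketch of what an input-dependent context pipeline could look like. Every name in it (CopilotRequest, extractPageText, buildTypedRequest, buildVoiceRequest) is hypothetical, not a real Edge or Copilot API; it only illustrates how the typed path could attach extracted page text while the voice path forwards the bare transcript.

```typescript
// HYPOTHETICAL sketch only: none of these names are real Edge or Copilot APIs.
// It illustrates the guess above: the typed path bundles extracted page text
// with the question, while the voice path sends the transcript alone.

interface CopilotRequest {
  query: string;
  pageContext?: string; // extracted tab content, when the pipeline attaches it
}

// Stand-in for Edge's page-analysis step (truncation limit is an assumption).
function extractPageText(): string {
  return document.body.innerText.slice(0, 8000);
}

// Typed path: question and page text travel together, so answers are
// context-aware.
function buildTypedRequest(query: string): CopilotRequest {
  return { query, pageContext: extractPageText() };
}

// Voice path: only the transcript is sent, so the model genuinely has no page
// context and can truthfully "deny" being able to read the tab.
function buildVoiceRequest(transcript: string): CopilotRequest {
  return { query: transcript };
}
```

If something like this is what's happening, the fix would be entirely on Microsoft's side: routing voice queries through the same context-attachment step as typed ones.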

💬 The Result?

Typed queries = rich, context-aware responses.
Voice queries = cautious, sometimes evasive replies, denying the ability to analyse the open webpage.

It’s a strange gap for a product that otherwise feels so seamlessly integrated. Anyone else noticed this?
