
Microsoft Viva Blog

Research Drop: Fighting AI Slop with Meta-Cognition around AI

Megan_Benzing
Microsoft
Feb 10, 2026

Research Drop in Brief:

  • When employees aren’t intentional or thoughtful with their AI use, it drives quality issues and increases “workslop” – and 60% of employees report skipping accuracy checks.
  • Employees need to build strong meta-cognition skills, including knowledge about effective AI use and regulation of their AI use.
  • Meta‑cognition around AI strongly predicts perceived value, correlating with Realized Individual Value of AI (RIVA; r = .748) and Realized Team Value of AI (RTVA; r = .597) – when employees think critically about their AI approach and output, they get more value from using AI.
  • Strong levers for meta-cognition around AI aren’t necessarily technical: when leaders role model AI use (a 22-percentage-point boost in meta-cognition) and teams feel psychologically safe (a 16-percentage-point boost), employees engage more thoughtfully with AI.

 

Across industries, organizations are accelerating their investment in AI tools and encouraging employees to incorporate them into daily workflows. In some cases, AI use is becoming an explicit expectation – employees are encouraged, nudged, or even monitored to demonstrate consistent AI activity. In others, AI licenses are reallocated if usage is low, and team metrics increasingly reflect “AI integration.”

But “adoption for adoption’s sake” overlooks a critical factor: the quality of AI use matters as much as the quantity. AI use without intentional habits can degrade quality, erode trust, and even harm team performance.

We’re now seeing the rise of “workslop” – low-quality, AI-generated output produced quickly but without depth, accuracy, or care1. Not only does workslop undermine performance, it also erodes trust between colleagues and damages reputations. Concerningly, our research shows that 60% of employees skip accuracy checks when using AI, which accelerates the spread of AI slop.

Low intent in, low quality out. The remedy isn’t more adoption; it’s better habits and more disciplined practices. For this month’s Research Drop, we explore a concept often referenced in learning science but rarely applied to workplace AI: meta-cognition.

Thinking about thinking: Introducing meta-cognition around AI

Meta‑cognition, at its simplest, is the act of thinking about your own thinking.2 It’s the idea that someone can notice what they are doing and then regulate how they are doing it. In learning research, we see that meta-cognition helps people not only execute tasks but regulate how they execute them. It strengthens judgment, improves quality, and drives continuous improvement. Meta‑cognition is what turns a repeated behavior into a refined skill.

We adapted this theory and applied it to employees’ AI use, and a two-step meta-cognition around AI process emerged:

  • Knowledge of how to use AI
    • Acknowledge skill level
    • Understand methods (e.g., retrieval vs. synthesis vs. planning)
    • Choose the right approach for the task
  • Ongoing regulation of AI use
    • Select an appropriate plan (e.g., decompose task; choose tools)
    • Review output for relevance and accuracy
    • Evaluate success and iterate

Increased knowledge of how to use AI helps employees avoid defaulting to “generic prompting” and instead make intentional decisions about how AI should support the task at hand. Then regulating their AI use transforms AI from a passive response generator into an active collaborator.

Using these meta-cognition practices together, employees can evolve from being bystanders of AI use to becoming cognitive partners with AI, building judgment and skill as they go. This process roots AI use in intent rather than convenience.

When meta-cognition turns adoption into value

Our research found that meta-cognition around AI goes beyond a framework – it predicts meaningful differences in AI value.

Employees who are intentional about their AI use can leverage that reflection to refine and improve their output. In our research, we found that when employees know their “what” and their “how,” they are more likely to see value from using AI – beyond just time savings. There is a strong correlation between meta-cognition around AI and Realized Individual Value of AI (RIVA; r = .748).

This matters because organizations often focus on adoption metrics (e.g., logins, prompt counts, volume of use), when another driver of ROI lies in the quality of the interaction. High-intent users get better output, make fewer mistakes, and build durable skills over time.

And it’s not just individual value that is positively impacted – there is also a strong correlation between meta-cognition around AI and Realized Team Value of AI (RTVA; r = .597).

 

 

Teams with strong meta‑cognitive habits share better prompts, collaborate more effectively, and align more consistently on accuracy and quality standards. AI becomes a mechanism for collective improvement, not just personal productivity.

These good habits scale – and when they do, so can impact.

Leveraging culture to build meta-cognition muscles

The use of AI is rapidly growing, and employees need structures that help them navigate new expectations and evolving norms. It becomes increasingly important to make sure employees feel supported throughout the change, and leaders and managers play a powerful role in shaping that experience.

Leaders must role model intentional AI use

Many organizational changes are inherently social – as new tools, processes, or structures are rolled out, there is a socialization process that occurs. Formal socialization can take shape as corporate communications, guides, and documentation. But informal socialization often makes the most impact. Employees look to leaders, peers, and influential team members to understand not just what a change is, but how it should be done.

Yet when it comes to AI, one study found that only 41% of employees reported receiving encouragement from leadership – without detailed instructions or the contextual understanding of how to use AI in a meaningful way3. And our latest research shows that role modeling itself is inconsistent: 74% of leaders report seeing effective role modeling, compared to just 61% of managers and 49% of ICs. Leaders need to go beyond blanket encouragement to tangible role modeling to create that clarity and investment for their employees.

The impact of true role modeling is substantial: employees who observe leaders demonstrating effective AI use show a 22-percentage-point increase in meta-cognition around AI compared to those who don’t.

 

 

Without knowing the deeper what and how, employees are left on their own to figure out how to best implement AI into their work and what output standards should look like, which can result in improper usage and AI slop. However, leaders can be transparent and share:

  • what task they used AI for
  • why they chose that approach
  • how they evaluated output
  • what they adjusted

This gives employees a blueprint for building their own meta‑cognitive habits and approaches to leveraging AI at work. Role modeling can help create a consistent standard for output expectations – providing thoughtful guidance on how employees should think about weighing time savings, quality, and depth when working with AI. This iterative experience also helps leaders continue to build their own meta-cognition muscles, creating a learning cycle throughout the organization.

Psychological safety enables honest conversations about AI use

AI isn’t just a tool – it reshapes collaboration. When employees use AI to generate work, draft content, or prototype ideas, that output enters team workflows. If an employee receives low-effort, AI-generated work, their opinion of their colleague drops – including perceptions of that colleague’s reliability and trustworthiness1. And even if the work isn’t poor quality, one study found that 57% of employees admit to hiding their AI use at work4. When employees are not openly discussing their AI use, it prevents learning opportunities and cross-team upskilling. It can also widen knowledge gaps between employees, where some team members speed ahead in AI fluency and practice while others are left behind.

Psychological safety can help reverse this dynamic. A psychologically safe environment not only increases effective AI adoption, but also builds trust between colleagues. Teams can:

  • share their AI approaches
  • critique and refine outputs together
  • discuss missteps without fear
  • co‑develop higher‑quality workflows

Through these processes, AI becomes a shared learning process rather than an individual performance risk.

We found a 16-percentage-point difference in meta-cognition around AI between employees who feel psychologically safe at work and those who don’t.

 

 

When employees are open about their AI use, they can share best practices and teach each other to regulate and critically think about their use cases. They can also hold each other accountable to quality and avoid unintentionally sending AI slop downstream. AI transformation is a social process and requires safe team cultures that encourage sharing and iterating together.

When leaders and teams are intentional and transparent in their AI use, employees across the organization can build their meta-cognition skills and shift their perspective of AI from one of low-intent to one of high-intent.

 

 

AI will continue reshaping work, but the value coming from these tools will depend on the human habits we build behind them. Employees who think critically about their AI use generate better outcomes, help their teams develop stronger practices, and cultivate an AI culture of quality, security, and trust. Adoption is the first step, but intentionality is the multiplier.

 

Stay tuned for our March Research Drop to keep up with what the Microsoft People Science team is learning!

 

This month’s Research Drop analyzed 1,800 global employees from the Microsoft People Science Agentic Teaming & Trust Survey from July 2025.

 

1Liebscher, A., Rapuano, K., & Hancock, J. T. (September 22, 2025). AI-Generated “Workslop” is Destroying Productivity. Harvard Business Review.

2Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460-475.

3Niederhoffer, K., Robichaux, A. & Hancock, J. T. (January 16, 2026). Why People Create AI “Workslop” and How to Stop It. Harvard Business Review.

4KPMG. (2025). Trust, attitudes, and use of artificial intelligence: A global study 2025.
