Many AI agents are designed to automate tasks and recommend actions, with employees remaining accountable for decisions. Employees therefore need a level of trust in agentic AI to feel confident that an agent's choices will be the right ones. We're starting to see that exposure to agents is related to trust in agents: organizations further along in their agentic AI transformation report higher levels of trust in AI agents than those still in the exploration phase.1 The question is whether this trust is built on critical thinking or blind faith. We found that 60% of employees said they are confident enough in AI output that they don't need to check its accuracy. This raises concerns about overreliance, which are especially relevant given that 56% of leaders report concerns about the data quality of AI agents.2
Chapter 3 of our 2025 Agentic Teaming & Trust Research Report dives into building trust on a foundation of metacognition, where employees reflect on how they use AI, learning when and how to best leverage the technology. In this chapter, we explore how to combat overreliance on AI through trust, drawing on key organizational factors such as leadership role modeling, reliable systems, and peer learning and advocacy.
Cultivating enduring trust, rather than blind trust, is a dynamic process.3 It starts with the employee calculating that using AI is worth the risk, moves to learning to trust the competence and skill of the AI, and culminates in a sustained relationship with AI built on trust and history. This process creates a level of trust in which employees reflect on when and how to rely on AI systems, including verifying outputs and understanding limitations.
One way to influence trust is through leader involvement. When leaders are deeply engaged in the implementation of agentic AI and are transparent about their own experience adopting agents, they build employee confidence and critical thinking as employees adopt agentic AI themselves. Between employees who say their leaders role model effective AI use and those whose leaders don't, we see up to a 30-percentage-point difference not only in how much employees trust agentic AI, but also in how much they reflect on their AI usage behaviors.
And once trust is built, it creates a contagion effect: a culture of role modeling, openness, and experimentation leads to greater value from integrating agentic AI. Employees with high trust in AI agents are 1.4x as likely to report realized individual and team value from agentic AI, such as improved personal work quality and enhanced team-based knowledge sharing.
As organizations move from exploration to integration, the difference between blind trust and enduring trust becomes critical. By fostering environments where employees reflect on their AI usage and learn from each other, organizations can unlock a reinforcing cycle of trust, impact, and innovation. Agentic AI thrives not just on technical capability, but on the human capacity to think critically, share transparently, and grow together.
Want to explore the full story?
Download the PDF - Chapter 3: Trust 360
Catch up on previous chapters from this report:
Download the PDF - Chapter 1: Ready for Agents
Download the PDF - Chapter 2: The Multiplier Effect
The Agentic Teaming & Trust Study was commissioned by Microsoft and conducted by the Microsoft People Science team using an online panel vendor, surveying 1,800 full-time employees across nine markets between June 11, 2025 and July 7, 2025. The survey was 12 minutes in length and conducted online. Global results were aggregated across all responses to provide a total or average. Each sample was representative of business leaders across regions, ages, and industries (i.e., Construction; Financial & Professional Services; Retail, Food & Beverage; Healthcare; Media & Entertainment; Technology; Transportation, Travel & Hospitality). Each sample applied specific parameters on company size (organizations with 1,000+ employees) and job level (business leaders/business decision makers in mid- to upper-level roles such as C-level executive, VP or director, and manager). The overall sampling error rate is 2.31 percent at the 95 percent level of confidence. Markets surveyed include Brazil, China, France, Germany, India, Japan, Mexico, the United Kingdom, and the United States. Findings represent aggregated responses and may not reflect all organizations or industries.
References